Rakesh Isaac
2026
AI-native · Executive · Product · GRC · Prototype

An AI-native prototype for enterprise risk

A working concept for what risk and compliance look like when the manual work is automated, the data is current, and the decisions are faster.

ServiceNow · Advance Risk · AI-native prototype
Lead designer · concept, IA, prototype, narrative
Most of the work inside a risk function is still manual — gathering, reconciling, summarising, escalating. The brief was to show what changes when an AI layer takes that off the table and gives the time back to the decisions that matter.

Where AI actually earns its place in risk

Risk and compliance teams spend a disproportionate share of their day on work that is necessary but not strategic. Pulling reports together. Reconciling data across systems. Drafting summaries. Following up on stale tasks. These are the steps where AI has the clearest, most defensible value — not as a replacement for judgement, but as a way to make sure the data underneath the judgement is current and complete.

The internal opportunity for Advance Risk was to make that case with a working prototype, not a deck. Something concrete enough to align leadership on direction, credible enough to use in customer conversations, and detailed enough to give the engineering and design teams something to build against.

The strongest argument for AI in enterprise software is not 'it does more.' It is 'it removes the work that was never the point.'

An end-to-end flow, designed inside the production language

I designed the prototype as a continuous flow rather than a deck of disconnected screens — each surface led into the next, so the experience could be walked through from start to finish. The set covered the moments where AI assistance changes the work most clearly: posture and exposure overviews, scenario thinking, control and audit readiness, and the conversational surface that ties them together.

The visual language was extracted live from production Figma files using a custom design-system skill I built, so the prototype reads as continuous with the actual product rather than a separate marketing concept. That made it usable across audiences — executive alignment in one room, design and engineering reference in another, customer-facing demo in a third.

Three ideas that kept recurring

A few patterns surfaced repeatedly and ended up shaping the design across every screen. First, charts and visualisations needed conversational entry points shaped to their grain — not a generic chat icon floating somewhere, but typed affordances that let the user pull on a thread directly from where they were looking. Second, every AI response had to carry its working — citations linking back to the source records, the same evidentiary pattern an auditor would expect. Third, recommended actions needed a reversible trial step before commit, so the user always had the last word.

Together, these turned the AI layer from a sidebar into something more structural — a way of working with the product, not a feature attached to it.

These patterns have stayed with me. I now use them as a checklist whenever I am designing an AI-native enterprise surface.

The prototype became reference material for several follow-on projects across the GRC area — both as a visual direction and as a way of thinking about where AI assistance belongs in the workflow. For me personally, it sharpened a working thesis I have continued to develop: that AI-native design in enterprise products is mostly about removing the work that was never the point, and being precise about the small set of moments where the agent earns its place.

  • Used as reference material across multiple follow-on projects
  • Aligned executive, design, and customer-facing audiences on direction
  • Established three recurring AI-native interaction patterns