Vendor onboarding when the AI does the first draft
An AI-assisted Smart Start workflow for third-party risk — built around the insight that the assessment template, not the product around it, is what risk teams actually need help with.
“Risk teams do not run on assessments. They run on the templates the assessments are built from. Designing for the template is designing for the actual product.”
The template is the product
Third-party risk programs are made of assessment templates — the questionnaires that decide whether a vendor passes. These templates carry the institutional memory of a risk function: which regulations matter, which vendor types get which scrutiny, which questions an analyst trusts and which they have learned to ignore. Onboarding a new vendor is, in practice, a template configuration project.
That insight reframed what we were building. Smart Start was not an onboarding feature with a template generator inside it. The template generator was the product. Everything else was scaffolding around it.
If you design the workflow around the wrong unit of value, the AI ends up assisting the wrong thing.
Several iterations, one essential argument
The work went through several rounds of prototyping. Early versions explored a step-by-step wizard structure — defensible, predictable, and the kind of thing risk software has always done. It tested badly. Analysts felt the AI was being held back by the form.
Later iterations leaned the other way, towards a single conversational surface with the AI generating the template directly. That tested well for first-time use and badly for returning users, who wanted to skip ahead and operate at the speed of someone who already knew what they were doing.
What worked, eventually, was a hybrid. A persistent context panel for what the AI knows about the vendor. A primary canvas for the generated template. An ambient input that accepts both natural language and structured edits. Different users used the same surface differently — and that turned out to be the design's actual point, not its compromise.
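To make that three-part structure concrete, here is a minimal sketch of how the surface's shared state might be modeled. The names (VendorContext, AmbientCommand, SmartStartSurface) are illustrative assumptions, not the product's actual types.

```typescript
// Hypothetical sketch of the hybrid surface's state: a persistent vendor context,
// the generated template on the main canvas, and one ambient input that accepts
// either natural language or a direct structured edit.

interface VendorContext {
  vendorName: string;
  vendorType: string;
  knownFacts: string[];        // what the AI currently knows about the vendor
}

type AmbientCommand =
  | { kind: "natural_language"; prompt: string }                       // e.g. "add data-residency questions"
  | { kind: "structured_edit"; questionId: string; newText: string };  // skip-ahead edit for returning users

interface SmartStartSurface {
  context: VendorContext;          // persistent context panel
  templateQuestions: string[];     // primary canvas: the generated questionnaire
  pendingCommand?: AmbientCommand; // ambient input, used differently by new and returning users
}
```

The same AmbientCommand channel serves both audiences: first-time users lean on natural language, returning users issue structured edits directly.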
Two things that mattered more than the layout
Two design decisions ended up doing more work than the visual structure. First, every generated question needed to show its working — traceable to a regulation, a peer template, or an explicit input from the analyst. Without that, the analyst could not trust the template enough to edit it; they would discard it and start over.
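As a sketch of what "show its working" could mean in data terms, each generated question might carry an explicit provenance record. The type names below (GeneratedQuestion, QuestionSource) are hypothetical, not the shipped schema.

```typescript
// Hypothetical sketch: every generated question carries a provenance record
// so the analyst can see why it was proposed before deciding to keep or edit it.

type QuestionSource =
  | { kind: "regulation"; citation: string }        // a clause in a named regulation
  | { kind: "peer_template"; templateId: string }   // an existing template the program already trusts
  | { kind: "analyst_input"; statement: string };   // something the analyst told the AI explicitly

interface GeneratedQuestion {
  id: string;
  text: string;
  source: QuestionSource;   // no question ships without a traceable origin
}
```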
Second, when the analyst flagged something as wrong, the system needed to capture why, not just that. A discard button does not improve the next generation. A 'this is wrong because…' field does. Designing the feedback loop turned out to be most of the AI design work.
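A minimal sketch of that feedback loop, assuming hypothetical field names rather than the real schema: a rejection captures a reason and a note, and the next generation pass consumes them instead of starting cold.

```typescript
// Hypothetical sketch: rejecting a question captures a reason, not just the act of discarding.

type RejectionReason =
  | "not_applicable_to_vendor_type"
  | "duplicates_existing_question"
  | "wrong_regulatory_scope"
  | "other";

interface QuestionFeedback {
  questionId: string;
  verdict: "kept" | "edited" | "rejected";
  reason?: RejectionReason;   // required when verdict is "rejected"
  note?: string;              // the analyst's free-text "this is wrong because…"
}

// The next generation pass takes prior feedback as input rather than starting over.
function nextGenerationContext(feedback: QuestionFeedback[]): string[] {
  return feedback
    .filter((f) => f.verdict === "rejected" && f.note !== undefined)
    .map((f) => `Avoid: ${f.note ?? ""}`);
}
```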
The work is in flight. The hybrid pattern is influencing how the team thinks about AI-assisted workflow surfaces in TPRM, and several of the underlying ideas — traceable generation, structured feedback, the context-canvas-input layout — are showing up in adjacent surfaces. Not all of it has shipped. The honest status is that this is a body of design thinking that is still finding its way into the product, not a closed project with a clean before-and-after.
- Multiple iterations tested with internal users
- Hybrid layout pattern influencing adjacent AI-assisted surfaces in TPRM
- Established traceable generation and structured feedback as core requirements