In summary:
- How design and research help teams stay focused on real needs, keep people visible, and think beyond short-term delivery in AI projects.
- An explanation of how early design and research reduce risk, cost, and disruption by shaping better decisions before paths lock in.
- Five practical tools – speculative design, problem framing, inclusive research, real-world testing, and pre-mortems – that teams can use to make clearer, safer decisions in fast-moving AI work.
- Why responsible AI delivery depends on cross-disciplinary collaboration, shared responsibility, and deliberate choices about how work is done.
The first instalment of this two-part series, Design, risk and the reality of AI delivery, explored how AI work can drift when pace accelerates and pressure builds. Drawing on real-world examples where technology moved faster than the thinking around it, we examined the conditions that allow harm to emerge through ordinary decisions made under strain.
This second blog focuses on what helps teams avoid those blind spots. It looks at the practical value of design and research in AI work and introduces a set of simple tools that help teams stay anchored to real needs, keep people visible, and surface risk early – without slowing delivery to a halt.
The value of design and research in AI
Design and research play a critical role in helping teams avoid the patterns that cause problems in AI work. When they are built in from the start, rather than treated as optional extras that can be dropped under pressure, they create the space needed to think clearly, stay aligned, and make better decisions as pace increases.
Keeping the focus on needs
At their core, design and research help teams keep the focus on real needs. They create space to start with the problem before settling on a solution. They clarify what a service must actually achieve, so the work isn’t shaped around whatever the technology happens to be capable of.
They also help teams see when an AI approach genuinely adds value – and when a simpler, safer option would serve people better.
Keeping people visible
Just as importantly, design and research keep people visible. They keep real users, real contexts, and real constraints in the room. When delivery pressure builds, this is often the first thing to thin out. And it’s exactly what stops teams seeing how an AI system behaves once it meets people dealing with stress, uncertainty, low confidence, or limited access.
Keeping people visible closes the quiet gaps where problems tend to grow later.
Keeping an eye on the future
Design and research also help teams keep an eye on the future. They make it easier to look beyond the immediate deadline and spot risks before they become embedded. They surface how choices made now might behave at scale, how assumptions can harden into system behaviour, and where impacts are likely to land once a service is live.
Keeping risks to a minimum
And beyond all of that, design and research play a critical role in de-risking investment. Because this work happens early – before decisions set and paths lock in – it reduces the cost and disruption of change later. It helps teams explore options while the work is still flexible and avoid the expensive corrections that come from building the wrong thing quickly.
This is why design and research matter in AI. They give teams the clarity, insight, and foresight needed to make better decisions under pressure – and they reduce the risk of sleepwalking into problems no one ever intended to create.
Five tools for better decisions
It’s clear that design and research have a role in AI work, so the question becomes: how do we keep that role alive in fast-moving delivery?
The following tools are simple, practical ways to keep work anchored in real needs, keep people visible, and surface risks early. They aren’t heavy processes. They sit inside the flow of delivery and help teams think before decisions harden.
To make them concrete, imagine a council under pressure to process a high volume of grant applications. It’s a realistic case, and it shows how each tool helps at the moment it matters.
1. Speculative design and provocations
Speculative design is used early, before a project takes shape. It helps teams imagine possible futures through simple provocations – rough sketches, scenarios, props, or short prototypes.
These aren’t predictions. They’re “what if” prompts that make emerging ideas tangible enough to discuss. By pushing ideas slightly beyond the expected path, they surface assumptions and questions that don’t appear in standard planning.
In the grant example, the team creates a rough rejection letter generated by an automated system. It’s clipped, generic, hard to interpret, and adds pressure by suggesting reapplication. Reading it together sparks real questions: what signals would the model rely on, how would applicants understand the decision, where might bias creep in, and what happens to people with low digital confidence?
That single artefact opens a deeper conversation than a slide deck ever could, and it does so before decisions become expensive to change.
2. The problem-framing canvas
A problem-framing canvas is a simple one-page tool that slows the work just enough to clarify what’s being solved, who is affected, and what “better” looks like.
It asks basic questions – but that’s why it works. It pulls teams back to first principles when pressure is high. It keeps focus on needs instead of technology, keeps people visible, and pushes attention beyond short-term activity.
In the grants project, completing the canvas clarifies that staff are overwhelmed and residents face long waits. It identifies who is affected and what outcomes matter. The work shifts from “use AI to speed this up” to a clearer, shared understanding of the real problem.
3. Inclusive research and personas
Inclusive research brings in the people who carry the most risk if things go wrong – those with the least margin for error and the fewest safety nets.
It adapts to people’s confidence, time, access, and circumstances. Done well, it reveals pressures and fragile points that never appear in dashboards.
Capturing this insight in a small set of edge-case personas creates practical anchors. These personas focus on needs and constraints at the edges. If the service works for them, it usually works for everyone.
In the grant example, personas might include someone applying on a shared device, a supporter juggling competing demands, or a business owner intimidated by official language. Each design decision is tested against them, preventing small choices from turning into harm.
4. Testing in real-world contexts
Real-world testing puts work in front of real people, in real contexts, while there’s still time to change it. It starts small and builds over time.
This is where assumptions fall away. A model that looked confident in a demo may struggle with real questions. An automation meant to reduce effort may shift work onto staff. These issues only appear under real conditions.
In the grant scenario, testing reveals that users rely heavily on the AI’s first answer, even when it’s incomplete, and that staff see increased calls from people seeking clarification. None of this means the idea is wrong – it means the reality is clearer now, when the work is still flexible.
5. Pre-mortems to surface risks
A pre-mortem is a short, focused exercise that asks: “If this goes wrong, how is it most likely to fail?”
Run with a cross-disciplinary group, it surfaces risks quickly. The value lies in turning those risks into early signals and simple mitigations while the design is still flexible.
In the grant example, a one-hour pre-mortem identifies risks around outdated guidance, misunderstood eligibility, and staff over-reliance on AI. The team agrees practical steps – clearer language, update processes, staff training, and extra testing – before rollout.
Using the tools together
These tools aren’t expensive or heavy. Most can be done in a few hours. But they reduce risk, save future cost, and protect against painful corrections later.
- Speculative design helps when someone is pushing for AI to be added to a service, or when you need to see how a new approach might change things. It opens up the future before decisions set in.
- The problem-framing canvas comes in early and gets updated as you learn more. It keeps the work anchored to the real need rather than the preferred solution.
- Inclusive research comes in whenever people are affected – especially those most exposed if things go wrong. It keeps the work grounded in real conditions and real constraints.
- Real-world testing starts as soon as there’s something to test. You begin small and keep going. It shows how the work behaves once it meets real life, not just a demo.
- And pre-mortems work early and often. They surface the things that might cause trouble and give you simple ways to address them before they land.
You don’t need all five on every project. But when stakes are high, when AI reshapes a service, or when decisions are about to harden, they create space to think clearly and avoid costly surprises.
Their value lies in how they counter pressure: keeping work tied to real needs, keeping people visible, and surfacing long-term consequences early.
And none of this sits with one discipline alone.
Safe, purposeful AI comes from teams working together. Leaders set the conditions. Designers and researchers keep people visible. Technologists and operational teams bring technical depth and an understanding of real-world complexity. When they work together, blind spots shrink, decisions improve, and responsibility is shared.
Choosing how we move forward
AI will keep accelerating. Expectations will keep rising. The pressure won’t ease.
But there are still choices inside all of that. Organisations can choose to keep people visible. They can choose small, deliberate pauses that make the overall flow steadier. They can choose clarity over assumption, and alignment over noise.
Teams that work this way don’t just build better services. They make better decisions, reduce avoidable risk, and build trust – with users, with colleagues, and with partners.
And they use AI in ways that support people, rather than creating the problems that only become visible later.
Find out more about people-centred AI transformation
This blog was based on the webinar, Shaping the future of AI through design. If you’d like to watch the full webinar, you can access it on demand now.
In the same series, our Director of AI, Becky Davis, held a webinar on Human-centred transformation in the age of AI, where she explored two AI-driven futures: one that empowers people through ethical leadership and innovation, and another that risks harm through lost trust and rising inequality.
Finally, we held a panel discussion about The AI challenge: How to deliver transformation that serves people. In this session we brought together senior leaders from Nationwide, Home Office, Oracle, NHS SBS and Sopra Steria, to explore how organisations are turning AI ambition into reality.
All of this, and more, can be found on our People-centred Hub. Discover a series of expert-led webinars, insightful thought leadership articles, and practical tools – all designed to help leaders deliver people-first change.