The first instalment of a two-part series based on the webinar Shaping the Future of AI Through Design, exploring the role design and research play in keeping AI work aligned, responsible, and grounded in real needs.
In summary:
- AI has moved from curiosity to urgency, creating pressure to act fast, show progress, and prioritise output over careful thinking.
- When pace narrows thinking, systems can drift from real human needs, leading to harm at scale, as seen in cases like the Dutch childcare benefits scandal and flawed automated news summaries.
- Embedding design and research early helps teams stay grounded, keep services safe and fair, and prevent small rushed decisions from becoming large systemic problems.
Over the past couple of years, something has shifted.
AI has moved from being an interesting possibility to something that shows up in almost every conversation. It’s there in meetings with clients and partners, in conference agendas, in policy discussions and leadership briefings. It appears in vendor demos, roadmap conversations, and in those casual moments where someone says, “We probably need to do something with this.”
A lot of that energy is genuine. AI really does offer new possibilities. It can speed up routine work, help teams handle complexity that used to slow things down, and open up new ways to deliver services. Many organisations see reach, efficiency, and scale where there used to be friction. The sense of opportunity is real, and it’s understandable.
But that opportunity has also created a new environment.
It’s an environment where expectations rise quickly, timelines compress, and visible progress becomes a signal of competence. Teams feel pressure to move fast, even when the path isn’t yet clear. Leaders feel pressure to act, simply because others are acting.
In that kind of environment, the thinking space narrows. Not dramatically, and not because anyone intends it to. It narrows just enough that the human side of a service starts to slip to the edges. It isn’t consciously deprioritised; it’s simply overshadowed by pace, noise, and urgency.
This is the context that now surrounds much AI work and shapes the choices we make. In that environment, the risk isn’t usually technical failure; it’s misalignment.
Why design matters in AI
We know that AI gives organisations real power to improve services. But that power comes with pressure. And if that pressure isn’t handled well, the work can drift away from the things that matter most for people. It becomes easy to focus on output and miss the moments that keep services safe, fair, and usable. These aren’t minor details. They’re the foundations of whether something actually works in the real world.
Design and research help bring that space back.
They steady the work. They keep attention on real needs. And they help ensure decisions hold up in practice, not just on paper.
When this is done well, AI can genuinely strengthen services. When it isn’t, the risks become real, and people feel the consequences. The aim isn’t to slow everything down, but to give teams what they need to stay aligned as the pace increases.
Automated systems shaping outcomes
To make this concrete, it helps to look at what happens when technology moves faster than the thinking around it.
A few years ago in the Netherlands, the tax authority introduced algorithms to spot possible childcare benefit fraud and flag “high-risk” claims. On paper, the idea seemed sensible: use data to focus checks where they were most needed.
In practice, the system relied on crude risk factors. Families from migrant backgrounds were far more likely to be flagged, often without evidence of wrongdoing. Once flagged, people were treated as if they had committed fraud and were told to repay large sums of money. Many were already under pressure, and these decisions pushed them into debt, job loss, and housing problems. In some cases, families even lost custody of their children.
There was no single dramatic failure. The harm emerged from ordinary decisions made under pressure: adopting a tool that looked efficient, assuming the data would hold up, and scaling the system before understanding how it would behave in real lives.
The lesson isn’t that AI is inherently harmful. It’s that when the environment around it is rushed or narrow, people pay the price – often only once the system is live, when it’s hardest to put things right.
When automated summaries misrepresent reality
A very different example shows the same pattern.
At the end of last year, Apple introduced an AI feature that generated short news alert summaries. The aim was to turn long or complex stories into quick, helpful notifications. It sounded useful. It promised efficiency. And it fit neatly into the wider push to show progress on AI.
But once the feature went live, problems appeared almost immediately. Users began receiving alerts, branded with trusted news logos, that contained statements the original articles hadn’t made.
One alert wrongly implied that a man under arrest had shot himself. Another suggested a well-known athlete had come out as gay, when the article was actually about someone else. These weren’t obscure edge cases. They were routine alerts, pushed out at scale.
The issue wasn’t a single technical error. It was that the feature moved from idea to launch without enough attention to how it would behave in real conditions – how people read alerts, how tone changes meaning, how sensitive topics land, and what happens when an automated summary becomes the version of the story most people see.
Again, the pattern is familiar: strong technical capability, pressure to show progress, and just enough narrowing of the thinking space that the risks only became visible after launch.
The conditions that create harm
When you step back from these examples, what stands out isn’t the AI itself. It’s the conditions around it.
Most teams recognise these conditions immediately:
- shifting priorities,
- tight deadlines,
- ambitious targets,
- complex politics,
- limited resources,
- and a drive to show visible progress.
None of this is unusual. It’s normal delivery reality. AI just turns the volume up.
AI moves fast, acts at scale, and embeds decisions deep into services. When something goes wrong, it isn’t one interaction that drifts off course – it’s thousands. Small choices amplify. Quiet assumptions become system behaviour. Harms travel further before anyone sees them.
As pace increases, timelines shorten, expectations rise, and decisions feel urgent even when the picture is still forming. The thinking space gets thin. People slip from view. Assumptions settle quietly. Decisions that look tidy on paper behave very differently in real life.
Negative impacts rarely come from dramatic choices. They come from ordinary decisions made under pressure. They show up as small frictions, people excluded, systems behaving unpredictably, and responsibilities quietly pushed onto users who were never meant to carry them.
Spotting the warning signs
If the consequences are familiar, the causes usually are too.
In fast-moving conditions, certain patterns start to appear. They aren’t dramatic, and they aren’t unique to any one project. They’re habits that creep in as attention narrows and pressure builds. For some organisations, they’re simply the default way of working.
They often make sense in the moment, which is why they pass without comment. But they shape work in ways that cause problems later, especially when AI is involved.
Think of them as early warning signs – signals that the work may be drifting away from real needs, or that decisions are being made on shaky ground.
1. Leading with technology
The first pattern is choosing an AI solution or technology before understanding whether it’s needed or the right choice.
A system looks impressive. A vendor demo is polished. Another organisation announces a new feature. There’s a sense that now is the moment to act.
Before the problem is clearly understood, the work tilts toward the solution. The question shifts from “What’s the issue?” to “How can we use this?” It feels like progress, but the work is now anchored to technology rather than need.
When the solution comes first, the problem gets shaped to fit it. Teams end up with over-engineered systems and services that look impressive but don’t meet real needs. And once that anchor is set, it becomes much harder to ask the simplest question: Is this the right way to solve it at all?
2. Losing sight of people
The second pattern is that people fade from view.
Under pressure, the activities that connect teams to real users are often the first to shrink: research sessions, inclusive approaches, conversations with frontline staff, small usability checks. Technical tasks survive because they feel non-negotiable. Human insight starts to look optional.
But AI behaves very differently when it meets people who are under stress, dealing with uncertainty, or low in confidence. If those realities aren’t understood early, they only become visible once the system is live – when change is hardest.
3. Short-term focus
The third pattern is a shift toward short-term thinking.
The focus becomes getting something working now. Longer-term impacts slide to the edge. Small delivery decisions that seem harmless grow once the system operates at scale. Unintended consequences appear far from where the decisions were made, and by the time they surface, they’re costly to unpick.
This is often the point where teams say, “We didn’t see that coming.”
Embedding design before problems scale
To move away from the patterns that cause problems in AI work, teams need to embed design and research from the beginning, not treat them as an optional extra that can be dropped as the pressure rises.
In the second instalment of this two-part blog series, Minimising AI risk through design, we look at the value of design and research in AI and explore a practical set of tools and practices that teams can use to ensure their AI work is aligned, responsible, and grounded in real needs.