The publication of the UK Government’s AI Assurance Roadmap by DSIT marks a critical moment in the UK’s journey to harness AI responsibly. For me, it’s a moment that feels strikingly familiar.
In the early days of Information Assurance, as a CLAS consultant and CHECK Team Member, I was fortunate to be part of the cross-industry collaboration driven by the Cabinet Office and CESG (now part of the NCSC) and bodies like the Information Assurance Collaboration Working Group (IACWG). At that time, the challenge was to secure the information that underpinned the Transformational Government initiative. The stakes were high, particularly around classified data; the risks were not consistently understood; and the technology, and user demand for it, were evolving faster than traditional governance could keep pace.
What made the difference back then was not just the frameworks and standards, but the way in which government, regulators, and industry worked together. Information Assurance became a shared endeavour, cutting across the 'stove-pipes' of government. By building a common language, agreed approaches, and collective trust, the UK Government created an environment where organisations could move forward with confidence, knowing that risks were being addressed in a proportionate and consistent way. This challenge was in no way unique to the UK: I spent two years in Australia, working with the Federal and State governments on exactly the same challenges.
Fast forward to today, and I see the same opportunity with AI. The AI Assurance Roadmap is about more than risk management. Like Information Assurance before it, it is about laying the foundations for trust. And trust, in turn, is what enables innovation.
Too often, assurance is viewed as a constraint or a barrier to progress. In my experience, the opposite is true. Effective assurance doesn't hold innovation back; it provides the guardrails that allow it to accelerate safely.
Organisations that embrace assurance early are the ones that unlock real advantage: they can innovate at pace, they can demonstrate accountability to stakeholders, and they can avoid the missteps that erode trust.
The AI Assurance Roadmap from DSIT recognises this. It signals the UK's intent not just to adopt AI, but to do so responsibly, inclusively, and at scale. By defining the steps we need to take – from developing standards, to shaping assurance techniques, to enabling adoption across sectors – DSIT is providing a clear direction of travel, and this is to be applauded.
The parallels with Information Assurance are important here. Just as the Cabinet Office and CESG convened government, industry, and academia to tackle shared information assurance challenges, the AI Assurance Roadmap will only succeed through collaboration. No single organisation has all the answers. By working together, we can ensure that assurance is practical, proportionate, and supportive of innovation, rather than an afterthought.
The UK has a real opportunity to lead globally in this space. If we can show that AI can be adopted safely and responsibly, underpinned by trusted assurance frameworks, we will attract investment, talent, and international credibility.
For me, the lesson from Information Assurance is clear: when we build trust, we enable innovation. The UK led the world with Information Assurance in the Information Age, and the AI Assurance Roadmap from DSIT is the next chapter in that story for an AI world – and it’s one we should all take part in writing.