Let me take you to a day in my life, ten years from now. It’s 2035, and in just one short decade the world feels radically different.
Not because we’ve become more robotic, but because we’ve become more human. Technology doesn’t dominate life anymore; it enables it. Quietly and seamlessly, it supports a society we have consciously chosen to create.
A glimpse of my day in 2035
I start my morning, as I always do, with coffee – and a soft glow on my kitchen wall. Overnight, my personal assistant has summarised global headlines, tuned to my values, not my clicks, and has already mapped out my day.
The working day begins at home, in a meeting with global policymakers to explore how AI can safely scale in emerging economies. Later, I mentor graduates across Europe through a holographic learning space that feels as natural as sitting across a table.
By lunchtime, I’m out helping to solve challenges for our ageing population alongside community leaders. I don’t think about travel or logistics anymore; my team of digital agents takes care of that. The whole day is about connection with people. We no longer struggle with language barriers or time zones, and there’s no friction in collaboration. We have all the information and context we need to get straight to solving real-world problems. I don’t even book taxis anymore; my personal mobility agent does it for me: reading my calendar, checking traffic, syncing with the vehicle, paying, and rerouting if I’m running late.
Now we have time again to build relationships, to debate and to think. And, of course, AI elevates our debates, our innovation and our action.
We once thought AI would just help us process big data, but it’s done much more than that. AI sees the patterns, the nuance, the interrelationships - the tiny details we could never notice, because no human mind can hold that much data at once.
I spend my days deploying AI technology that closes gaps, reduces delays, and redirects funding in real time to the people and places that need it most. Need is spotted early and met before crisis hits.
It’s not just my day that has changed
Across town, my daughter, who is a civil servant, walks into a meeting where her AI co-pilot has already read and analysed every citizen submission from the night before. Hundreds of thousands of voices, translated into clear, actionable insight she can share and act on.
Even my local café runs on quiet intelligence. Its kitchen agent analyses supplier availability and community habits to plan each day’s menu, ensuring it’s fresh, seasonal and waste-free. By the time the doors open, deliveries have arrived and the blackboard menu has rewritten itself.
Technology is embedded in every element of life, but we trust it. Because it aligns us towards something better: a fairer, more equitable, and happier world. From government to citizen, every algorithm is explainable. Every decision is contestable. Communities are part of the system, not excluded by it.
When I look around me at broader society, I see what once felt impossible…
Teachers are freed up to teach. Carers are supported to care. Waiting lists have all but disappeared.
Yes, there was job displacement at first. But we reskilled people, invested, and focused on meaningful work. The reality now is that most of us have more time for what really matters to us.
The barriers that once held people back, like rigid systems, one-size-fits-all policies and invisible bias, have been dismantled. The world has made space for people to live fully, with dignity and opportunity.
We lead differently too, with empathy, accountability, and a deeper sense of impact. What we measure has changed: not throughput or margin, but inclusion, belonging, and contribution.
This is what purpose-led AI leadership looks like. Putting people and planet at the heart of performance and building trust.
But this isn’t the AI story we were told in 2025. It’s better, because we didn’t just scale AI fast; we scaled it responsibly.
It started with a different kind of leadership - one that understood that technology alone doesn’t transform the world. It’s vision, trust and people that do.
The other possible future
It’s still 2035. Still efficient, AI-enabled, modern. But something’s off. In this version, we moved too fast. We rolled out too wide and we lost sight of our North Star.
We let algorithms make decisions without the checks and balances we promised ourselves we’d build in. Governance and legislation came too late and we got caught up in the hype. Agents built on agents – orchestrating processes across services, then systems, then whole networks.
Everything connected to everything. But no one truly knew what was connecting to what, or where the data was flowing. Inside our organisations, leaders didn’t understand the power of AI technology. Technologists didn’t fully grasp the human impact of what they were creating. Nobody challenged the data that fed the algorithms. We just wanted enough of it to get accurate outputs.
What started with good intentions shifted. The algorithms amplified the culture. We all began by using AI to elevate human contribution, with “humans in the loop.” But as trust in AI grew, the focus became cost efficiency – not out of greed or profit, but out of fear.
Fear that if we didn’t keep up with the noise, the pace, the promise of automation, we’d be left behind, commercially unviable and written out of the market.
And so AI quietly became the operating model. It became the workforce, the decision-maker. Somewhere along the way, we stopped noticing who was really in control.
The ‘offness’ was subtle at first. Nobody really noticed.
A patient’s life-saving treatment delayed because an algorithm predicted a low likelihood of success.
A student’s personalised learning path limited by an old dataset that equated postcode with potential.
An automated scheduling system quietly trimming human breaks to maximise utilisation.
This is the rise of the AI bureaucrats – systems making life-altering decisions with no way for people to understand, to challenge, or to change the outcome. But we didn’t get here because of bad intentions, we got here because no one stopped to ask: “Is this really the world we meant to build?”
The signs in 2025
Back to today, and we’ve already seen the shadows of this future. Not because the systems were malicious, but because they went unquestioned.
In the Netherlands, an algorithm was used to detect childcare benefit fraud. It sounded fair, data-driven, and impartial. But it used “foreign-sounding names” and dual nationality as risk factors. Thousands of innocent families, mostly from migrant backgrounds, were falsely accused and forced to repay money they didn’t owe. Some lost homes, jobs and even custody of their children.
Amazon built an AI to automate recruitment – a tool trained on historic hiring data. But because most of that data came from men, the algorithm quietly learned that male applicants were preferable. CVs mentioning the word “women’s” were marked down. No one programmed it to discriminate. It simply reflected the bias already embedded in history. The project was scrapped, but it showed how easily our past can become encoded into our future if we don’t question what our systems are learning.
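To make that mechanism concrete, here is a deliberately tiny sketch of how it happens. The data and model are hypothetical (this is not Amazon’s system); the point is that a classifier trained on a skewed historical record learns to penalise a word that merely correlates with past rejection.

```python
# Hypothetical toy example: a model trained on biased historical
# hiring outcomes learns to penalise the token "women" on its own.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical CVs and outcomes (1 = hired, 0 = rejected).
# The record skews male, so "women's" co-occurs with rejection.
cvs = [
    "captain of chess club, python developer",
    "python developer, led engineering team",
    "captain of women's chess club, python developer",
    "founder of women's coding society, python developer",
    "led engineering team, java developer",
    "member of women's society, java developer",
]
hired = [1, 1, 0, 0, 1, 0]

vectoriser = CountVectorizer()  # tokenises "women's" to "women"
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the historical
# bias has been encoded as if it were a rule, though no one wrote one.
idx = vectoriser.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

No single line of this code discriminates; the discrimination arrives with the training data. That is why questioning what our systems are learning matters as much as questioning what they are programmed to do.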
In truth, these aren’t AI failures. They are leadership failures. Failures of alignment. Failures of oversight. Failures of courage.
And that’s why what we do next matters so much.
Building the responsible future
As we race to embed algorithms and agents deep into our organisations, we must do it with intention and with our eyes wide open. If we want to reach the 2035 where technology amplifies humanity, we must move beyond awareness into alignment.
To do so, we’ve been working on an AI roadmap. A blueprint for leaders who want to build AI responsibly, ethically, and at scale.
Because AI is not “just another technology”. It doesn’t slot neatly into existing governance frameworks or digital strategies. It behaves fundamentally differently from the tools we’ve used before. Traditional technologies execute instructions. AI systems learn patterns.
And that single shift, from rules to reasoning, changes everything about how we must think, lead, and govern.
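A minimal, hypothetical contrast makes the shift visible. In the first function, a human writes the rule and can read it back; in the second, the “rule” is a decision boundary inferred from examples, living in learned parameters rather than in legible code (the fraud scenario and numbers are illustrative only):

```python
# Traditional software: the rule is written by a person and executed
# exactly as written. You can audit it simply by reading it.
def flag_transaction(amount_thousands: float) -> bool:
    return amount_thousands > 10  # explicit, inspectable threshold

# AI: the system infers its own threshold from past examples. Its
# behaviour must be tested and monitored; it cannot simply be read.
from sklearn.linear_model import LogisticRegression

past_amounts = [[0.2], [0.9], [12.0], [18.5], [0.4], [25.0]]  # £000s
was_fraud = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(past_amounts, was_fraud)
print(model.predict([[11.0]]))  # the boundary was learned, not written
```

That is the governance problem in miniature: the first system’s behaviour is fixed the day it ships, while the second’s depends on the data it was fed, which is why it needs oversight mechanisms of its own.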
The six pillars of AI maturity
Over the past year, we’ve built a framework that enables you to bring AI technology into your organisation responsibly and purposefully.
Leading with AI isn’t just about deploying new tools; it’s about transforming how organisations think, decide, and behave. Too many strategies still start with technology, when the real levers of success sit elsewhere: in purpose, in governance, in data, in culture, and in the confidence of your people to use AI well.
If we focus only on the tech, we risk building powerful AI that has no direction.
All of the pillars need to be considered to create responsible and purposeful AI adoption (a simple scoring sketch follows the list):
Strategy: captures the extent to which AI is aligned with and driven by organisational goals. This includes leadership commitment, investment planning, and long-term vision.
Governance: ensures responsible, ethical, transparent, and legally compliant AI practices, including data protection, algorithmic accountability, and risk management.
Tech and infrastructure: covers the robustness, security and scalability of the platforms, tools, and cloud infrastructure needed to develop, integrate, deploy, and monitor AI solutions.
Data: refers to the availability, quality, governance, and accessibility of data required to train and support AI systems.
Culture: encompasses leadership mindset, openness to innovation, organisation-wide engagement with AI adoption, and the agility to accelerate change and experimentation.
Expertise: reflects internal capabilities, access to skilled talent, external partnerships, and the ability to upskill teams effectively.
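To make the framework tangible, here is a minimal sketch of how the six pillars might be held as a simple scorecard. The scale and scores are illustrative assumptions, not part of the framework itself; the habit it encodes is looking for the weakest pillar rather than the average:

```python
from dataclasses import dataclass

@dataclass
class AIMaturityScorecard:
    """Hypothetical scorecard: each pillar rated 1 (nascent) to 5 (mature)."""
    strategy: int
    governance: int
    tech_and_infrastructure: int
    data: int
    culture: int
    expertise: int

    def weakest_pillar(self) -> tuple[str, int]:
        # Maturity is capped by the weakest pillar, not lifted by the strongest.
        scores = vars(self)
        name = min(scores, key=scores.get)
        return name, scores[name]

# Strong technology, weak governance: the classic imbalance behind a
# proof of concept that never makes it into live service.
card = AIMaturityScorecard(strategy=4, governance=1,
                           tech_and_infrastructure=5, data=3,
                           culture=3, expertise=4)
print("weakest pillar:", card.weakest_pillar())
```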
These six pillars aren’t independent; they’re interdependent. Your strategy sets your purpose, and governance makes sure it’s delivered safely. Your technology makes innovation possible, but only if your data is trusted and your culture is ready to use it. And none of it is sustainable without the right expertise – people who can bridge the gap between ambition and action.
When one pillar is weak, you really see the impact – whether that’s a proof of concept that never makes it live, or no return on investment from your Gen AI roll-out.
We know that a brilliant strategy without governance creates risk, and that implementing technology without transforming culture creates resistance. AI maturity is about balance: making sure that vision, ethics, systems, data, people and culture grow together.
Because only when they do can AI truly drive the value we are all hoping for.
Beyond the AI hype: putting people first
True AI maturity isn’t measured by how much technology you deploy, or how fast you go – it’s measured by how responsibly, inclusively, and effectively you align it to purpose.
The choices we make now will ripple far beyond today and shape our future. Leadership in this moment isn’t about being the fastest. It’s about moving with purpose.
If we lead with ethics, empathy, and long-term thinking, AI becomes the enabler of progress we always hoped it could be – one that doesn’t leave people behind.
To explore how we can build this future together, we are hosting CCMonth 2025, a month-long programme dedicated to exploring how organisations can move beyond the hype and make people the focus of AI transformation.
Through a series of expert-led webinars, a resource hub packed with thought leadership, practical tools and insights, and the launch of our customer and citizen centricity assessment, we’ll help leaders understand what people-first transformation looks like - and how to make it happen.
Here's a quick glance at some of the events we've got lined up: