AI transformation is not the same as digital transformation. But you are not starting from scratch

by Becky Davis - Director of AI, Sopra Steria Next UK

In summary:

  • AI transformation is not simply an extension of digital transformation, because it introduces new behaviours, less predictable risks and leadership decisions about whether the technology should be used at all.
  • Successful AI adoption still relies on familiar foundations, including outcome-led thinking, a deep understanding of how work actually happens and active leadership support to help people change.
  • Responsible AI depends on data quality, trust, governance and people confidence, so organisations must design transparency, judgement and guardrails into AI use from the very start.

AI transformation often feels like the next phase of digital transformation. 

That’s understandable. But it’s important to make a clear distinction between the two because applying the same assumptions and approaches can lead to the wrong decisions and missed opportunities. 

AI transformation is different in important ways. The technology behaves differently, the risks are less predictable and the leadership challenge asks something new. 

At the same time, organisations are not starting from scratch. Much of what made digital transformation successful still applies, and a lot of it will feel familiar to anyone who has led complex change before. It’s still about trying to improve something, and people will still need to make that change work day-to-day. 

So, in practice, leaders now need to hold two perspectives at the same time. There are some familiar anchors here, but there are also critical shifts that need to be understood in order to use AI responsibly and effectively. 

AI transformation vs digital transformation 

What stays the same:

  • Start with the outcome you want to achieve
  • Look properly at the work and processes
  • Recognise that people make change land

What changes with AI:

  • Decide whether AI belongs in a situation at all
  • Recognise that data, trust and governance play a much bigger role
  • Acknowledge that AI challenges people on a deeper level


What still holds true? 

1. Start with the outcome 

The most useful place to begin hasn’t changed. 

Organisations need to be clear about what needs to get better and why. That might be a service that isn’t working well today or a team spending too much time on work that could be simpler. 

From there comes the decision about whether AI has a role to play. 

AI is not a solution on its own; it simply brings a new capability. The value comes from how deliberately that capability is applied, and it works best when tied to something real. 

For example, a team dealing with high volumes of repetitive customer queries might look to AI to generate responses. That can work well, but only if the goal is clear, such as reducing response time or improving consistency, and the human experience is designed properly and responsibly around it. 
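As a loose sketch of what "designing the human experience around it" might mean, the snippet below keeps a person in the loop: the model only drafts, and nothing is sent until a human approves or rewrites it. The function names and flow are hypothetical assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    query: str
    draft: str
    approved: bool = False

def draft_reply(query: str) -> str:
    """Stand-in for a model call; returns a suggested response (illustrative only)."""
    return f"Thanks for getting in touch about '{query}'. Here is what we suggest..."

def handle_query(query: str, review_queue: list[Reply]) -> None:
    """The AI only drafts; nothing goes out without a person approving it."""
    review_queue.append(Reply(query, draft_reply(query)))

def human_review(reply: Reply, edited_text: str | None = None) -> str:
    """The reviewer accepts the draft as-is or replaces it with their own wording."""
    reply.approved = True
    return edited_text if edited_text is not None else reply.draft

queue: list[Reply] = []
handle_query("password reset", queue)
final_response = human_review(queue[0])  # a person signs off before anything is sent
```

The design choice doing the work here is structural: the model has no path to the customer except through a reviewer, so the goal (faster, more consistent responses) is pursued without removing human judgement.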

2. Look properly at the work 

This is a lesson many organisations learned the hard way in digital transformation and it still applies here. 

The technology is rarely the hardest part of change, because the real issue usually sits in the work around it. A process may already be harder than it needs to be, but people have learned to live with it over time and no longer realise it could be simplified. Simply putting AI on top will not solve the underlying problem. 

The value comes from understanding where the work is getting stuck and deciding whether AI is actually the right way to improve it. In some cases, it will be. In others, a more conventional automation approach will do a better job. Or the process itself might need fully redesigning before any technology is added at all. 

For example, a reporting process that takes hours each week might look like a good candidate for AI summarisation. But if the data or inputs are inconsistent and the process itself is unclear, redesigning the workflow may have far more impact than introducing AI too early. 

This is still a familiar leadership task: look properly at the work, be honest about what is not working and then decide what role the technology should play. 

3. People make change land 

The importance of people is another familiar anchor. A strong use case and a promising tool only take organisations so far. People still need to understand what is changing, why it matters and what it means for them in practice. Without that, confidence stays low and adoption tends to stall. 

This becomes clear very quickly. Teams given access to AI tools without context tend to use them cautiously or not at all. Teams that understand where and how AI helps are far more likely to experiment, learn and improve. 

The leadership basics still matter: clear communication, trust and space for questions. Visible support from leaders matters too, especially when people are trying to work out how a new technology fits into day-to-day work. 

None of that changes just because the technology is newer or more powerful. 

What is different with AI? 

This is where the shift becomes more obvious. 

1. Decide whether AI belongs there at all 

One of the biggest differences with AI is that the decision starts earlier. 

With much of digital transformation, the conversation often began with how to improve or digitise a process. AI asks for a different kind of judgement first. Before getting into rollout, design or tooling, organisations need to decide whether AI belongs in that context at all. 

That judgement depends on more than technical capability. It includes whether the technology is suited to the task, whether it can be used safely, whether it will improve the outcome in a meaningful way and whether close human oversight is needed throughout. 

There is a wider call to make as well. Even where AI can be used, it still may not be the right answer. A simpler fix may solve the problem better. Standard automation may be enough. Process redesign may create more value with less risk. 

AI is not a cheap investment either. That is true financially, environmentally and in human terms. These systems rely on significant energy, compute power and infrastructure. They also carry human impact, from the labour behind the technology itself, through to the effect they may have on jobs, skills and inequality over time. 

For example, using AI to deliver a marginal efficiency gain in a low-impact process may look attractive on paper. But when cost, complexity and long-term impact are considered, it may not be justified. 

This is where responsible AI becomes a leadership issue rather than a technical one. The hardest call is often not whether the technology can do something, but whether it should. 

2. Data, trust and governance shape whether AI is safe and useful in practice 

This is a particularly important shift. 

In digital transformation, data was often treated as something to be cleaned up and organised so that systems, reporting and new insight could be built on it. That still matters, but in AI transformation data plays a more active role, because it influences how the system behaves, not just what it has access to. 

The quality of AI outputs depends on the quality of the patterns the system is learning from. If the material and data it draws from are biased, inconsistent or poorly managed, those weaknesses can show up in the results. If the organisation’s knowledge base is patchy, or if historic decisions reflect habits you would not want repeated, AI can absorb that and play it back. In that sense, AI reflects the quality of the thinking, language and judgement that sits underneath it. 

For example, an AI assistant trained on outdated or incomplete policy documents may produce answers that sound confident but are wrong. Without the right controls, that risk is easy to miss. 

Trust becomes more demanding in this environment. Traditional systems were far more deterministic: outputs could be traced back through defined rules, and it was fairly easy to understand why an answer had been produced. 

AI is more jagged than that. The same model can perform well in one context and poorly in another. It can also produce something polished that is still incorrect. 

That changes what trust requires. Confidence on its own isn’t enough. There needs to be enough transparency to understand how the model arrived at its outputs, what it is drawing from and where its limitations sit. That needs to be designed in from the start, not added later as an extra control. 
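As a loose illustration of what "designed in from the start" can look like, the sketch below shows a hypothetical assistant that answers only from a known set of documents and returns its sources and their review dates alongside each answer, refusing when it has nothing to draw on. The document store, field names and staleness threshold are all assumptions for illustration, not a recommended implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceDoc:
    title: str
    last_reviewed: date
    text: str

# Hypothetical policy knowledge base (illustrative only).
KNOWLEDGE_BASE = [
    SourceDoc("Leave policy", date(2024, 11, 1), "staff may carry over five days of leave"),
    SourceDoc("Expenses policy", date(2021, 3, 15), "expense claims must be filed monthly"),
]

STALENESS_LIMIT_DAYS = 365  # assumed threshold for flagging out-of-date sources

@dataclass
class Answer:
    text: str
    sources: list[SourceDoc]
    warnings: list[str]

def answer_query(query: str) -> Answer:
    """Answer only from known sources, surfacing provenance and freshness."""
    words = query.lower().split()
    matches = [d for d in KNOWLEDGE_BASE if any(w in d.text for w in words)]
    if not matches:
        # Refuse rather than guess: a confident answer with no grounding
        # is exactly the failure mode to design out.
        return Answer("No grounded answer available; please ask the policy team.", [], [])
    warnings = [
        f"'{d.title}' was last reviewed on {d.last_reviewed} and may be out of date."
        for d in matches
        if (date.today() - d.last_reviewed).days > STALENESS_LIMIT_DAYS
    ]
    summary = "; ".join(d.text for d in matches)  # stand-in for a model call
    return Answer(summary, matches, warnings)

result = answer_query("when are expense claims due?")
print(result.text, result.warnings)
```

The point is not the toy matching logic but the shape of the output: every answer carries its sources and its caveats, so transparency is a property of the system rather than an afterthought.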

The same is true of governance. Central governance still matters. So do clear principles, standards and guardrails. But good judgement also needs to sit much closer to the work, in the teams who are actually using the technology day-to-day. That is where decisions get made about what AI should be used for, what needs checking and where the red lines are.   

3. AI changes the people challenge more deeply than most digital change did 

AI lands in an environment where people already have strong feelings about it. 

Employees are hearing headlines all the time. They are seeing hype, fear, bold claims and constant predictions about what AI will change. By the time organisations begin talking about their own plans, many people have already formed a view or at least absorbed part of the wider narrative. 

That means the leadership task now sits at two levels: helping people through the local change within the organisation, while also helping them make sense of the global story they are already hearing around AI. 

Trust and confidence sit right in the middle of that. They shape how openly people experiment, how willing they are to question what they see and how ready they feel to admit when they do not understand something yet. 

This also goes beyond adoption. Over time, AI will change tasks, roles and expectations. Job descriptions will shift. New responsibilities will emerge. Some work will reduce and other work will grow in importance.  

For example, roles that once focused on producing content or analysis may shift towards reviewing, validating and improving AI-generated outputs. That changes where judgement sits and what “good” looks like. 

Organisations need to think seriously about this. Where does judgement sit in the future? What new roles are needed? How are people supported through that shift? 

That is one reason AI literacy is so important at leadership level. It is not just about whether people can use AI tools; it is about whether they understand the limits, challenge the outputs, use them responsibly and apply sound judgement. 

A practical way to approach it 

Organisations need a way to think about AI that is grounded in reality.  

Not just at board level. Not just when they are ready for a large transformation programme. But in the day-to-day decisions teams are making as they experiment and apply AI in their work. 

That’s why we developed the six pillars of AI Maturity: a framework that helps both leadership and delivery teams ask the right questions, make sensible decisions and assess what really needs to be in place if AI is going to work well. 

The framework can be used top-down, across the organisation, to shape thinking about strategy, governance, technology, data, culture and people. Or it can be used bottom-up to assess individual use cases. 

Because in practice, AI adoption isn’t neat: most organisations aren’t moving through a perfectly linear programme. 

Different teams are trying different things. Some experiments create value. Some do not. Some look promising at first but raise bigger questions when examined more closely. 

The six pillars provide a way to step back and assess whether the conditions are there for AI to work well: 

  • Is this tied to a meaningful outcome? 
  • Should AI be used here at all? 
  • Is the technology fit for purpose? 
  • What is the data teaching the system? 
  • Do the right guardrails exist? 
  • Does the culture support both trust and challenge? 
  • Do people have the confidence and judgement to use it responsibly? 

The framework is there to support practical judgement and promote better decisions. 
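For a team assessing a single use case bottom-up, those questions can be made explicit rather than implicit. The sketch below is one hypothetical way to record the answers and surface the gaps; the question wording comes from the list above, while the checklist structure and names are assumptions for illustration, not part of the published framework.

```python
from dataclasses import dataclass, field

# Assessment questions from this article; the checklist structure itself
# is an illustrative assumption, not part of the published framework.
QUESTIONS = [
    "Is this tied to a meaningful outcome?",
    "Should AI be used here at all?",
    "Is the technology fit for purpose?",
    "What is the data teaching the system?",
    "Do the right guardrails exist?",
    "Does the culture support both trust and challenge?",
    "Do people have the confidence and judgement to use it responsibly?",
]

@dataclass
class UseCaseAssessment:
    name: str
    answers: dict[str, str] = field(default_factory=dict)

    def record(self, question: str, answer: str) -> None:
        if question not in QUESTIONS:
            raise ValueError(f"Not one of the assessment questions: {question}")
        self.answers[question] = answer

    def gaps(self) -> list[str]:
        """The questions with no recorded answer: the conversations still to have."""
        return [q for q in QUESTIONS if q not in self.answers]

assessment = UseCaseAssessment("AI-drafted responses to customer queries")
assessment.record(QUESTIONS[0], "Cut average response time for routine queries")
print(assessment.gaps())  # what still needs answering before going further
```

Even this simple shape forces the useful behaviour: a use case cannot quietly skip the harder questions, because the unanswered ones stay visible.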

Final thought 

AI itself is not new. The technology has been around since the 1950s. My grandpa was working on it at CERN in the ’60s! 

What has changed is the leap in capability, driven in large part by the huge technology investment of recent years. 

Today, the challenge is not about bringing the technology into existence. It’s already here. 

The real challenge now is to work out where it is genuinely useful in the real world, where it is not and what it takes to use it responsibly. 

There is a lot that will feel familiar. Start with the outcome. Look properly at the work. Bring people with you. 

Then challenge yourself to go further. 

We need to think carefully about where AI belongs. Be honest about the wider cost of using it. Build for trust and transparency from the start. And prepare people not only to use AI, but to use it responsibly and with judgement. 

Turning AI ambition into real impact  

We’ll be expanding our thinking on AI transformation at our upcoming AI for Leaders webinar series. Across three 30-minute sessions, our AI experts will be diving into what drives real AI value and determines successful adoption: leadership literacy, data foundations, and AI experimentation.  

📅 12 May - The leadership literacy behind effective AI adoption with Becky Davis 

📅 13 May - Building the data foundations for successful AI with Susannah Matschke 

📅 14 May - Learning faster through AI experimentation with Gary Craven and Jonti Dalal-Small 

These sessions will offer clear, practical perspectives to help you lead AI with more confidence.

