Human-centred transformation in the age of AI

by Becky Davis - Director of AI, Sopra Steria Next UK

In summary:

  • AI is reshaping how organisations think and operate, and leaders today must choose whether the future becomes empowering and human‑centred or efficient but unfair.
  • Modern AI, from machine learning to generative and agentic systems, works by learning patterns, making human judgment, oversight, and literacy more important than ever.
  • Building a successful AI‑driven organisation requires trust, ethical governance, high‑quality data, adaptable technology, and a culture where people feel confident experimenting and learning.

In my recent blog, Leading with purpose in an AI-driven world, I invited participants to imagine the world in 2035 – a world where AI has been harnessed for good, shaping a more human, equitable, and sustainable society. I explored what this future could look like if leaders make the right AI choices today:

  • AI empowering people, not replacing them - helping us work smarter, think more creatively, and make better decisions.
  • Organisations transformed, where AI systems and human judgment work together seamlessly to deliver both social and economic value.
  • Ethical and responsible governance, ensuring transparency, fairness, and trust underpin how AI is built and used.
  • Education and inclusion, giving everyone the literacy and confidence to participate in an AI-driven world.
  • A redefined sense of purpose, where technology amplifies what makes us most human - compassion, curiosity, and creativity.

But it’s important to acknowledge another possible future. A 2035 where algorithms quietly become the bureaucrats of everyday life: efficient, but unaccountable; where decisions about healthcare, opportunity, and justice happen without transparency; where the race for speed and cost eclipses empathy and human oversight. A world that feels optimised, but not fair.

The choices leaders make now will determine which version of 2035 becomes reality. As AI technology becomes more accessible, leaders must build their understanding of this powerful technology, harness its capabilities responsibly, and deploy it thoughtfully.

Unlike previous waves of digital transformation, AI not only changes the tools we use but impacts how organisations think, decide, and behave. To understand this, it’s crucial to recognise how AI differs from traditional software.

Understanding the different types of AI

AI is a broad term, covering many systems and techniques, but at its core is machine learning - software that learns from data by identifying patterns rather than following explicitly programmed rules. Stripped of technical complexity, AI systems don’t “think” like humans; they find patterns at scale.

Traditional software relies on explicit rules. Think of an Excel spreadsheet: you input a formula, and the outcome is predetermined by the developer. AI works differently. You define its goal, provide examples (thousands or millions of them), and the system learns patterns itself within established guardrails. AI learns what to do; traditional software is told what to do.
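The distinction can be sketched in a few lines of Python. This is purely illustrative: the fraud-flagging scenario, the example amounts, and the toy "training" step are all hypothetical, and real machine learning uses statistical models rather than a hand-rolled midpoint.

```python
# Traditional software: the rule is written by a developer in advance.
def flag_transaction_by_rule(amount):
    # Explicit, predetermined rule: flag anything over £1,000.
    return amount > 1000

# Machine learning (toy version): the rule is derived from examples.
# Hypothetical labelled data: (amount, was_fraud) pairs.
examples = [(50, False), (200, False), (900, False),
            (1500, True), (3000, True), (5000, True)]

def learn_threshold(data):
    # A toy "training" step: find the midpoint between the largest
    # legitimate amount and the smallest fraudulent one.
    biggest_legit = max(amount for amount, fraud in data if not fraud)
    smallest_fraud = min(amount for amount, fraud in data if fraud)
    return (biggest_legit + smallest_fraud) / 2

threshold = learn_threshold(examples)  # learned from data, not hard-coded

def flag_transaction_learned(amount):
    return amount > threshold
```

The first function is told what to do; the second learns its threshold from the examples it is given, which is the essential shift AI introduces.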

Most AI in use today is “narrow AI,” designed to perform a specific task exceptionally well - whether recognising speech, detecting fraud, generating text, or predicting equipment failures. These systems don’t understand tasks as humans do; they identify statistical patterns and apply them to produce useful results within a specific context.

Generative AI: predicting the next word

Generative AI, such as ChatGPT, Microsoft Copilot, Google Gemini, or Claude, works slightly differently. These systems don’t just analyse data, they create content: documents, emails, code, images, analysis. Built on large language models (LLMs) trained on billions of examples of human text, they generate text by predicting the next most likely word.

Although they feel intelligent - able to explain quantum physics simply or debug code - this is pattern matching, not reasoning. As Ethan Mollick describes, LLMs mimic reasoning by generating responses that statistically resemble knowledgeable human answers. Mo Gawdat, former Chief Business Officer at Google X, compares it to someone who has read every book but never experienced the world: they can describe it but haven’t lived it.

This is also why LLMs can produce “hallucinations” - plausible but incorrect outputs. They aren’t trying to deceive; they are doing exactly what they were designed to do: predict the next word based on patterns. Human judgment is essential to contextualise these outputs.
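A toy illustration of "predict the next most likely word" is a bigram model: count which word follows which in a tiny corpus, then always pick the most frequent continuation. Real LLMs use neural networks trained on billions of examples, but the underlying principle - choosing the statistically likeliest next token - is the same. The corpus here is invented for the sketch.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count which word follows which: a bigram frequency table.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

Notice the model has no idea what a cat is; it simply reproduces the patterns in its training data, which is also why a larger model can produce fluent text that happens to be wrong.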

Agentic AI: from automation to autonomy

Agentic AI represents a significant leap from traditional automation. Where automation follows a predefined process, agentic AI is given a goal. It breaks down the steps to achieve that goal, orchestrates actions across systems, and can even communicate with other agents.

For example, an agentic AI tasked with preparing for a client meeting might scan emails, calendar entries, and notes; identify relevant trends and insights; generate briefing documents; update a presentation; summarise previous discussions; and suggest strategic questions. Agentic AI doesn’t just follow instructions, it decides how best to achieve the outcome you’ve defined.
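The goal-to-steps pattern above can be sketched as a minimal plan-and-execute loop. Every step name and helper here is a hypothetical placeholder: in a real agent, an LLM would decompose the goal and each step would call external systems (email, calendar, document APIs).

```python
def plan(goal):
    # In a real agent an LLM decomposes the goal; here it is hard-coded
    # to keep the sketch self-contained.
    return ["scan_sources", "extract_insights", "draft_briefing"]

def execute(step, context):
    # Placeholder action; a real agent would call out to other systems
    # and could spawn or message other agents.
    context[step] = f"done: {step}"
    return context

def run_agent(goal):
    context = {"goal": goal}
    for step in plan(goal):              # the agent decides the steps
        context = execute(step, context)  # then orchestrates each one
    return context

result = run_agent("prepare for client meeting")
```

The key design point is that only the goal is supplied by the human; the sequence of actions is chosen by the system, which is exactly what makes oversight and guardrails essential.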

Modalities: expanding AI into everyday work

Recent AI advancements in modalities mean that AI can now see, hear, and converse with humans in natural language. You no longer need to code in Python to interact with AI. You can have AI summarise documents, coach presentations, or even generate insights - bringing unprecedented collaboration into everyday work.

For decades, technology executed human instructions. AI changes that dynamic. Value now emerges from a collaborative relationship: AI learns, predicts, generates, and surfaces insights, while humans guide, interpret, and contextualise outputs.

This can be framed as:

Value = (human insight × AI capability) + human judgment in context.

The investment, therefore, must be in humans as much as technology.

How organisations are approaching AI

Many organisations tackle AI in one of two ways:

  1. Use case approach – similar to digital transformation, organisations start with specific AI use cases, such as chatbots or analytics tools. Some succeed, but many initiatives stall at the proof-of-concept stage.
  2. Experimentation approach – organisations deploy co-pilots and generative AI across teams, testing value without a specific use case. While culture shifts and productivity gains occur, Return on Investment (ROI) can be difficult to quantify.

The organisations excelling are those recognising AI’s real potential: connecting data, systems, and insights to enable smarter, faster, and more creative decision-making - with humans driving the application of AI in context.

Building people-centred AI cultures

Digital transformation strategies alone are no longer enough. AI adoption requires sustained engagement, experimentation, cultural readiness and meaningful behavioural change. People must trust AI, feel confident using it and understand their role in the “loop.”

Building AI literacy is deeply personal, shaped by role, motivation and empowerment to experiment. Fear and uncertainty about evolving roles are real, so supporting people through this ambiguity is key to shaping a new culture.

A starting point for organisations would be to ask: Do we have a clear vision for the future, shared by shareholders, leadership, employees, and customers? Aligning people behind a clear AI narrative and fostering psychologically safe environments are as critical as training.

In our work, we often observe that:

  • People need to understand how AI helps them achieve personal goals to be motivated to learn.
  • Honest conversations about job security are essential.
  • Internal AI messaging must be transparent and authentic to overcome external hype.

Six practical steps for leaders

  1. Purpose and strategy - start with ‘Why’
    Anchor your AI ambition in organisational purpose and human outcomes.
    → Ask: How does this make our organisation smarter, fairer, or more human?
    → Do: Define a north star that blends commercial value with ethical intent.
  2. Governance and ethics - lead with trust
    Trust is built through transparency and accountability, not policy alone.
    → Ask: Who sets the goals, who checks the guardrails, and who stays accountable?
    → Do: Create an AI governance rhythm - review goals, bias, and outcomes regularly.
  3. Data - build the intelligence foundation
    AI learns from patterns in data; human judgement gives those patterns meaning.
    → Ask: What story does our data tell, and whose voices are missing?
    → Do: Focus on high-quality, diverse, and well-connected data sets.
  4. Technology and infrastructure - design for adaptability
    We’re moving from rules-based systems to reasoning systems.
    → Ask: Are our platforms flexible enough to learn and evolve safely?
    → Do: Start small with agentic or co-pilot pilots that solve real human problems.
  5. Culture - shift behaviour, not just process
    No AI strategy succeeds in a culture that resists it.
    → Ask: How do we make experimentation safe and curiosity rewarded?
    → Do: Use behavioural science and human-centred design to build engagement and confidence.
  6. Expertise - build AI literacy and confidence
    AI literacy goes beyond technical skills - it’s about critical thinking, ethical reasoning, and confident use.
    → Ask: Do our leaders and teams know when to trust, question, or override AI?
    → Do: Invest in hands-on learning; just 10 hours of real use builds intuition (Ethan Mollick, Professor at the Wharton School, University of Pennsylvania, and author of Co-Intelligence: Living and Working with AI).

Shaping an AI-driven future is not just about technology - it’s about people. Leaders who invest in human-centred, ethical and collaborative approaches to AI today will define whether 2035 is a world that empowers, includes, and inspires - or one optimised but unaccountable.

Find out more about people-centred AI transformation

This blog was based on the webinar, Human-centred transformation in the age of AI. If you’d like to watch the full webinar, you can access it on demand now.

Following this, our Head of Design, Mark Skinner, held a webinar, Shaping the future of AI through design, in which he demonstrates how design can centre AI around users, helping organisations spot risks early, reduce wasted investment, and enhance both user experience and systems.

Finally, we held a panel discussion about The AI challenge: How to deliver transformation that serves people. In this session we brought together senior leaders from Nationwide, Home Office, Oracle, NHS SBS and Sopra Steria, to explore how organisations are turning AI ambition into reality. 

All of this, and more, can be found on our People-centred Hub. Discover a series of expert-led webinars, insightful thought leadership articles, and practical tools - all designed to guide leaders to deliver people-first change.