1. Value first, hype second
Neil kicked off his talk with a hard truth: too many AI projects end up in the dreaded Proof of Concept (PoC) graveyard because they chase hype over value. He stressed that value management isn't a new concept – it's about clearly defining how AI contributes to customer needs, employee productivity, and business goals. Without measurable Key Performance Indicators (KPIs) and a realistic PoC, AI is just tech for tech's sake. His message? If AI doesn't drive tangible outcomes, it's not worth doing.
2. Manage AI-specific risks
He then looked at the unique risks of AI, warning that traditional risk management models don’t cut it anymore. With AI, the model is the data – and once it learns, it’s nearly impossible to unlearn. He raised serious concerns about data privacy, vendor trust, and the lack of control over how AI might reuse sensitive or proprietary information.
He highlighted the danger of hallucinations from generative AI, which can lead to reputational damage, and pointed to techniques like Retrieval Augmented Generation (RAG) to ground AI responses. But even with such tools, Neil stressed that testing AI demands new skills – it’s no longer just pass or fail, but a nuanced assessment of statistical accuracy, relevance, and intent.
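Neil didn't go into implementation detail, but the pattern behind RAG is worth spelling out: retrieve passages from a trusted knowledge base and inject them into the prompt, so the model answers from grounded context rather than from memory alone. The sketch below is purely illustrative – it uses simple TF-IDF retrieval over a tiny in-memory list and stops at building the grounded prompt, leaving the actual LLM call to whichever provider you use.

```python
# Minimal, illustrative Retrieval Augmented Generation (RAG) sketch.
# Assumptions: a tiny in-memory knowledge base, TF-IDF retrieval via
# scikit-learn, and the grounded prompt handed to whichever LLM you use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Support is available Monday to Friday, 09:00-17:00 CET.",
    "Enterprise customers are assigned a dedicated account manager.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [knowledge_base[i] for i in scores.argsort()[::-1][:k]]

def build_grounded_prompt(question: str) -> str:
    """Assemble the prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you do not know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("How long do refunds take?"))
```

In a production system the TF-IDF step would typically be replaced by embeddings and a vector store, but the grounding principle is the same.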
3. Think sustainably
Moving on to AI’s environmental impact, Neil referenced a European Commission report that warns of a 45% rise in digital-related emissions by 2030. With digital tech already accounting for double the emissions of civil aviation, he urged a rethink of how and when we use AI.
Neil emphasised that AI models – especially large ones – consume vast energy during both training and inference, and much of the sustainability gains can be made early in the design process. To be blunt: if you don’t need AI, don’t use it. He poked fun at misguided use cases like using Large Language Models (LLMs) for basic math, arguing that traditional tools are often more efficient and more accurate. Instead, he advocated for smarter, leaner AI – using smaller models, fine-tuning, and only applying AI where it truly adds value.
4. Build transparency and trust
To foster adoption, especially in sensitive sectors like healthcare, AI systems must be transparent and auditable. Public unease around AI, from job loss to decision-making power, is growing and should be addressed head-on. Using an example from a medical device project, Neil showed how regulated environments demand clear, trustworthy AI processes. He recommended applying established digital best practices like Development, Security, and Operations (DevSecOps) and robust quality management systems.
He also highlighted the importance of structured data handling – using separate training, test, and validation sets – and building feedback loops for continuous improvement. Trust, he argued, is earned through transparent, auditable development from start to finish.
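To make the data-handling point concrete (this example is ours, not from the talk), a three-way split can be produced by applying scikit-learn's `train_test_split` twice – the validation set guides tuning decisions, while the test set stays untouched until the final, auditable evaluation.

```python
# Illustrative three-way data split: train / validation / test.
# Validation data guides model selection; test data is held back for the
# final, audited evaluation.
from sklearn.model_selection import train_test_split

def three_way_split(X, y, test_size=0.15, val_size=0.15, seed=42):
    # First carve off the untouched test set.
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y
    )
    # Then split the remainder into training and validation sets.
    relative_val = val_size / (1.0 - test_size)
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=relative_val, random_state=seed, stratify=y_rest
    )
    return X_train, X_val, X_test, y_train, y_val, y_test
```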
5. Create a safe space for innovation
When it comes to innovation, Neil emphasised that experimentation is key to unlocking the potential of any new technology, and AI is no exception. He shared how Sopra Steria enables this through its internal Digital Enablement Platform: a secure, modular sandbox designed for rapid AI prototyping and learning. Built with DevOps automation, boilerplate starter projects, and a plug-and-play architecture, the platform supports both sovereign and cloud deployments while protecting Intellectual Property (IP) and data. The goal is to give teams the tools to make innovation safe, fast, and frictionless – laying the groundwork for broader cultural change.
6. Design for scalability
Neil then described a major issue with AI adoption: teams innovate and experiment, deliver a PoC, and then it stops – the AI PoC graveyard problem. The common reasons are that the innovation cannot demonstrate business value, is too difficult to scale into an operational system, or does not address a genuine business need.
He outlined how cloud hyperscalers offer an accessible starting point for scaling AI, thanks to their flexible, pay-per-use models and ready-made AI services. As usage grows, organisations can shift to prepaid or dedicated environments for greater efficiency and control. But for some, especially those with strict data protection needs or heavy workloads, private infrastructure and their own Graphics Processing Units (GPUs) may be the smarter move.
Neil explained how modular, cloud-native setups like Kubernetes help retain scalability even in fixed environments. He also pointed to Nvidia’s Triton Inference Server as a powerful tool for managing AI traffic, model selection, and caching. Ultimately, he emphasised that as scale increases, so too does the importance of financial optimisation, making DevOps and FinOps (Financial Operations) crucial allies in any AI rollout.
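For readers who have not used it, the snippet below sketches roughly what a client-side call to Triton looks like with Nvidia's `tritonclient` package. The model name, tensor names, and shapes are placeholders – the real values come from your model repository configuration – so treat this as an orientation aid rather than a drop-in example.

```python
# Illustrative Triton Inference Server call using the HTTP client.
# "my_model", "INPUT__0" and "OUTPUT__0" are placeholder names; real values
# come from the model repository's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

response = client.infer(
    model_name="my_model",
    inputs=[infer_input],
    outputs=[requested_output],
)
predictions = response.as_numpy("OUTPUT__0")
print(predictions.shape)
```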
7. Prioritise security and privacy
AI relies heavily on data, which makes security and compliance non-negotiable. Neil stressed that they aren't optional extras to bolt on later – AI must be secured by design, especially when used in software development. From safeguarding customer data under the General Data Protection Regulation (GDPR) to protecting IP, strong privacy practices are foundational.
He highlighted emerging threats like LLM prompt injection attacks, pointing to evolving standards such as those from the Open Worldwide Application Security Project (OWASP) Generative AI Security Project. Neil also raised concerns around AI-assisted coding, warning that without proper DevSecOps in place, there’s a risk of introducing malicious code.
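The OWASP material is worth reading in full; purely to give a flavour of the layered controls it describes (and with the caveat that pattern matching alone is trivially bypassed), untrusted input can be screened and explicitly delimited before it reaches the model, as in this simplified sketch:

```python
# Illustrative (and deliberately simplistic) prompt-injection screening.
# Pattern matching alone is easy to bypass; layered controls such as
# least-privilege access, output validation and human review are still needed.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the system prompt",
    r"reveal your (system|hidden) prompt",
]

def screen_user_input(text: str) -> str:
    """Reject obviously adversarial input and wrap the rest in clear delimiters."""
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Input rejected by prompt-injection screen")
    # Delimit untrusted content so the model can be instructed to treat it as data only.
    return f"<untrusted_input>\n{text}\n</untrusted_input>"
```

Real deployments layer checks like this with least-privilege access for the model, validation of its outputs, and human review of sensitive actions.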
Bottom line: AI security needs to evolve as fast as the technology itself.
8. Empower your workforce
Towards the end of the session, Neil explored the evolving dynamic between humans and AI, noting that AI isn’t just another tool like Excel – it’s more like a partner. Building confidence in this human-AI relationship starts with safe, familiar environments like M365 and Teams Copilot, where employees can learn prompting techniques and validate AI outputs.
He emphasised the importance of securing data access based on roles to unlock AI’s full potential responsibly. As AI adoption matures, more tailored use cases and automation opportunities will emerge. Tools like Copilot Studio enable the creation of bespoke AI agents, but accuracy and trust remain essential.
Neil also touched on the rise of agentic AI – where networks of intelligent agents collaborate, blending functional and cognitive capabilities, reinforcing the shift from AI as a copilot to AI as a true coworker.
9. Collaboration depends on good data
To wrap up the session, Neil reinforced that effective AI starts, and ends, with data. Without it, AI simply can’t function. He pointed out the importance of building structured, enterprise-ready data environments like data warehouses and lakehouses to ensure data is accessible, integrated, and fit for purpose.
Governance, discoverability, and access rights are crucial to manage both structured and unstructured data securely. From events to transactions, he noted that good data management isn’t just the foundation of AI, it’s the enabler of everything AI can achieve.
Know where you stand
Understanding your organisation’s AI maturity is the first step to scaling it effectively and responsibly. Our AI Maturity self-assessment helps you do just that, giving you a clear view of your current strengths and areas to improve, and providing a tailored roadmap to guide your next steps.
It takes less than 10 minutes, and the insight is immediate.