Cultivating psychological safety in AI decision making

by Jonti Dalal-Small - Organisation Psychologist & Behavioural Science Lead

The potential of Artificial Intelligence (AI) and automated decision-making tools to create efficiency and drive pace is exciting. But as organisations move to incorporate AI into their day-to-day work, the challenge is to balance innovation with results. Organisations must consider the human side of the equation by nurturing a positive and adaptive culture, and the key to that culture is psychological safety.

Effective decision-making is at the core of organisational success, and AI is already delivering benefits in this space. By harnessing the power of machine learning and data analysis, AI can transform how organisations process information, identify patterns and, ultimately, make decisions, offering much-needed efficiency gains. But automated decision-making must be backed by human intelligence and accountability.

AI alone isn’t the answer

Decision-making efficiency is not just about speed. Often the aspiration is that decisions are tailored yet consistent, empathetic yet fair - a feat that requires an understanding of human behaviour, values and motivation. While AI promises to be quicker, it comes with its own challenges. Biases can seep into algorithms, leading to undesirable results and amplifying societal inequalities. For instance, the 2020 attempt to grade A-Level and GCSE exams with an algorithm left nearly 40% of students with lower grades than their teachers had predicted. The result was public uproar and legal action, especially because the downgrades fell disproportionately on inner-city state schools.

Even with these challenges, it’s possible to get it right. Combining the strengths of AI with human judgement will optimise the process. By integrating insights from behavioural science and ethical AI, we can make decision-making fairer, better informed and more empathetic. AI is not the sole answer to decision-making challenges, but it can contribute significantly when used in tandem with human intelligence and ethical considerations. We need humans to be part of the process. More than that, we need humans to feel psychologically safe in contributing to that process.

Culture and behaviours of an organisation

In a time of rapid, complex and unpredictable change, psychological safety is a critical factor when implementing artificial intelligence. The term ‘psychological safety’ refers to an individual's sense of security in taking risks, voicing opinions and making mistakes without fear of punishment or retribution.

Psychological safety is vital to the successful adoption of AI. Individuals need to see AI as an opportunity rather than a threat, and that can only happen if they feel psychologically safe. Without it, people are more likely to resist AI tools, hampering their successful integration into an organisation's operations.

Fostering psychological safety

To foster psychological safety, organisations need to promote a culture where open communication is encouraged and concerns about AI can be discussed openly. Google's Project Aristotle found that the biggest determinant of a successful team was whether individuals felt safe to speak up and share ideas - a clear signal of how much psychological safety contributes to organisational success.

The importance of psychological safety extends to the development and deployment of ethical AI. When individuals feel psychologically safe, they are more likely to trust AI and to participate in implementing it responsibly. That participation helps organisations spot and mitigate biases being built into AI algorithms, resulting in systems that are both more effective and more efficient.
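What might spotting bias look like in practice? The sketch below shows one simple, illustrative check a team could run together: comparing outcome rates across groups in a decision log and flagging large gaps for human review. The data, column names and threshold are hypothetical, and a real audit would use the organisation's own decision records and agreed fairness criteria.

```python
import pandas as pd

# Hypothetical decision log: each row is one automated decision, with the
# outcome and a protected attribute (e.g. school type, gender) recorded.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group: a first, simple check for disparate outcomes.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Gap between the best- and worst-served groups. A large gap is a prompt
# for human discussion and review, not an automatic verdict of unfairness.
gap = rates.max() - rates.min()
if gap > 0.2:  # threshold chosen for illustration only
    print(f"Approval-rate gap of {gap:.0%} across groups - flag for review")
```

A check like this is only useful if people feel safe raising what it surfaces; the technical audit and the psychologically safe culture have to work together.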

Poor implementation of AI can damage psychological safety within an organisation. Conversely, if organisations focus on ensuring their employees feel psychologically safe, the introduction of AI is likely to be smoother, enhancing the employee experience, improving decision-making and leading to better overall outcomes.

In our work with clients – and within our own organisation – we are cultivating a psychologically safe culture. We’re passionate about this approach because we know that the organisations that thrive with AI won’t just have the right technology, data and governance. They will be defined by collaboration, an ethical approach and a culture of psychological safety.
