What does Brexit mean for ethical technology?

by Ben Gilburt - AI Ethics Lead

The release of the European Commission’s whitepaper on AI embodies the continued and growing European narrative towards ethical technology, building on the High-Level Expert Group on AI’s ‘Ethics Guidelines for Trustworthy AI’ and the following ‘Policy and Investment Recommendations’. Ursula von der Leyen, President of the European Commission, has even committed to putting forward legislation “for a coordinated European approach on the human and ethical implications of Artificial Intelligence” within the first 100 days of her presidency.

Cut back to the UK. Less than three weeks after Brexit day, and only one day after the release of the new whitepaper on AI, Google publishes an update to its terms and conditions. On the surface it seems innocuous: an attempt by Google to make its terms more transparent and easier for everyday people to understand. Lurking beneath, however, is something a little different: an update stating that the data of UK customers will be moving from Ireland to the USA.

Why? Brexit.

In this blog, we want to dig a little deeper into this move by Google and the landscape for digital ethics in the UK in light of Brexit.

Why the move?

Since Google announced last week that UK customers' data would be moving from Ireland to the USA, headlines have been claiming that the UK is 'losing EU data protection'. While this is true in a sense, the truth, as is often the case, is more nuanced and less shocking, but still worthy of our consideration.

First, when the UK formally left the European Union on 31 January 2020, it entered a transition period lasting until the end of the year, during which many of the same rules still apply. At the conclusion of this transition period it is fair to say that the UK will no longer receive 'EU data protection' in the sense of the EU GDPR, but it will continue to operate under the UK GDPR. Through the Data Protection Act 2018, GDPR was incorporated into UK domestic law and is functionally identical to the EU GDPR. Further to this, it has never been the case that European citizens' data must be held in a European country to be afforded the rights set out by GDPR. One of the most forward-thinking aspects of GDPR is its extraterritorial scope: it applies anywhere in the world that the data of European citizens is stored, shared, processed or otherwise used.

In a statement, Google claimed that there would be no changes to the way it treats UK customers' data. The move appears to be motivated instead by data 'adequacy'. Data adequacy refers to the requirement for a 'third country' (in this context, a country that is neither an EU member nor part of the EU's free movement area) to demonstrate the same levels of data protection as an EEA state, with appropriate safeguards and approved codes of conduct, in order to allow the free flow of data without additional checks. While the UK will certainly pursue adequacy, with the goal of having an agreement in place before the end of the transition period, nothing at this stage is certain. If the data stayed in Ireland and the UK failed to achieve adequacy in time, services might cease to operate normally until either the data was moved or the UK was granted adequacy. By moving the data to the USA these challenges can be avoided, and, as we've said before, this move does not mean the end of GDPR-style protection for UK customers' data.

So is this a bad thing? On reflection, I think not. Not only can we see the effect of GDPR extending beyond Brexit and protecting UK customers' data, but the requirement for data adequacy has prompted the UK to evidence the robustness of its data protection regime and to reinforce it where it may be insufficient. Security, safety and robustness are all important aspects of an ethical digital solution.

Wider implications

This story extends far beyond Google's latest move. It raises questions about the UK's future commitment to ethical AI and its alignment with the European Union. The new legislation promised by von der Leyen's Commission need not apply to the UK, so will we take a backwards step and loosen regulation?

While the upcoming regulation may not be legally binding for the UK, other European frameworks have never been legally binding either. Two of the documents listed at the start of this blog, the HLEG's 'Ethics Guidelines for Trustworthy AI' and its 'Policy and Investment Recommendations', are not legislation. Rather, the guidelines set out a conceptual framework for a brand of AI made in the EU which is lawful, ethical and robust. The lawful component may relate to local laws (though, as a European brand, it is fair to say this means European law), the ethical aspect extends beyond regulation, and robustness is highly dependent on context, expected usable life and the likelihood of adversarial attack. The Policy and Investment Recommendations, on the other hand, are closer to legislation, serving as a framework for future European policy. While they may be best tailored to the wider European context, there is no reason why the UK could not take inspiration from the recommendations in its own future policy.

While the UK will no longer be beholden to the same laws as the EU, the EU accounted for 45% of the UK's exports in 2018. Even as the EU ceases to be a legal force on the UK, it continues to exert an economic one, and while GDPR applies directly only to the data of subjects in the EU, it has arguably had a larger effect on global data policies. The EU is too large a market for global players to cut out, and it will often make more sense to build a GDPR-compliant service even for businesses based outside the EU. Maybe, just maybe, it makes more sense for such a business to roll those same data protection rules out globally, saving the maintenance of two different data policies, or saving building one at all if the service is being designed from the ground up.

We appreciate that a key motivation for the UK leaving the European Union was to have greater control over our national laws, but frameworks for building ethical digital technology stand to have a positive impact on people's lives wherever they originate.

Looking further ahead, the UK's momentum to build and regulate digital ethics doesn't appear to be slowing, and its policy landscape supports the continued development of ethical digital technology. The recently established Centre for Data Ethics and Innovation, reporting into DCMS, is carrying out research into key areas of concern, including a recent review of 'online targeting', and advises central government policy in these areas. We are also supported by a fairly unique institutional infrastructure, with groups like the Alan Turing Institute and the Ada Lovelace Institute supporting research and developing best practice.

While Google's data move came as a surprise, sometimes it takes a surprise to make us take stock of the situation, and the future for digital ethics in the UK is looking bright.


Digital Ethics at Sopra Steria

We are a natural partner for taking action on digital ethics issues: moving the discussion from the philosophical to the practical, collaborating with a range of stakeholders and industry groups to shape a better future, and helping organisations navigate the challenges of digital ethics today. Visit our website for further contact information and to discover more.





