Many people in the Digital Identity space are talking about the UK’s Digital Identity and Attributes Trust Framework, and rightly so. As the market grows and fragments, the framework marks an important step in the UK Government’s commitment
to shaping how digital identities are developed in the UK.
So, what is the framework?
The UK Digital Identity and Attributes Trust Framework is a document produced by the
Government to outline how organisations that create and use identity services should behave. Those who follow the rules of the Trust Framework will be granted a ‘trust mark’ which will communicate the organisation’s trustworthiness
to the public and other organisations. Following consultation with a range of public and private organisations last year, the Government has produced a draft (or ‘Alpha’) version of the document, which it has invited organisations and
citizens to comment on.
The Trust Framework aims to describe the high-level principles of digital identity, including creating shared definitions of terms such as ‘attribute’ and ‘identity’. This is a welcome step forward in a fragmented landscape which
can include anything from a Facebook login to a digital passport – depending on who you ask. The UK Government describes digital identities as “a digital representation of a person [which] enables them to prove who they are during interactions and transactions”.
Unlike a Facebook login, it must include attributes which tie an identity to a real person, with evidence that shows they exist and are who they say they are.
Why does it matter?
Digital identity solutions are inevitably intertwined with ethical challenges. Depending on how they’re designed, they can either exacerbate or reduce issues like fraud, data loss and digital exclusion. So, frameworks governing the use of digital
identity solutions and whether they should be trusted are essential to protecting the public.
While this draft Trust Framework is helpful in providing a high-level introduction to digital identity and the areas of consideration for service providers, it would benefit from more detail regarding how principles (such as privacy, interoperability
and inclusion) can be integrated into a solution. As a result, it is difficult to understand how an organisation may be certified against these requirements. This is central to building trust between service providers and users, a principle which
Sopra Steria believes to be integral to the success of the digital identity market. While recommendations may give organisations scope to start thinking about the importance of these principles, providing rules would create a means against
which organisations can be held accountable for protecting the rights of users.
We recognise that there are many principles that underpin the development of ethical and trustworthy Digital Identity systems. However, to explore the draft Trust Framework and key areas of importance, we have chosen to focus on three: inclusion, privacy and transparency, and accountability. We were delighted
to see these themes included in the Government’s Trust Framework but also recognise opportunities for more rigorous and practical advice.
1. Inclusion
Any Digital Identity solution risks excluding vulnerable and/or minoritised groups, so it’s essential that these solutions are designed to encompass the needs of all users. This is already recognised by
the Government, which dedicated section 2.3 of the paper to the topic. The draft framework includes requirements to comply with the Equality Act 2010, noting how technologies can exclude specific user groups, especially if they’ve only
been tested with a particular demographic. However, we believe that the paper needs to go further. While the Equality Act forms the basis of anti-discrimination governance in the UK, reports such as the CDEI’s Landscape Summary into Bias in Algorithmic Decision-making demonstrate that the Act does not sufficiently cover all manifestations of algorithmic bias. Now is the opportune time for the Government to take action on combatting algorithmic bias by introducing algorithmic assessments and asking organisations
to transparently communicate the results of their audits. Furthermore, the Equality Act doesn’t recognise the rights of individuals who don’t identify as male or female (for example those who identify as non-binary). As a result,
the Act is not sufficient for ensuring that digital identity solutions don’t discriminate on the basis of gender.
We believe that the Framework needs to be explicit about groups which might be disproportionately disadvantaged by these technologies, drawing on best practice from organisations like the World Bank (see fig. 1). This would help organisations
to ensure that they are actively mitigating discrimination against particular groups, and offer opportunities for organisations to communicate what they are doing in this space. This does not necessarily require “find[ing] out as much as you can about the types of people that will use [the service]” as outlined in the framework – which contradicts the principle of data minimisation. Instead, it requires building on new and existing research in this area, and speaking to users to find out the right information about them.
For instance: how much they rely on services, how they currently access them, what devices they have access to, their ability to use them, and any barriers they face to accessing services.
This takes us to a final requirement on inclusion in the Trust Framework: the ‘annual exclusion report’:
“All identity service providers must submit an exclusion report to the governing body every year. The governing body will tell you exactly what information should go in the report. It will at a minimum need to say which demographics have been, or are likely to be, excluded from using your product or service. You must explain why this has happened or could happen.”
While this requirement is a welcome step in asking organisations to actively pre-empt areas of exclusion and mitigate them – as well as holding organisations accountable for the result of this work – it also means that organisations
will need to collect demographic data on their users that may not otherwise be needed. Without describing which categories of exclusion will be measured, it is difficult for organisations to start putting changes into place. Demographic data
should not be collected without a reasonable purpose for doing so, so it’s essential that the Government ties its exclusion report to clearly defined areas of discrimination. This is not a new area of exploration, and there is plenty
of pre-existing work that the Government can draw on to highlight how discrimination is embedded into technology. Some notable examples include Joy Buolamwini’s Gender Shades project,
which shows that facial recognition technologies produce dramatically poorer results on the faces of darker-skinned women, and Cathy O’Neil’s book Weapons of Math Destruction, which
highlights many examples of algorithmic bias, including how algorithms can be weaponised to discriminate on the basis of socio-economic background.
2. Privacy and transparency
Privacy and transparency are increasingly important to corporate agendas, as high-profile privacy breaches have reinforced the risks (as well as the benefits) that using data can bring. The draft Trust Framework clearly recognises this; indeed,
privacy is mentioned throughout the document as a key principle for successful digital identity solutions. However, we believe that the document requires more detail to help organisations make clear, informed decisions about best practice
when it comes to data use. For example, the document nods to the fact that “users will be able to choose which organisations can see and share their personal data [but] not have a choice in specific situations”. This is
a useful principle of consent to outline in the context of digital identity, but should form part of a clear privacy mandate which highlights which specific situations will prevent users from seeing how their data is used (and why), as well
as advice for ensuring privacy and transparency throughout the identity ecosystem and lifecycle. Many end-to-end digital identity solutions will require the tools of multiple suppliers, leaving us with a key question – do all these suppliers
need to be trusted by the Trust Framework, and if not, how do we ensure that individuals are protected as they navigate through the ecosystem? How will all organisations (not just user- or customer-facing ones) be incentivised to comply with
the Trust Framework?
Sopra Steria believes that the concept of authorisation and consent should be central to this Trust Framework. User consent has been difficult to measure and guarantee, as evidenced by data privacy scandals such as Cambridge Analytica, where Facebook
users were unaware of how their data was shared and used (despite this being 'present' in terms and conditions). The draft Trust Framework states that users should be told how a product or service works by clearly explaining “any terms and conditions of use that the user needs to be aware of”.
But what scandals such as Cambridge Analytica teach us is how loose the concept of a ‘clear explanation’ can be. We believe that the Trust Framework should mandate practical rules for ensuring the readability of terms, such
as achieving a certain readability score or summarising key points in a reasonable number of words.
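To illustrate how such a rule could be made testable – this is our own sketch, not anything the draft Trust Framework prescribes – terms and conditions could be scored with the standard Flesch Reading Ease formula. The 60-point ‘plain English’ threshold and the function names below are illustrative assumptions, and the syllable counter is a deliberately crude vowel-group heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    # Higher scores mean easier text; 60+ is often described as plain English.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def terms_are_readable(terms_text: str, threshold: float = 60.0) -> bool:
    # Illustrative compliance check; the threshold is an assumed example,
    # not a value taken from the Trust Framework.
    return flesch_reading_ease(terms_text) >= threshold
```

A short plain sentence scores well above the threshold, while a legalese-style sentence full of polysyllabic words scores far below it – exactly the kind of automatable distinction a certification regime could lean on.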
Surprisingly, the draft Trust Framework provides very little reference to biometric data. In fact, the document provides no detail around the specific considerations that the collection, storage and use of biometric data may require. This is unexpected,
considering the public attention that biometric technologies such as facial recognition have garnered. Additionally, the use of biometrics in other national identity systems has taught us some important lessons about the risks of their error
rates: one study found that 20% of households in the Indian state of Jharkhand failed to get food rations due to biometric errors – a rate five times
higher than that of ordinary ration cards. As a result, we believe that future iterations of the Trust Framework should outline specific considerations relating to these technologies, with particular reference to known risk areas including
accessibility & inclusion, security, and privacy.
3. Accountability
Digital Identity solutions have the opportunity to drastically reshape how people across the UK access products and services. As outlined in the draft Trust Framework, making interactions and transactions available online can save organisations time and
money; reduce the risk of fraud; make interactions quicker and easier for users to complete; and encourage innovation. Achieving these benefits, however, depends on the design of a solution: a digital identity solution that is well designed can reduce fraud,
but without the appropriate privacy and security controls in place it can just as easily increase it. The draft Trust Framework provides a great first step in recognising the importance of good design. However, we must not
forget the potential for harm to users, and how organisations will be held accountable.
The draft Trust Framework highlights that it: “would be owned and run by a governing body established by the government. […] The governing body will also make sure that organisations and schemes follow the rules, and decide what to do if they don’t. The body will point you to sources of help for issues which can’t be solved by trust framework members, and may get involved in redress cases.” We
welcome the idea of a governing body, and believe that this body has great responsibility to ensure that digital identity services are appropriately developed and used. However, the draft Trust Framework does not explicitly outline the responsibilities
of the body, such as which redress cases it might get involved in. We are concerned that this could create gaps in accountability, where users are unclear where to turn for redress. We welcome a second iteration of the Trust Framework that outlines
clear and actionable responsibilities for the body, as well as details of its composition.
The draft Trust Framework provides ample advice and sources of information for organisations dealing with a data breach, including how to respond to and investigate an incident. However, there is little mention of the support required for users whose data
has been lost or who have become victims of fraud. Responses to data breaches should include not only a requirement to tell users that their data has been lost, but also a requirement to support them in understanding how to respond. This should
include communicating to users the cause of the data loss (if known), the potential impact on them, any next steps they should take, support for economic or emotional damage, and details on how to request compensation. This should also include any
specific considerations for the loss of biometric data, which has more complex consequences. Organisations must be held accountable for the impact that any failure in their system has on users, and providing clear requirements for doing so would strengthen
the trust that the Trust Framework is designed to engender.
The draft Trust Framework marks a significant step in the development of the Digital Identity market in the UK, and we are excited to follow the conversations it fosters. It has certainly helped to advance the debate, drawing on some
fundamental principles for successful solutions – including privacy, inclusion, accessibility and interoperability. We are glad to be shaping the conversation in this space, and look forward to seeing how the continued
collaboration of the public and private sectors informs the next iteration of the Trust Framework.