AI has the potential to be humanity’s greatest invention, and its benefits should be shared by all of humanity equally. Instead, we’re heading towards a world in which one particular group, the geeks, benefits most. An AI is fundamentally more likely to favour the values of its designers, and whether we train it on a dataset gathered from humans or on purely simulated data through a method like deep reinforcement learning, bias will, to a greater or lesser extent, remain.
A disclaimer – humans are already riddled with bias. Be it confirmation, selection or in-group bias, we constantly create unfair systems and draw inaccurate conclusions, which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it’s not perfect. AI can analyse dramatically more data than a human and, by doing so, form a more rounded point of view. More rounded, however, is not completely rounded, and that gap matters enormously for any AI which can carry out a task orders of magnitude faster than a human.
To retain even our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1,000x faster, we end up with 100x more injustice in the world. To stay at today’s levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
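That break-even arithmetic can be sketched in a few lines of Python (the function name and figures are my own, purely illustrative):

```python
def injustice_multiplier(speedup: float, error_ratio: float) -> float:
    """Change in unethical outcomes per unit time, relative to a human.

    speedup: how many times faster the automated system runs (e.g. 1000).
    error_ratio: unethical decisions per transaction, as a fraction of
        the human rate (0.10 means "10% as many as a human").
    """
    return speedup * error_ratio

# A system 1,000x faster that is only 10% as unethical per transaction
# still produces 100x more injustice overall.
assert injustice_multiplier(1000, 0.10) == 100.0

# To merely break even, the error ratio must be the reciprocal of the speedup.
assert injustice_multiplier(1000, 0.001) == 1.0
```

The uncomfortable consequence is that the ethical bar rises in lockstep with the speed-up: every extra order of magnitude of speed demands another order of magnitude of fairness just to stand still.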
For the sake of rhyme, I’ve titled this blog ‘the geek shall inherit’. I am myself using a stereotype, but I want to identify the people who are building AI today. Though I firmly support the idea that anyone can and should be involved in building these systems, that’s not a reflection of our world today. Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not. This is wrong, damaging and needs remedying, but that’s a problem to tackle in a different blog. For now, let’s simply accept that the people building AI tend to be a certain type of person – geeks. And if we are to stereotype a geek, we’re thinking of someone who is highly knowledgeable in an area, but also socially inept, and probably a man.
With more manual forms of AI creation, the problem is at its greatest. Though we may be using a dataset gathered from a more diverse group of people, there is still going to be selection bias in that data, as well as bias introduced directly by the developers if they are tasked with annotating it. Whether intentionally or not, humans will always favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men like its creators more than any other group.
In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join their board and vote on investments for the firm. VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference for algorithms, through their own belief in them.
A step beyond this is deep reinforcement learning, the method employed by Google’s DeepMind in the AlphaZero project. The significant leap from AlphaGo to AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it’s not constrained by human knowledge of the game. A notable counterexample is ‘move 37’ against Lee Sedol, played by AlphaGo Lee before the pure self-play approach was adopted. This move was seen as a stroke of creative brilliance that no human would ever have played, even though that system was trained on human data.
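To make the self-play idea concrete, here is a toy sketch in the same spirit as AlphaGo Zero, though vastly simpler: tabular Q-learning (not a deep network) taught the game of Nim purely by playing against itself. The game rules, hyperparameters and code are my own invention for illustration:

```python
import random

# Toy self-play learner: two copies of the same agent play Nim against each
# other, sharing one value table. Rules (chosen for this sketch): start with
# 7 stones, take 1 or 2 per turn, whoever takes the last stone wins.
N_STONES = 7
ACTIONS = (1, 2)
Q = {}  # Q[(stones, action)] -> estimated value for the player to move

def choose(stones, eps):
    """Epsilon-greedy action for the player to move."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=20000, eps=0.2, alpha=0.1):
    for _ in range(episodes):
        stones, history = N_STONES, []
        while stones > 0:
            action = choose(stones, eps)
            history.append((stones, action))
            stones -= action
        # The player who took the last stone won. Walk back through the
        # game, crediting the winner's moves +1 and the loser's moves -1.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward

random.seed(0)
train()
# The learned policy rediscovers the known winning strategy of leaving the
# opponent a multiple of 3 stones, with no human games to imitate.
```

The point of the sketch is the absence of human data: every position the agent ever sees is generated by its own play. The bias, as argued below, lives instead in the rules of the game and in the learner’s design.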
Humans also use proxies to measure success in these games. An example of this is AlphaZero playing chess. Where humans use a points system on pieces as a proxy for their performance in a game, AlphaZero doesn’t care about its score. It will sacrifice valuable pieces for cheap ones, when moves which appear more beneficial are available, because it cares only about winning. And win it does, if only by a narrow margin.
So where is the bias in this system? Though the system may be training in a simulated world, two areas for bias remain. First, the layers of the artificial neural network are decided upon by those same biased developers. Second, it is simulating a game, with a board and rules, designed by humans. Both Go and chess, for instance, grant a first-move advantage: to black in Go, and to white in chess. Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over the other in life.
The same issue remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and human-maintained roads, and the human-written rules of the highway code. We might shift here from ‘the geek shall inherit’ to ‘the lawyer shall inherit’. Less catchy, but making a system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.
So, what should we do?
AI still has the potential to be incredibly beneficial for all humanity. Terminator scenarios permitting, we should pursue the technology. I would propose tackling this issue on two fronts.
First, fix the diversity problem in the teams building AI. This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines. We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development. We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can.
Second, hold AI to a far higher ethical standard than humans. Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than us and may simply accelerate an unfair system. If we want more equality in the world, a system must treat equality as a more important metric than speed, and must at the very least reduce inequality by as much as the process speed is increased. For example:
- If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
- If we make a system 1,000x faster, it must produce at least 99.9% fewer unequal actions.
Doing this only retains our current baseline. To make progress, we need to go a step further, reducing inequality beyond any increase in speed.
Authored by Ben Gilburt