Diagram showing a cable entering a socket on the side of a human head.
‘Machines can only work from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics.’ Photograph: Alamy Stock Photo

Robots are racist and sexist. Just like the people who created them


Machines learn their prejudices in language. It’s not their fault, but we still need to fix the problem

Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there’s nothing new or complicated about it. “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”

Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. As long ago as 1986, the medical school at St George’s hospital in London was found guilty of racial and sexual discrimination when it automated its admissions process based on data collected in the 1970s. The program looked at the sort of candidates who had been successful in the past, and gave similar people interviews. Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.
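
To see the mechanism in miniature, here is a hypothetical sketch in Python (the numbers are invented for illustration and are not the St George’s data): a model trained on past decisions that favoured one group will, quite mechanically, keep favouring it.

```python
# A minimal, hypothetical sketch of "finding patterns in data".
# The records are invented: equally qualified candidates, but the
# historical decisions favoured one group over the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [exam_score, group], where group=1 is the historically
# favoured group. Labels: 1 = offered an interview in the past.
X = np.array([
    [78, 1], [74, 1], [70, 1], [66, 1],   # favoured group: all interviewed
    [78, 0], [74, 0], [70, 0], [66, 0],   # same scores: mostly rejected
])
y = np.array([1, 1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Two new candidates with identical scores, differing only in group:
print(model.predict_proba([[72, 1], [72, 0]])[:, 1])
# The favoured group gets the higher interview probability, because that
# is the pattern the historical data contains. No malice required.
```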

Automation is a great excuse for assholery – after all, it’s just numbers, and the magic of “big data” can provide plausible deniability for prejudice. Machine learning, as the technologist Maciej Cegłowski observed, can function in this way as “money laundering” for bias.

This is a problem, and it will become a bigger problem unless we take active measures to fix it. We are moving into an era when “smart” machines will have more and more influence on our lives. The moral economy of machines is not subject to oversight in the way that human bureaucracies are. Last year Microsoft created a chatbot, Tay, which could “learn” and develop as it engaged with users on social media. Within hours it had pledged allegiance to Hitler and started repeating “alt-right” slogans – which is what happens when you give Twitter a baby to raise. Less intentional but equally awkward instances of robotic intolerance keep cropping up, as when a Google image search, powered by face-recognition technology “trained” largely on images of white people, returned African-American people among its results for gorillas.

These, however, are only the most egregious examples. Others – ones we might not notice day to day – are less likely to be spotted and fixed. As more of the decisions affecting our lives are handed over to automatons, our experience of technology, from our dealings with banks and businesses to our online social lives, will be shaped in subtler and more insidious ways by the baked-in bigotries of the past – unless we take steps to change that.

Should we be trying to build robots with the capacity for moral judgment? Should technologists be constructing AIs that can make basic assessments of justice and fairness? I have a horrible feeling I’ve seen that movie, and it doesn’t end well for human beings. There are other frightening futures, however, and one of them is a society in which we allow the weary bigotries of the past to be written into the source code of the present.

Machines learn language by gobbling up and digesting huge bodies of the writing available online. This means that the voices that dominated literature and publishing for centuries – the voices of white, western men – are fossilised into the language patterns of the instruments influencing our world today, along with the assumptions those men held about people who were different from them. This doesn’t mean robots are racist: it means people are racist, and we’re raising robots to reflect our own prejudices.
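
To make that concrete, here is a deliberately tiny, invented example – six sentences rather than the billions a real system digests – of how the associations in a body of text become the associations in a model.

```python
# A toy, invented corpus: skewed in the way centuries of published
# writing are skewed. Real language systems ingest vastly more text,
# but they absorb its associations in essentially the same way.
from collections import Counter

corpus = [
    "he is a brilliant doctor",
    "he is an engineer and he writes code",
    "the doctor said he would operate",
    "she is a kind nurse",
    "she is a secretary",
    "the nurse said she would help",
]

# Count how often each occupation shares a sentence with each pronoun.
association = {word: Counter() for word in ("doctor", "nurse")}
for sentence in corpus:
    words = sentence.split()
    for occupation in association:
        if occupation in words:
            for pronoun in ("he", "she"):
                association[occupation][pronoun] += words.count(pronoun)

print(association["doctor"])  # e.g. Counter({'he': 2, 'she': 0})
print(association["nurse"])   # e.g. Counter({'she': 2, 'he': 0})
# A statistical model trained on this text has no concept of gender, only
# the pattern that "doctor" keeps company with "he" and "nurse" with "she".
```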

Human beings, after all, learn our own prejudices in a very similar way. We grow up understanding the world through the language and stories of previous generations. We learn that “men” can mean “all human beings”, but “women” never does – and so we learn that to be female is to be other – to be a subclass of person, not the default. We learn that when our leaders and parents talk about how a person behaves to their “own people”, they sometimes mean “people of the same race” – and so we come to understand that people of a different skin tone to us are not part of that “we”. We are given one of two pronouns in English – he or she – and so we learn that gender is a person’s defining characteristic, and that there are no more than two. This is why those of us who are concerned with fairness and social justice often work at the level of language – and why, when people have their prejudices confronted, they so often complain about “language policing”, as if the use of words could ever be separated from the worlds they create.

Language itself is a pattern for predicting human experience. It does not just describe our world – it shapes it too. The encoded bigotries of machine learning systems give us an opportunity to see how this works in practice. But human beings, unlike machines, have moral faculties – we can rewrite our own patterns of prejudice and privilege, and we should.

Sometimes we fail to be as fair and just as we would like to be – not because we set out to be bigots and bullies, but because we are working from assumptions we have internalised about race, gender and social difference. We learn patterns of behaviour based on bad, outdated information. That doesn’t make us bad people, but nor does it excuse us from responsibility for our behaviour. Algorithms are expected to update their responses based on new and better information, and the moral failing occurs when people refuse to do the same. If a robot can do it, so can we.
