Amazon has scrapped a “sexist” tool that used artificial intelligence to decide the best candidates to hire for jobs. Members of the team working on the system said it effectively taught itself that male candidates were preferable.
The artificial intelligence software was created by a team at Amazon’s Edinburgh office in 2014 as a way to automatically sort through CVs and select the most talented applicants. But the algorithm rapidly taught itself to favour male candidates over female ones, according to members of the team.
They realised it was penalising CVs that included the word “women’s”, as in “women’s chess club captain”, and it also reportedly downgraded graduates of two all-women’s colleges. The problem arose because the system was trained on CVs submitted to the company over a 10-year period, most of which were said to have come from men. Some team members noted that the system rated candidates in much the same way that shoppers rate products on Amazon.
“They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those,” one of the engineers said.
But by 2015 it was clear the system was not rating candidates in a gender-neutral way, because it had been built on data accumulated from CVs submitted to the firm mostly by men.
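The failure mode described above can be illustrated with a minimal, hypothetical sketch. All of the CVs, words and weights below are invented for illustration; this is not Amazon’s system, just a toy word-frequency scorer trained on a gender-skewed set of past hiring decisions, showing how a term like “women’s” can pick up a negative weight purely because it was rare among historical hires.

```python
from collections import Counter
import math

# Hypothetical historical data (invented for illustration): the
# past hires happen to contain no CVs mentioning "women's".
hired = [
    "software engineer chess club captain",
    "software engineer robotics club",
    "data engineer chess club captain",
]
rejected = [
    "software engineer women's chess club captain",
    "data engineer women's robotics club",
]

def word_log_odds(hired, rejected):
    """Laplace-smoothed log-odds of each word appearing in a hired CV."""
    h = Counter(w for cv in hired for w in cv.split())
    r = Counter(w for cv in rejected for w in cv.split())
    vocab = set(h) | set(r)
    nh, nr = sum(h.values()), sum(r.values())
    return {
        w: math.log((h[w] + 1) / (nh + len(vocab)))
           - math.log((r[w] + 1) / (nr + len(vocab)))
        for w in vocab
    }

weights = word_log_odds(hired, rejected)

def score(cv):
    """Sum the learned per-word weights over a candidate's CV."""
    return sum(weights.get(w, 0.0) for w in cv.split())

# "women's" never appears among the hires, so it gets a negative
# weight, and two otherwise identical CVs score differently.
print(score("software engineer chess club captain"))
print(score("software engineer women's chess club captain"))
```

The scorer has no concept of gender; it simply learns that one token correlates with past rejections, which is exactly how skewed training data becomes an apparently “sexist” model.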
Concerns have previously been raised about how trustworthy and consistent algorithms trained on potentially biased data can be. In May last year, a report claimed that a computer program used by an American court for risk assessment was biased against black prisoners.
The program flagged black people as twice as likely as white people to re-offend, a result of the flawed information it was learning from.
As the tech industry creates artificial intelligence, there is the risk that it inserts sexism, racism and other deep-rooted prejudices into code that will go on to make decisions for years to come.