Google trying to rid its algorithm of AI bias
Search giant Google is refining and retraining its machine learning (ML) algorithm to remove bias around words such as gay, lesbian and transgender. The Silicon Valley company recently found that its ML algorithm considered those words toxic, Google artificial intelligence (AI) senior software engineer Ben Hutchinson said.
The reason was that most occurrences of those words in the internet text the model was trained on were used to abuse and harass people, Hutchinson told a data science conference at the University of Sydney, ComputerWorld reported. As a result, he added, the model learnt to treat the word gay as toxic.
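The mechanism Hutchinson describes can be illustrated with a toy example (this is not Google's actual model, and the training sentences below are invented for illustration): if a word-level model scores toxicity by how often each word appears in comments labelled abusive, an identity term that mostly shows up in harassing comments inherits a high toxicity score.

```python
# Toy illustration of training-data bias (not Google's actual model).
# A word's "toxicity" is simply the fraction of its training
# occurrences that came from comments labelled toxic (1) vs not (0).
from collections import Counter

# Hypothetical labelled data, skewed the way the article describes:
# the identity term appears mostly in abusive comments.
training = [
    ("you are gay and disgusting", 1),
    ("gay people should shut up", 1),
    ("what a stupid idiot", 1),
    ("i am gay and proud", 0),
    ("great article, thanks", 0),
    ("lovely weather today", 0),
]

toxic_counts, total_counts = Counter(), Counter()
for text, label in training:
    for word in text.split():
        total_counts[word] += 1
        toxic_counts[word] += label

def toxicity(word):
    """Fraction of a word's training occurrences that were toxic."""
    return toxic_counts[word] / total_counts[word] if total_counts[word] else 0.0

# "gay" appears three times, twice in abusive comments, so the model
# assigns it a high score even though the word itself is neutral.
print(toxicity("gay"))     # 0.666...
print(toxicity("lovely"))  # 0.0
```

The fix Google describes, collecting non-abusive self-descriptions from marginalised groups, amounts to rebalancing rows like these so the per-word statistics stop conflating the term with the abuse it appears in.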
Google has been working on Perspective, an application programming interface (API) that uses ML to detect abuse and harassment online. The Perspective platform was created by Jigsaw and Google’s counter abuse technology team in a collaborative research project called Conversation-AI, in which media organisations such as The New York Times and The Guardian are listed as partners.
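For readers unfamiliar with the service, a Perspective call is a JSON POST to the public `comments:analyze` endpoint. A minimal sketch follows; the request shape matches Perspective's documented format, but the API key is a placeholder and the network call is left commented out, since it needs a valid Google Cloud key.

```python
# Sketch of a Perspective API request. The endpoint and body shape
# follow the public comments:analyze format; API_KEY is a placeholder.
import json

API_KEY = "YOUR_API_KEY"  # placeholder: obtain from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def build_request(comment_text):
    """Build the JSON body Perspective expects: the comment to score
    plus the requested attributes (here just TOXICITY)."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("i am gay and proud")
print(json.dumps(payload, indent=2))

# To actually send it (requires a valid key and network access):
# import urllib.request
# req = urllib.request.Request(
#     URL, data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     scores = json.load(resp)
#     print(scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
```

The response's `summaryScore.value` is the probability-like toxicity score that, per the article, came back high for sentences like this one before retraining.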
Since the finding, Google has gone back to refining its online moderation tools. "The important question is not 'is our model learning patterns from the data correctly?' but rather 'how do we want our systems to impact people?'" Hutchinson told the audience, ComputerWorld said.
This happens regularly in the technology world, where AI and ML algorithms are trained on data that lacks diversity. Last March, Google said it was collecting statements about how marginalised groups describe themselves and their loved ones.
Last week, Google also announced the formation of an external advisory council for the responsible development of AI and other emerging technologies.