Google’s new AI tool aims to stamp out child sexual abuse content
Tech giant Google has said that it is deploying artificial intelligence (AI) technology to contain the spread of child sexual abuse content online.
In a blog post, Google said its cutting-edge AI technology uses deep neural networks for image processing, helping reviewers sort through large volumes of images by prioritising the content most likely to be child sexual abuse material.
The new tool will be made available for free to non-governmental organisations (NGOs) and other industry partners via a new Content Safety API service, offered upon request, the company added. API stands for application programming interface.
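Google has not published implementation details, but the idea of classifier-based triage can be sketched in a few lines. In this hypothetical example, a stand-in score (in a real system, the output of a deep neural network classifier) is used to sort a queue so reviewers see the highest-risk images first; the function names and scores here are illustrative, not Google's actual API.

```python
def prioritise(images, score):
    """Sort images so the highest-scoring (highest-risk) ones are reviewed first."""
    return sorted(images, key=score, reverse=True)

# Toy stand-in for a neural-network classifier's risk scores (0.0 to 1.0).
scores = {"a.jpg": 0.12, "b.jpg": 0.97, "c.jpg": 0.55}

# Reviewers would work through this queue from the front.
queue = prioritise(scores, score=scores.get)
print(queue)  # → ['b.jpg', 'c.jpg', 'a.jpg']
```

Even this simple reordering captures the claimed benefit: reviewer time is spent first on the material most likely to require action.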
“Using the Internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable,” wrote engineering lead Nikola Todorovic and product manager Abhi Chaudhuri.
Google hopes the new AI technology will significantly help service providers, NGOs and other tech firms detect child sexual abuse content more efficiently while reducing human reviewers’ exposure to it.
“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” Todorovic and Chaudhuri said.
The company also claims that this initiative will speed up review processes.
“We’ve seen firsthand that this system can help a reviewer find and take action on 700% more CSAM content over the same time period,” they added.
According to the blog post, Google has been working across the industry and with NGOs to combat online child sexual abuse. Its partners include the WePROTECT Global Alliance, the Britain-based charity the Internet Watch Foundation, and the Technology Coalition.