Microsoft calls for regulation of facial recognition technology to prevent abuse
Microsoft Corp has called for government regulation of facial recognition software, saying such artificial intelligence technology is too risky for tech giants to police themselves.
"Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression," Microsoft president Brad Smith wrote in a blog post. He urged US lawmakers to create an expert commission to assess the best way to regulate the use of facial recognition technology in the United States.
“This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. Facial recognition will require the public and private sectors alike to step up – and to act. The only way to regulate this broad use is for the government to do so,” said Smith.
Smith's blog post comes as human rights groups and privacy experts call for broad bans on facial-recognition AI, which they warn could lead to dangerous misidentifications and more invasive surveillance.
In May, US civil liberties groups called on Amazon.com Inc to stop offering facial recognition services to governments, warning that the software could be used to target immigrants and people of colour unfairly.
More than 40 groups had sent a letter to Amazon CEO Jeff Bezos saying the technology from the company’s cloud computing unit was ripe for abuse.
The letter underscored how new tools for identifying and tracking people could be used to empower surveillance states.
Smith pointed out that the extensive use of face recognition for government surveillance, as seen in China, should subject the technology to greater public scrutiny and oversight. He said that allowing tech companies to set their own rules would be an inadequate substitute for decision-making by the public and its representatives.
Regulators should consider whether police or government use of face recognition should require independent oversight and what legal measures could prevent the AI from being used for racial profiling, Smith said.