Microsoft scientist says firm sacrificed revenue to prevent unethical use of AI

Photo Credit: Reuters
16 Apr, 2018

A top research scientist at Microsoft claims that the tech giant has sacrificed "significant sales" by nixing deals over concerns that potential customers may use artificial intelligence (AI) for unethical purposes.

Eric Horvitz, a longtime technical fellow and director at Microsoft Research, was quoted by GeekWire as saying that the company took the steps as it is serious about the ethical use of AI.

Horvitz made the remarks while addressing an audience at Carnegie Mellon University in the US.

According to a subsequent report by Business Insider, however, Microsoft said it had never cancelled a deal with an existing customer but had only shied away from new deals in cases where it felt a company might not be using AI in the right way.

"Microsoft may decide to forego the pursuit of business proposals for numerous reasons, including the company's commitment to upholding human rights," a Microsoft spokesperson was quoted as saying.

Horvitz had also said that Microsoft had put in place restrictions for existing customers with regard to the use of the company's AI capabilities.

"... various specific limitations were written down in terms of usage, including 'may not use data-driven pattern recognition for use in face recognition or predictions of this type,'" Horvitz was quoted as saying.

The research scientist discussed the issue while talking about AETHER (AI and Ethics in Engineering and Research), the Redmond-headquartered firm's ethical oversight committee.

"Microsoft created the Aether committee to identify, study and recommend policies, procedures, and best practices on questions, challenges, and opportunities coming to the fore on influences of AI on people and society," a Microsoft spokesperson was quoted as saying.

Concerns over the unethical use of AI have been repeatedly raised by Tesla chief Elon Musk, who sparked a debate in 2014 when he said the human race could be doomed if machines became smarter than humans.

He has subsequently expressed fears that AI could start a war and that too much power concentrated in the hands of a few tech giants could pose a threat to people.

Last year, Facebook co-founder Mark Zuckerberg said that naysayers "try to drum up these doomsday scenarios", adding that doing so was "pretty irresponsible."