Alphabet Inc.'s Google has formed an external advisory council for the responsible development of artificial intelligence (AI), the company said in a blog post. The council will advise the company on ethical issues related to AI as well as other emerging technologies.
The eight-member panel, called the Advanced Technology External Advisory Council (ATEAC), will work with Google executives and furnish a report by the end of this year. The group will start its work early next month, and its members include technology experts, digital ethicists, and people with public policy backgrounds, Google senior vice-president for global affairs Kent Walker said in the blog post.
"This group will consider some of Google's most complex challenges that arise under our AI principles for product development, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," Walker added.
The announcement comes after the internet search giant announced its AI principles for product development last year.
"We recognise that responsible development of AI is a broad area with many stakeholders. In addition to consulting with the experts on ATEAC, we will continue to exchange ideas and gather feedback from partners and organisations around the world," Walker said.
The growing impact and influence of AI has pushed large technology companies to examine the ethical implications of the products they build.
Earlier this year, Facebook partnered with the Technical University of Munich to establish an independent research centre to explore the ethical issues involved with AI.
Late last year, Redmond-based tech giant Microsoft released guidelines for responsible conversational AI, covering its voice assistant Cortana, which competes with Amazon's Alexa, Google Assistant and Apple's Siri.