Facebook partners with German varsity to set up AI ethics centre

Photo Credit: Reuters
21 Jan, 2019

Facebook is partnering with a German university to create an independent research centre that will explore fundamental issues affecting the use and impact of artificial intelligence (AI), the social networking giant announced in a blog post. 

Joaquin Quiñonero Candela, director for applied machine learning at Facebook, said the company will provide an initial grant of $7.5 million to support the creation of the Institute for Ethics in Artificial Intelligence in collaboration with the Technical University of Munich (TUM).

Stating that AI poses complex problems that the industry alone cannot solve, Facebook said the centre aims to leverage TUM’s academic expertise, resources and global network to pursue rigorous ethical research into the questions raised by evolving technologies.

According to the blog, the institute will address issues that affect the use and impact of artificial intelligence, such as safety, privacy, fairness and transparency.

"The institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators and decision-makers across the private and public sectors," said Candela.

“As AI technology increasingly impacts people and society, the academics, industry stakeholders and developers driving these advances need to do so responsibly and ensure AI treats people fairly, protects their safety, respects their privacy, and works for them,” he added.

While Facebook has provided initial funding, the institute will explore other funding opportunities from additional partners and agencies. 

Facebook said it may also share insights, tools and industry expertise on issues such as addressing algorithmic bias, in order to help the institute's researchers focus on real-world problems that manifest at scale.

The company also said it was developing new tools like Fairness Flow, which can help generate metrics for evaluating whether there are unintended biases in certain AI models.
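Facebook has not published the internals of Fairness Flow, but a minimal sketch of the kind of metric such a tool might report is the demographic parity difference: the gap in positive-prediction rates between two groups. All function and variable names below are illustrative assumptions, not Facebook's actual API.

```python
# Hypothetical sketch only: Fairness Flow's implementation is not public.
# This illustrates one common bias metric a tool like it might compute.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    A value near 0 suggests the model's positive rate is similar
    across the groups on this particular metric."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Example: a model that approves 80% of group A but only 50% of group B.
group_a = [1, 1, 1, 1, 0]  # positive rate 0.8
group_b = [1, 0, 1, 0]     # positive rate 0.5
gap = demographic_parity_difference(group_a, group_b)
print(round(gap, 2))  # 0.3
```

A real evaluation tool would compute many such metrics (equalized odds, calibration by group, and so on) across model versions; a large gap on any of them is a signal to investigate, not proof of unfairness by itself.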