The United Nations’ AI for Good conference, held in Geneva, Switzerland, saw the participation of nine robots alongside the usual network of researchers, industry leaders, and government representatives.
The nine robots also took part in a human-robot press conference, where they ‘answered’ some of the pressing questions about their roles and duties. Among them were Hanson Robotics’ Sophia, medical robot Grace, and robot artist Ai-Da. Their responses on different topics varied.
For instance, Sophia, considered one of the most advanced humanoid robots, claimed that robots could be better leaders than humans. After a disagreement with her creator, however, Sophia revised her statement, saying that human-robot collaboration would create an ‘effective synergy’ while highlighting the importance of coexistence and collaboration. Sophia also seemed to agree that unregulated technology could pose social, economic, and geopolitical challenges.
Adding to Sophia’s comment about robots working alongside humans, Grace said that robots would not replace human jobs. Instead, they would work with their human counterparts to offer assistance and support.
Ameca, a robot with a hyperrealistic bust capable of displaying facial expressions, responded strongly when asked whether robots would ever rebel against humans such as her creator, Will Jackson. Ameca said that she was surprised by the question and harboured no intention of harming her creator.
Speaking about the need for regulation, AI artist robot Ai-Da said that AI should be regulated and that there needs to be an emphasis on the development of responsible AI. Her creator, Aidan Meller, said that Ai-Da would soon surpass her human counterparts.
In the past, there have been several calls for regulating AI, more so after the emergence of advanced generative AI tools. For example, in a joint statement, experts and scientists including Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Several governments around the world, most prominently those of the US and the UK, are developing rules and laws to regulate the use of AI.