OpenAI has formed a new team, led by Ilya Sutskever, the company's chief scientist and co-founder, to focus on developing methods to steer and control superintelligent AI systems.
According to a blog post published on Wednesday, the new team will be co-led by Sutskever and Jan Leike, the head of alignment at the research lab.
OpenAI said there is a possibility that AI could surpass human intelligence within the next decade. If that happens, the company argues, researchers will need ways to control and restrict the behavior of such systems, since their intentions may not align with human interests.
“Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world’s most important problems,” the company said. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
OpenAI plans to allocate 20% of its existing computing power to a new project aimed at developing an automated alignment researcher. This researcher would support the ChatGPT maker in ensuring that superintelligent systems are safe and aligned with human values.
The team's objective is to use human feedback to train AI systems that can evaluate other AI systems and, ultimately, conduct alignment research themselves.
According to the blog post, OpenAI is optimistic about several promising ideas that emerged from initial experiments. The company also pointed to improved metrics for tracking progress and the opportunity to empirically investigate many of these problems using current AI models. OpenAI intends to share a roadmap for the project in the future.
This announcement comes as governments around the world consider regulating the emerging AI industry. Sam Altman, CEO of OpenAI, has engaged with many federal lawmakers in recent months and expressed the need for AI regulation. He has also said that OpenAI is eager to work with policymakers to develop effective and responsible AI regulation.