OpenAI may quit Europe if AI regulation proves 'challenging': CEO Sam Altman


American artificial intelligence firm OpenAI’s chief executive officer Sam Altman said on Wednesday that the ChatGPT maker might consider leaving Europe if it could not comply with the European Union's upcoming artificial intelligence (AI) regulations.

Notably, the EU is working on what would be the world's first set of rules to govern AI. Under the draft, companies deploying generative AI tools, such as ChatGPT and Bard AI, will have to disclose any copyrighted material used to develop their systems.

The EU’s AI Act, proposed in 2021, would classify AI into three risk categories. Systems that pose an “unacceptable risk”, such as social scoring systems, manipulative social engineering AI, or anything else that violates fundamental rights, fall into the strictest category. Companies would also have to comply with across-the-board standards for transparency and oversight.


Further, in December 2022 the EU drafted new provisions to the law that would impose safety checks on the large language models (LLMs) that power AI chatbots. A committee in the European Parliament approved these changes earlier this month. The European Data Protection Board, meanwhile, said it is monitoring ChatGPT to make sure it complies with the bloc's privacy laws, requirements that could make developing new AI models from scratch more expensive.

However, Altman said that, as the proposed law is currently drafted, both ChatGPT and the large language model GPT-4 could be designated high-risk, which would subject the company to additional compliance requirements.

“If we can comply, we will, and if we can’t, we’ll cease operating… we will try. But there are technical limits to what’s possible,” he said on the sidelines of a panel discussion at University College London, part of an ongoing tour of European countries, as reported by Time on Wednesday.


Italy had already banned ChatGPT in March this year, with the Italian authority pointing to “the absence of a legal basis justifying the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform.”

Various cities in France have also mulled banning its use, as reported by Ouest-France; the city of Montpellier, for instance, wants to ban ChatGPT for municipal staff as a precaution.

The Irish Data Protection Commission, according to the BBC, also said that it would coordinate with all EU data protection authorities in connection with the ban.


The Information Commissioner’s Office, the UK’s independent data regulator, also told the BBC that it would “support” developments in AI but that it was ready to “challenge non-compliance” with data protection laws.

Meanwhile, OpenAI president Greg Brockman said earlier this week that the organisation is seeking ways to gather diverse input on decisions that affect its AI systems. Speaking at the AI Forward conference in San Francisco, Brockman said OpenAI is actively considering democratic decision-making processes to involve a broader range of stakeholders in shaping the future of AI.

Last week, Altman told US lawmakers that he supported regulation, which could even include new safety requirements or a governing agency to test products and ensure regulatory compliance. He called for regulation “between the traditional European approach and the traditional US approach” to harmonise how AI is governed globally.


ChatGPT was unveiled to the public in November 2022, was quickly taken up by millions of users, and has proved effective at answering real-life questions. Its uses range from writing long essays and research reports to coding and even cracking difficult exams, among a broad spectrum of other tasks.

India, too, has emphasised the need to develop a framework for AI. On Tuesday, the government said it had tasked seven working groups, constituted under India’s National Programme for AI (INDIAai), with creating a data governance framework for AI and examining its regulatory aspects. The groups are likely to submit recommendations for a comprehensive framework governing AI within the next two weeks.

Meanwhile, Rajeev Chandrasekhar, union minister of state for electronics and IT, said that given the impending threat of AI-related misinformation, the government will create the necessary checks through the upcoming Digital India Act (DIA), which will strictly deal with misinformation and ‘high-risk AI’ to prevent user harm.


“We are not going to regulate AI but we will create guardrails. There will be no separate legislation but a part of DIA will address threats related to high-risk AI,” the minister said. 

