
Mint DIS 2023 | Important to regulate tech to deal with security concerns of AI: NS Nappinai

The public release of ChatGPT has sparked debates around privacy, security and responsibility in the use of artificial intelligence (AI) tools, but a key question that remains to be settled is who is liable for generative AI tools such as ChatGPT. Highlighting this, NS Nappinai, Supreme Court lawyer and founder of Cyber Saathi, said at the Mint Digital Innovation Summit 2023 on 9 June that while these technologies must be allowed to grow, it will also be important to regulate nascent fields of technology “to deal with specific security concerns regarding AI platforms like ChatGPT.”

To elucidate, Nappinai cited the instance of a lawyer in the United States who faced the consequences after ChatGPT produced citations to legal cases that did not exist, defamation proceedings arising from ChatGPT’s statements about a political figure, and other instances of “how a lot of data that is unsubstantiated is used.”

“The primary issue that arises due to this is: where is the liability?” Nappinai said.

To explain, she highlighted cases such as the autopilot failures at Tesla and Uber, where the humans behind the wheel were penalised for negligence. Similarly, when a robotic process deployed in a hospital failed, the company deploying the service was eventually prosecuted, “since they had modified the tool’s software.”

“In India, while the Digital India Act and the Personal Data Protection Act will both have huge impacts on AI, at present, courts are being asked to apply existing law to these new technologies,” she said.

On applying existing law, Nappinai added, “We must not inhibit the growth of technology and must let innovation thrive, but it is important to regulate these technologies to deal with specific security concerns regarding AI platforms like ChatGPT.”

Nappinai also highlighted other areas of legal concern in the adoption of technology. One such example was in the field of facial recognition: Clearview AI, a US-based surveillance tech company, was forced to settle a lawsuit with the American Civil Liberties Union in May last year. In the lawsuit, the Union alleged that Clearview AI violated Illinois’ Biometric Information Privacy Act by adding images of people to its database without their consent. As part of the settlement, Clearview was permanently banned from granting paid or free access to its facial recognition database to private entities.

Nappinai further underlined a similar instance in India: in Telangana last year, the Hyderabad Police asked civilians to take off their masks so their photos could be taken, despite the pandemic. The Telangana High Court subsequently issued a notice to the state government on a public interest litigation (PIL) challenging the increasing use of facial recognition technology (FRT) in the state.

“It is important to understand to what extent your biometric data can be shared by others, and how public data from a user’s passport or driving licence can be used to blacklist or whitelist an individual,” she added.

The legal and ethical risks that the use of ChatGPT poses, as well as the need to regulate the deployment of similar generative chatbots, are currently being debated across the world.

The European Parliament is considering placing the use of generative AI models, such as ChatGPT, in a “high risk” category. Further, the US National Institute of Standards and Technology (NIST) has issued an “Artificial Intelligence Risk Management Framework” to create awareness about the unique risks posed by AI products, such as their vulnerability to manipulation if the data their algorithms are trained on is tampered with.
