
India needs stronger AI regulations


It all started when I was developing conversational AI and related features for Indian end customers: speech-to-text, topic modelling, digitizing handwritten text, and the like. Any such artificial-intelligence-enabled solution requires a model and a curated dataset to train it. Unlike in the US or European countries, it is challenging in India to find authorized, clean, trustworthy data that mirrors the diversity of the Indian population, or any open-source model pre-trained on data generated by Indian users.

The exercise revealed some interesting facts about the governance, regulation, and use of artificial intelligence in India. Given the country's cultural nuances, diversity, and demographic differences, India needs to enact comprehensive legislation that addresses the ethical dimensions of AI and regulates data usage, model training, and adoption.

In 2018, NITI Aayog crafted a national strategy for building a vibrant AI ecosystem in India, but the strategy is silent on several regulatory requirements. Algorithmic transparency, explainability, liability, accountability, bias, discrimination, and privacy are some of the critical regulatory concerns that must be addressed while implementing a national artificial intelligence strategy. In the absence of clear regulations, many start-ups find it challenging to conduct trials of AI-based solutions and prove their effectiveness. By the time a regulatory framework is agreed, approved, and put in place, the solutions have often become technologically obsolete. Without adequate institutional planning and a regulatory framework, a national AI strategy risks being monolithic in nature. Let's examine some of the nuances of these concerns.


The issue of transparency can be addressed by internal and external auditors while the solution is still in pilot mode, or on a recurring basis during and after implementation. The audience for such audits spans multiple levels, from individual users to senior leaders and even the public at large. On many occasions, end users want to know whether the AI takes decisions autonomously or merely augments the human decision-making process.

A legal requirement around explainability concerns the ability to explain how certain parameters are used to arrive at an outcome. The developer should be able to explain two facets of the solution: how the data is differentiated and processed through various computations, and which factors cause the difference in outcomes.
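As a minimal, hypothetical illustration of the second facet, a developer could report which input factors move a model's outcomes using permutation importance from scikit-learn; the feature names and synthetic data below are purely illustrative and not drawn from any real deployment.

```python
# Minimal sketch: attributing a model's outcomes to input factors.
# Assumes scikit-learn; the loan-approval features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "credit_history_len", "existing_loans", "age"]

# Synthetic stand-in data; a real audit would use the production dataset.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Which factors cause the difference in outcomes?
importances = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, importances.importances_mean):
    print(f"{name:>20}: {score:.3f}")
```

A report of this kind, kept alongside the model's documentation, gives auditors and regulators a concrete artefact to review rather than a black box.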

AI being a nascent field, its full impact on the economy and society is yet to be ascertained, especially in areas involving direct human interaction. The framework should impose obligations of accountability to ensure that solutions are developed in keeping with constitutional standards, e.g., the fundamental rights articulated in Part III of the Constitution.


One of the most frequently voiced concerns is that AI solutions inherently embed gender bias and racial prejudice because of the way data is collected. It is strongly recommended that the data be shaped to minimize algorithmic bias. Moreover, developers should be able to explain whether they have introduced a counter-bias in the algorithm to rectify biased data. It is also crucial that an independent oversight body be able to review the fairness of the solution.
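As one hypothetical example of what such a review could examine, the sketch below compares favourable-outcome rates across groups of a protected attribute (a demographic-parity style check); the group labels and decisions are synthetic placeholders.

```python
# Minimal sketch of a fairness review: compare favourable-outcome rates
# across groups of a protected attribute. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.7, 0.3])
# Stand-in for a model's decisions; a real review would use actual predictions.
decisions = rng.random(1000) < np.where(groups == "group_a", 0.55, 0.40)

rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
for g, rate in rates.items():
    print(f"{g}: favourable-outcome rate = {rate:.1%}")

# A large gap (demographic parity difference) signals that the data or model
# may need rebalancing, or a counter-bias step, before deployment.
print(f"demographic parity difference = {abs(rates['group_a'] - rates['group_b']):.1%}")
```

A simple gap metric like this is only one lens on fairness, but it is the kind of quantifiable evidence an oversight body could reasonably ask for.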

Along with bias, another critical aspect is discrimination: the exclusion of certain groups and the structural inequalities embedded in AI solutions. Depending on how attributes such as race, religion, gender, or neighbourhood are modelled in the algorithms, the solutions can deny services to certain individuals or even circumvent anti-discrimination laws. Given the high cultural, ethnic, and economic diversity across Indian geography, there is a dire need for regulations to control outcomes unfavourable to certain sections of society.

India is a data-dense country with the world's largest youth population and an accelerating rate of digitization. Without a robust privacy regime, it offers easier access to large amounts of data than countries with stringent privacy laws. At the same time, researchers, scientists, and start-ups are searching for genuine datasets, whether federated, encrypted, or watermarked, or for data sandboxes that host large, anonymized datasets in controlled environments. To encourage innovation, the central and state governments should actively pursue a national data accessibility policy.
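As a minimal illustration of the preparation such sandboxes involve, the sketch below pseudonymises direct identifiers with a salted hash before records are shared; the field names and salt handling are hypothetical and no substitute for a full anonymization standard.

```python
# Minimal sketch: pseudonymise direct identifiers before placing records
# in a shared data sandbox. Field names and the salt scheme are hypothetical.
import hashlib

DIRECT_IDENTIFIERS = ("name", "phone", "aadhaar_no")

def pseudonymise(record: dict, salt: str) -> dict:
    out = dict(record)
    for field in DIRECT_IDENTIFIERS:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode("utf-8")).hexdigest()
            out[field] = digest[:16]  # stable token, not reversible without the salt
    return out

print(pseudonymise({"name": "A. Sharma", "phone": "9876543210", "city": "Pune"},
                   salt="per-dataset-secret"))
```

A national data accessibility policy would need to standardise exactly these kinds of steps so that shared datasets remain useful without exposing individuals.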


Indian consumers are aggressively embracing online retail, delivery, communication, and payment solutions that generate enormous amounts of data. In such a context, the regulatory framework cannot ignore data-driven mergers and acquisitions that give a few AI players a dominant position, a monopoly, and a significant competitive advantage, while posing high entry (or even survival) barriers for others. Litigation around this is being fiercely fought in US courts; India can learn from these events and pre-empt such business risks through adequate safeguard mechanisms.

Although the use of AI is still evolving, many developed nations already have some form of regulation governing its adoption. By addressing the above concerns, a regulatory framework in India would boost long-term business opportunities and facilitate a level playing field for emerging and established players in the industry, while safeguarding the constitutional rights of its citizens. This would also be a baby step towards making India "a developed nation" by 2047, a vision the Prime Minister of India recently painted from the Red Fort.

Santosh Kulkarni



Santosh Kulkarni is the Senior Director of CDK Global India.

