Why responsible AI is no longer an option for the enterprise


Artificial Intelligence (AI) is emerging as a powerful technology trend driving business growth. Fuelled by rapid digitisation, companies across sectors such as banking, healthcare, and automotive are increasingly adopting AI to automate processes, reduce errors, and improve decision-making. However, this growing reliance on AI brings risks like algorithmic bias and faulty decisions, which could damage public trust. As greater power brings greater responsibility, experts emphasise that responsible AI principles are essential for the ethical development and deployment of this technology.

Responsible AI is an emerging governance framework integrating ethical, moral, and legal values into AI development and deployment for societal benefit. Global organisations like OpenAI, Google, IBM, Microsoft, and Accenture are actively working on responsible AI, emphasising ethical considerations and positive societal impact through transparency, fairness, and accountability. Governments worldwide are also tightening AI regulations, including the EU's AI Liability Directive and AI Act, and the US Blueprint for an AI Bill of Rights. China has also proposed AI regulations. These changes necessitate business adaptation. In India, the government plans to launch four open-source responsible AI solutions via the AIKosh platform as part of the IndiaAI Mission.

Despite enthusiasm for AI adoption, successful enterprise implementation remains limited. Rapid AI advancements outpace regulation, creating challenges in mitigating potential harms. A recent report by HCLTech and MIT Technology Review Insights revealed that while most business executives recognise the importance of responsible AI, they feel unprepared to implement it, increasing the risk of failure and of regulatory, financial, and reputational damage.

Arun Kumar Parameswaran, EVP & Managing Director - Sales & Distribution (South Asia) at technology company Salesforce, opines that AI is only as powerful as the data that fuels it. “Trusted, unified, and real-time data is the foundation for effective AI, enabling meaningful insights, smarter decisions, and responsible automation. The ability to connect and contextualise data across systems is what ultimately empowers AI to deliver value at scale,” he said, adding that this transformation must also be anchored in trust. “The promise of AI can only be realised through transparency, accountability, and ethical use. That’s why we are embedding trusted AI into the fabric of our platform, in the flow of work, with humans at the centre.”

Debojyoti Dutta, Chief AI Officer of global technology firm Nutanix, also believes that responsible AI requires substantial structural changes to ensure automated systems operate within legal, internal, and ethical boundaries, not merely a superficial effort. 
That said, customers, employees, shareholders, and governments expect responsible AI usage, especially as concerns about brand reputation grow. “The key challenge lies in maximising AI's benefits while effectively safeguarding against its inherent dangers through socially and ethically responsible strategies,” he said.

Experts also point to AI auditing, which assesses and mitigates risks to an algorithm's safety, legality, and ethics, mapping risks in both technical functionality and governance and recommending mitigation measures. Deekshith Marla, Co-Founder of AI startup Arya.ai, believes what matters most is not how fast we innovate, but how responsibly we scale. “Transparency, auditability, and human alignment must define the next chapter of AI.” He believes that factors such as efficacy, robustness, explainability, and privacy play an important role in assessing AI systems. For example, the system should be reliable, safe, secure, and resistant to tampering. It should avoid bias or unfair treatment. Further, it should be trained using data minimisation principles and privacy-enhancing techniques.

Responsible AI ensures efficiency, ethical operation, and prevents reputational and financial damage. Businesses must adopt responsible AI to comply with emerging regulations, maintain competitiveness, and avoid liability. Early adoption and broad stakeholder collaboration are crucial for a holistic approach.

Sachin Panicker, Chief AI Officer at Pune-based digital engineering firm Fulcrum Digital, believes that true appreciation of AI comes with responsibility. “As we embed AI deeper into enterprise and societal frameworks, ethical stewardship, transparency, and human-centric design must remain front and centre. The future of AI lies not in replacing human potential, but in amplifying it,” he said.

Experts agree that responsible AI is no longer an option. The future of AI lies in amplifying human potential, which calls for investment in AI talent, responsible frameworks, and adaptive learning models.
