
CXOs should view responsible AI as more than compliance: HCLTech’s Heather Domin


While Artificial Intelligence (AI) adoption is advancing at an enormous pace in the enterprise, its transformative potential also brings ethical dilemmas, including bias, hallucinations, accountability gaps and privacy concerns. In an interview with TechCircle, Heather Domin, VP and Head of the Office of Responsible AI and Governance at HCLTech, India’s third-largest IT services provider, explains how businesses can work toward a responsible, equitable and beneficial integration of AI tools into everyday solutions, products and operations while building trust and protecting the enterprise. Edited excerpts:
You have been an advocate for ethical and responsible AI for many years now. How do you see enterprise adoption of responsible AI evolving?
Responsible AI is no longer theoretical—it’s becoming integral to how enterprises build and scale technology. From healthcare and finance to entertainment and workplace productivity, AI is driving real-world impact. However, growing awareness of its risks has made responsible AI practices and governance essential.
At HCLTech, we’ve embedded responsible AI across our operations through policies, education, and technical safeguards as we support clients across critical sectors like healthcare, financial services, and public services. With Gartner predicting that 50% of governments will regulate AI by 2026, enterprises are now viewing responsible AI not just as compliance, but as a driver of trust, innovation, and competitive advantage. Proactive governance, bias mitigation, and stakeholder collaboration are key to thriving in the generative AI era.
What are the biggest barriers you see companies facing when trying to implement responsible AI practices alongside technological innovation?

One of the biggest challenges in implementing responsible AI is the gap between intention and execution. A joint study by HCLTech and MIT Technology Review found that while 87% of leaders see responsible AI as critical, 85% feel unprepared to implement it. This disconnect often stems from limited expertise, weak operational risk management, and a lack of governance frameworks. As generative AI advances, traditional compliance alone cannot manage risks like bias, privacy violations, and model misuse. The rise of shadow AI, tools used outside central oversight, further complicates governance. The real need is for integrated systems that make responsible AI scalable, consistent, and sustainable.
What is the role of governance in AI development?
AI governance is now a critical enabler of enterprise innovation. As organisations embed AI into core functions, risks like bias, misuse, and regulatory non-compliance have become real operational concerns. Governance helps ensure AI is explainable, auditable, and aligned with ethical and organisational values—enabling responsible, scalable adoption. For example, at HCLTech, our Responsible AI framework is built around five tenets: accountability, fairness, security, privacy, and transparency. We view governance not as a constraint, but as strategic infrastructure that supports innovation. It’s most effective when integrated early—across data pipelines, model design, and human-AI interaction—especially in high-stakes sectors like healthcare and finance.
How do clear guidelines prepare organisations for generative or agentic AI?
Emerging technologies bring new risks that demand clear, adaptable guardrails. Without strong guidelines, organisations face challenges like bias, IP violations, misinformation, and loss of human oversight. At HCLTech, we treat guidelines as evolving frameworks—not static documents. They define fairness, safety, accountability, and explainability, building trust within and beyond the organisation. In a fast-paced landscape, clear governance doesn’t slow innovation—it enables safe, scalable adoption.
Given the rapid advancements in AI, how should CIOs manage the integration of new technologies while ensuring data protection?

AI is projected to add $15 trillion to the global economy by 2030, yet 70% of companies remain unprepared for responsible AI governance. As AI evolves rapidly, CIOs must balance innovation with integrity by embedding responsible frameworks from the start. At HCLTech, we promote an AI governance model that combines data privacy, transparency, and bias mitigation, enabling organisations to unlock AI’s value securely and ethically.
What key components should CIOs focus on to build a truly responsible AI ecosystem?
To build a responsible AI ecosystem, CIOs must take a holistic approach, embedding governance, transparency, and ethics across the AI lifecycle. This includes explainable systems, privacy-aligned data practices, bias mitigation, and security against emerging threats. We support this with tools such as TrustifAI, the ORA Toolbox, and our Content Safety Module, which help monitor model behaviour in real time. We also offer LLMOps, responsible AI consulting, and privacy-enabled tooling to help clients deploy AI that is secure, scalable, and trusted.
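By way of illustration, the sketch below shows one kind of check a real-time bias-monitoring pipeline might run: a demographic parity gap computed over a model's recent predictions. The metric choice, data, and alert threshold are generic assumptions for the example, not a representation of HCLTech's proprietary tooling.

```python
# Illustrative sketch: a demographic parity check, one common fairness
# metric a monitoring pipeline might compute over live model predictions.
# All names, data, and thresholds below are assumptions for illustration.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if pred == 1 else 0), total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Example: recent model decisions for applicants from two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # alert threshold is an assumption; set per organisational policy
    print(f"Fairness alert: demographic parity gap = {gap:.2f}")
```

In production, a check like this would typically run continuously over sliding windows of model traffic and feed dashboards or alerts, rather than execute as a one-off script.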
How do you develop responsible AI policies in your organisation?
At HCLTech, responsible AI is embedded into how we design, develop, and deploy intelligent systems. Our Office of Responsible AI and Governance drives cross-functional practices that unlock AI’s value for clients and society. We develop and implement responsible AI policies in collaboration with business units, drawing on insights from broader stakeholder engagement, and we further strengthen our capabilities through partnerships and proprietary tools. HCLTech aligns its responsible AI and governance framework with leading industry standards, including ISO/IEC 42001, 23894, and 22989, the NIST AI Risk Management Framework, NASSCOM guidelines, and the OECD AI Principles. These standards serve as a foundation for HCLTech’s own framework, which in turn informs our responsible AI policies.
What does the next 12-18 months look like for HCLTech when it comes to AI?

AI is constantly evolving, and even as innovation happens at a rapid pace, governance plays an increasingly critical role. At HCLTech, we believe that laying a foundation of responsible AI and governance will set the stage for years to come, allowing us to harness the power of AI while mitigating its risks. HCLTech will continue to help clients adopt AI responsibly and to train its employees as innovation and governance requirements evolve. Through strategic partnerships and responsible implementation, we are well positioned to maintain leadership in the AI market.
How do you see the role of AI evolving in the future, and what implications will this have for businesses and society?
AI is evolving from a tool for automation to a strategic driver of innovation and societal impact. It’s reshaping customer experiences, supply chains, R&D, and decision-making across industries. To stay ahead, businesses must become AI-native, rethinking operations, talent, and governance. Society must focus on responsible AI, equitable access, and thoughtful regulation. At HCLTech, we embed trust, transparency, and purpose into every AI solution—enabling innovation that benefits both business and society.