
How responsible AI is driving compliance, ethicality and human-centricity


The attack on a U.K. engineering firm early last year was not your garden-variety scam: it combined an AI-generated deepfake video with psychological manipulation to trick an employee into transferring $25 million to the perpetrators. It is far from an isolated case.

As cybercrime exploits AI to evolve in unimaginable ways, it is little wonder that few people are willing to trust the technology completely. A large global study on trust in and attitudes towards artificial intelligence found that while two-thirds (66 per cent) of the 48,000 respondents from 47 countries used AI, fewer than half (46 per cent) trusted it.

Trust issues pose a significant barrier to AI adoption among enterprises, which are concerned not only about cyberattacks – both AI's role in enabling them and AI systems' vulnerability to them – but also about algorithmic opacity, unreliability and bias. As long as these issues persist, organisations will find it hard to scale their AI implementations successfully.


The solution lies in Responsible AI (RAI), a framework that ensures AI development and deployment conform to principles of trust, ethics, regulatory compliance, privacy, security and human-centricity.

Data and AI governance leads to regulatory compliance and trustworthy innovation

Data integrity is paramount for achieving high-quality algorithmic outcomes. A robust data and AI governance framework establishes the policies and procedures for data collection, storage, transformation, access and disposal to ensure compliance with applicable regulations. Data protection measures – including information security infrastructure, identity and access management tools, and a privacy-by-design approach – safeguard sensitive data from accidental exposure or wilful breach. Regular audits of data assets, processes, policies and controls open a window into data use, enabling enterprises to align data management practices with compliance requirements and build a security culture. Last but not least, good data governance creates a foundation for trustworthy AI innovation.
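To make the privacy-by-design idea concrete, here is a minimal sketch in Python that pseudonymises sensitive fields before a record is passed downstream and logs each transformation for audit. The field names, hashing scheme and logging setup are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_governance.audit")

# Which fields count as sensitive is an assumption for this sketch;
# a real governance framework would define this in policy.
SENSITIVE_FIELDS = {"email", "national_id"}

def pseudonymise(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Replace sensitive values with salted hashes before downstream use."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            masked[key] = f"pseudo_{digest[:12]}"
        else:
            masked[key] = value
    # Record every transformation so audits can trace how data was used.
    audit_log.info("pseudonymised fields=%s at=%s",
                   sorted(SENSITIVE_FIELDS & record.keys()),
                   datetime.now(timezone.utc).isoformat())
    return masked

print(pseudonymise({"email": "a@example.com", "national_id": "X123", "age": 34}))
```

The point of the audit log line is the governance principle above: every handling of sensitive data leaves a trace that compliance reviews can inspect later.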

Responsible development leads to trustworthy and ethical applications

Responsible AI emphasises fairness in AI models to avoid bias, and expects organisations to be accountable for their AI deployments and the consequences. It offers insight into data inputs, algorithmic models and decision-making considerations to address the lack of transparency and explainability in AI systems. RAI also mitigates ethical transgressions by inculcating good data practices, such as using personal information only with the owner's consent, establishing the origin and ownership of data before using it, clearly declaring AI-generated content, and avoiding subjectivity and bias when writing prompts.

To achieve accurate, reliable and non-discriminatory algorithmic outputs, organisations should ensure that their AI training datasets are clean, factual, consistent, complete and unbiased. Employers also need to be transparent about how the organisation uses AI in employee-related matters – to screen resumes, collect performance data for evaluations, and so on – to earn employees' trust and put them at ease about using AI tools in their daily work.
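As a concrete illustration of checking training data for bias, the minimal sketch below compares the rate of favourable outcomes across groups in a toy dataset and flags disparities using the common "four-fifths" rule of thumb. The group names, labels and 0.8 threshold are assumptions for illustration only.

```python
from collections import defaultdict

# Toy records of (group, label); 1 denotes a favourable outcome.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

def positive_rates(rows):
    """Share of favourable outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(records)
# "Four-fifths" rule of thumb: flag if any group's favourable-outcome
# rate falls below 80 per cent of the best-treated group's rate.
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
print("flag for review" if ratio < 0.8 else "within threshold")
```

A check like this is a screening step, not a verdict: a flagged disparity should trigger human investigation of the data and the model, in line with the accountability principle above.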

Involve humans to produce human-centric outcomes


AI is tightly coupled with humans: it learns from human-generated data, mimics human intelligence, interacts with people and exists to serve them. Hence, AI development should factor in the behaviour of humans and societies to create human-centric outcomes. Here, enterprises can tap the knowledge of social scientists or recruit experts – for example, by convening an AI ethics board – to assess models for bias, inaccuracies and other ethical issues. Top American universities run programs in which data scientists and social scientists collaborate on tools for identifying and reducing bias in AI algorithms.

In the United Kingdom, an initiative is bringing philosophers, social scientists, data scientists, designers, policymakers and industry executives together on an interdisciplinary platform to generate ideas for aligning AI development with social values and customs. Organisations should also involve stakeholders from different functions in formulating AI policies and decisions, which builds awareness of responsible AI and trust among employees. This includes putting humans in the loop to oversee AI and ensure that its outcomes align with human values such as fairness and diversity.
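As a rough sketch of what putting humans in the loop can look like in code, the example below auto-applies only high-confidence model decisions and escalates the rest to a human reviewer. The confidence threshold and the review-queue stand-in are illustrative assumptions; a production system would route escalations into real case-management tooling.

```python
REVIEW_THRESHOLD = 0.85  # illustrative; tune per use case and risk appetite

def queue_for_review(case_id: str, prediction: str, confidence: float) -> str:
    # Stand-in for creating a task in a human review system.
    return f"{case_id}: escalated to human review ({confidence:.2f})"

def decide(case_id: str, prediction: str, confidence: float) -> str:
    """Auto-apply confident decisions; escalate everything else to a person."""
    if confidence >= REVIEW_THRESHOLD:
        return f"{case_id}: auto-applied '{prediction}' ({confidence:.2f})"
    return queue_for_review(case_id, prediction, confidence)

print(decide("loan-001", "approve", 0.93))
print(decide("loan-002", "approve", 0.61))
```

The design choice here is that the system defaults to human judgment whenever the model is unsure, keeping people accountable for the decisions that matter most.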

To summarise, successful AI scaling requires organisations to embrace Responsible AI practices that foster trust, uphold ethical standards, meet regulatory requirements, and integrate human oversight into their AI initiatives.

Gaurav Bhandari

Gaurav Bhandari is AVP, Senior Principal - Business Consulting, Data Analytics and AI, Infosys

