
Lack of adequate policies increases impact of AI on human rights: UNDP


Companies must adopt proactive and more responsible policies to mitigate biases in artificial intelligence (AI)-based decision making within their organizations, according to a study by the Aapti Institute and the United Nations Development Programme (UNDP) set to be released tomorrow.

The study further noted that adapting policies and regulations to the increasing digitization of businesses could help companies address the impact of AI on human rights, a growing concern as more enterprises automate their services.

According to the study, companies use the guise of algorithm-based decision making to “obfuscate deliberate company policies” rather than working to establish responsible and explainable AI models. It concludes that a lack of conducive company policies and regulations can exacerbate the impact of AI and automation on the human rights of workers.


An explainable AI model is one in which the decisions an algorithm makes, and the logic behind them, can be explained, making it easier to understand underlying biases and to retrain algorithms accordingly.
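By way of illustration only (this sketch is not from the study, and the feature names and data are invented), a simple interpretable model such as a logistic regression shows what this kind of inspection can look like: its learned weights can be read directly, so a reviewer can check whether a sensitive attribute is driving automated decisions about workers.

```python
# A minimal, hypothetical sketch of inspecting an interpretable model for bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical worker features: hours logged, task rating, and a sensitive attribute.
hours = rng.normal(40, 5, n)
rating = rng.normal(3.5, 0.5, n)
gender = rng.integers(0, 2, n)

# Simulated past decisions that (unintentionally) correlate with the sensitive attribute.
approved = ((0.05 * hours + 0.8 * rating - 0.6 * gender
             + rng.normal(0, 0.5, n)) > 4.5).astype(int)

X = np.column_stack([hours, rating, gender])
model = LogisticRegression().fit(X, approved)

# Because the model is linear, its coefficients can be read directly:
# a large weight on the sensitive attribute is a red flag that decisions are biased.
for name, coef in zip(["hours", "rating", "gender"], model.coef_[0]):
    print(f"{name:>7}: {coef:+.2f}")
```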

This question of algorithmic bias, according to the study, has the greatest impact on financial services, healthcare, retail and the gig economy. Across these sectors, the workers most affected come from vulnerable and marginalized sections of the population, for whom direct access to technology is limited, restricting their ability to seek recourse when they believe an automated decision made by their employer has wronged them.

Dennis Curry, deputy resident representative of UNDP India, said that while AI has helped in “improving” lives by speeding up diagnosis times in healthcare and improving convenience and accessibility for disabled individuals through smart homes, it is important to build “inclusive and resilient digital ecosystems that are rights-based”.


This, Curry said, would help improve inclusion and reduce bias in AI systems.

Finally, the report added that existing biases in traditional AI models, when deployed by companies, could have an even greater impact on women and on individuals from economically weaker strata of society. It cites automated work-hour computation systems implemented without regard for contextual human rights, and automated, “predatory” data collection systems, as key examples of unregulated use of AI.

