The growing power disparity between those who benefit from artificial intelligence (AI) and those who are harmed by the technology is a top challenge facing the internet, according to the 2022 Internet Health Report. The report states that AI and automation can be powerful tools for the influential – for example, the tech titans who profit from them – but at the same time can be harmful to vulnerable groups and societies.
The report, compiled by researchers at Mozilla, the non-profit that builds the Firefox web browser and advocates for privacy on the web, said: “In real life, over and over, the harms of AI disproportionately affect people who are not advantaged by global systems of power.”
“Amid the global rush to automate, we see grave dangers of discrimination and surveillance. We see an absence of transparency and accountability, and an overreliance on automation for decisions of huge consequence,” the Mozilla researchers said.
While the report noted that systems trained on vast swaths of complex real-world data are revolutionising computing tasks that were previously difficult or impossible – including recognising speech, spotting financial fraud, and piloting self-driving cars – it also found no shortage of challenges in the AI universe.
For example, machine learning models often reproduce racist and sexist stereotypes because of bias in the data they draw from internet forums, popular culture and photo archives.
The non-profit believes that big companies are not transparent about how they use our personal data in the algorithms that recommend social media posts, products and purchases, among other things.
Further, recommendation systems can be manipulated to show propaganda or other harmful content. In a Mozilla study of YouTube, algorithmic recommendations were responsible for showing people 71% of the videos they said they regretted watching.
Companies like Google, Amazon and Facebook have major programs for dealing with issues like AI bias, yet biases still find their way into their algorithms in subtle ways. For example, The New York Times pointed to the Google Photos controversy of 2015, in which Google apologised after photos of Black people were labelled as gorillas. To address the problem, Google simply eliminated the labels for gorillas, chimps, and monkeys.
Likewise, during the 2020 mass protests over George Floyd’s killing in the US, Amazon continued to profit from its facial recognition software, selling it to police departments even though research has shown that facial recognition programs falsely identify people of colour at higher rates than white people, and that police use of the technology could result in unjust arrests that disproportionately affect Black people.
Facebook, too, has featured clips of Black men in disputes with white civilians and police officers.
But Mozilla researchers argue that although Big Tech funds a great deal of academic research, including papers focused on AI’s social problems and risks, the companies do not walk the walk.
“The centralisation of influence and control over AI doesn’t work to the advantage of the majority of people,” Solana Larsen, editor of Mozilla’s Internet Health Report, said in the report. The aim, she said, is to “strengthen technology ecosystems beyond the realm of big tech and venture capital startups if we want to unlock the full potential of trustworthy AI.”
Mozilla suggested that a “new set of regulations can help set guardrails for innovation that diminish harm and enforce data privacy, user rights, and more.”