Facebook expands fact-checking programme to include images and videos
Social networking giant Facebook said in a statement that it is expanding its fact-checking programme, aimed at fighting fake news and misinformation, to include photos and videos.
Algorithms to spot fake images and videos have been deployed in 17 countries, including the United States, Facebook said, adding that the programs would send flagged content to outside human fact-checkers for further review.
The fact-checking algorithms have been in test mode since March at news agency AFP, current affairs TV network France 24, and others. Now, with the full-scale deployment of the programs, all 27 of Facebook’s outside human fact-checkers will receive potentially fake content for verification. The human fact-checkers can also sniff out fake content on their own, without the help of algorithms. Moreover, photos and videos can be flagged by Facebook users for review.
Doctored photos and striking visuals have been posted on social media by Russian agents in a bid to influence the 2016 US presidential election and other elections around the world.
Facebook shared a couple of examples of human fact-checkers spotting false content. In one, the face of a Mexican politician had been superimposed on a US green card to falsely suggest he was a US citizen.
In another example, a photo caption that called India’s prime minister the “seventh most corrupted” in the world was identified as fake by a news outlet in the South Asian country after the source of the claim came to light: BBC News Hub, which is not part of the BBC.
Facebook’s deployment of algorithms is the latest in a series of steps the social media giant has taken to fight fake news and misinformation on its site. Among those steps is a reputation rating system that scores how reliably users report false news. Under this rating system, if a user reports something as fake news but Facebook’s fact-checkers find it to be true, that user’s reputation score takes a hit. Conversely, if the fact-checking team finds that the user correctly reported the news as fake, that user gets a rating boost.
Facebook will not show the score to users.
In a blog post published last month, Facebook said it had set up a dedicated team of engineers, product developers and policy makers to deal with hate speech and misinformation in Myanmar, where such content had fuelled ethnic violence against the Rohingya population earlier this year. The company said it has invested heavily in artificial intelligence to flag posts that break its rules.
Recent reports said that Facebook founder Mark Zuckerberg has taken personal responsibility for maintaining the integrity of India’s general elections, due to take place in 2019.