Social networking giant Facebook Inc. is rating users on trust, assigning them reputation scores in a bid to combat fake news, reported The Washington Post.
Under this previously unreported rating system, if a user reports something as fake news but Facebook's fact-checkers find it to be true, that user's reputation score takes a hit. Conversely, if the fact-checking team finds that the user correctly reported the news as fake, the user gets a rating boost.
Facebook will not show the score to users.
Tessa Lyons, the Facebook product manager in charge of fighting misinformation, told The Washington Post that the social network has developed the rating system over the past year.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons added. She also mentioned that a user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, nor is there a single unified reputation score that users are assigned.
The score is one of the new behavioural clues that Facebook now takes into account as it seeks to assess risk.
In a blog post published last week, Facebook said it had set up a dedicated team of engineers, product developers and policy experts to deal with hate speech and misinformation in Myanmar, where the platform had fuelled ethnic violence against the Rohingya population earlier this year. In the post, a report on Facebook's investment in the country, product manager Sara Su said the company has invested heavily in artificial intelligence to flag posts that break its rules.
Recent reports say that Facebook founder Mark Zuckerberg has taken personal responsibility for maintaining the integrity of India's general elections, due to take place in 2019.