Social media giant Facebook has changed its policy guidelines to introduce checks on deepfakes and other manipulated media across its platform, the company said on Monday.
Content will be removed if it has been edited or synthesised, beyond adjustments for clarity or quality, in ways that aren’t apparent to the average person, Monika Bickert, vice president of global policy management at the company, said in a blog post.
The policy, however, does not apply to content that is parody or satire.
For the new rule to apply, the content must also be the product of artificial intelligence or machine learning that merges, replaces or superimposes material onto a video, making it appear authentic.
“Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence (AI) or “deep learning” techniques to create videos that distort reality – usually called “deepfakes.” While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Bickert added.
The development comes as the United States, one of the world’s largest democracies, enters presidential election mode ahead of the vote slated for November 2020. In the past, deepfakes have been used to misrepresent well-known politicians in videos.
“A recent example highlighting the danger of manipulated videos is a video of Speaker of the House Nancy Pelosi (D-Calif.) that made it appear as if she were drunk and slurring her words. It got more than 2.5 million views on Facebook, and while it was relatively easy to tell that the video had been altered, it went viral anyway, with an assist from President Trump, who tweeted a clip that first aired on Fox News,” Danielle Citron, a law professor at Boston University, said in an online university publication in September. Citron has also advised Facebook on its new policy.
Deepfakes are hard to detect and harder to debunk: highly realistic videos and audio clips that make people appear to be saying and doing things they never said or did. They are enabled by rapidly advancing machine learning and are distributed at lightning speed through social media, Citron added.
“Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” Drew Hammill, deputy chief of staff to Pelosi tweeted on Tuesday, in response to the development.
Facebook’s community standards, the broad set of guidelines governing content moderation on the platform, are enforced largely with the help of independent third-party fact-checkers, comprising over 50 partners worldwide fact-checking in over 40 languages.
The Menlo Park, California-headquartered company says it “significantly” reduces the distribution of media found to be false, and warns people who see or share such content that it has been debunked.
“…If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context,” Bickert said in the post.
Facebook has been mired in its own set of crises since 2018, when investigations by The New York Times and others into political consultancy Cambridge Analytica examined the harvesting of Facebook user data and its role in the 2016 US presidential election.
The reports raised concerns about the political neutrality of the world’s largest social media company.