Facebook updated its policies on deepfakes and manipulated media following conversations with more than 50 global experts with backgrounds in technology, policy, media, law, civics and academia.
However, the video that helped shine the spotlight on deepfakes would not be removed, even under the social network’s new policies: a doctored video of House Speaker Nancy Pelosi (D-Calif.) that went viral last May, in which her speech was slowed down to make her appear drunk.
Vice president of global policy management Monika Bickert outlined the new policies in a Newsroom post late Monday.
She wrote, “Our approach has several components, from investigating artificial intelligence-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to expose people behind these efforts.”
Bickert said manipulated media would be removed from Facebook’s platform if it:
- Has been edited or synthesized, beyond adjustments for clarity or quality, in ways that aren’t apparent to an average person and would likely mislead viewers into thinking that someone said words they did not actually say.
- Appears to be authentic due to the use of AI or machine learning to merge, replace or superimpose content.
The new policy does not cover parody, satire or videos that have been edited solely to remove portions or change the order of words, and Bickert said audio, images or videos that violate Facebook’s community standards will be removed.
She added that content that doesn’t meet the standards for removal is still eligible for review by the social network’s third-party fact checkers. If that content is determined to be false or partly false, organic posts will face reduced distribution in News Feed, while ads will be removed.
“This approach is critical to our strategy, and one we heard specifically from our conversations with experts,” Bickert wrote. “If we simply removed all manipulated videos flagged by fact checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.”
One of the experts who has worked with Facebook on its policies on deepfakes—Hany Farid, a digital forensics expert at the University of California, Berkeley—felt that the company’s policy update came up short, telling Tony Romm, Drew Harwell and Isaac Stanley-Becker of The Washington Post in an email, “These misleading videos were created using low-tech methods and did not rely on AI-based techniques, but they were at least as misleading as a deepfake video of a leader purporting to say something that they didn’t. Why focus only on deepfakes and not the broader issue of intentionally misleading videos?”
Romm, Harwell and Stanley-Becker pointed to the Pelosi video, saying that it was altered using simple video-editing software, and adding that researchers studying disinformation refer to the techniques used as “cheapfakes” or “shallowfakes.”
Bill Russo, spokesman for former vice president and current Democratic presidential candidate Joe Biden, agreed, telling the three Post reporters, “Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.”
The issue is not exclusive to the U.S.: Alex Hern of The Guardian noted a similar incident during the U.K. general election campaign last November, in which the Conservative Party allegedly edited a video of Labour Party Member of Parliament Keir Starmer to make it appear that he didn’t have an answer to a question about Brexit.
Bickert concluded, “As these partnerships and our own insights evolve, so, too, will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.”