Facebook Live Violations: One Strike and You’re Out

The social network is partnering with three universities to better detect manipulated media

Chief operating officer Sheryl Sandberg promised tighter restrictions on Facebook Live in late March, in response to the spread of videos of the terrorist attack in Christchurch, New Zealand, earlier that month. The social network took steps to make good on that promise this week.

Vice president of integrity Guy Rosen said in a Newsroom post that the social network’s livestreaming feature will now operate under a one-strike policy, adding that the company is teaming up with three universities to study how to better identify media that is manipulated in order to avoid detection.

Rosen said a one-strike policy now applies to a broader range of offenses specific to Facebook Live, and anyone who violates the most serious policies will be banned from using the feature for a set period of time, such as 30 days after a first offense.

Sharing a link to a statement from a terrorist group with no context is one example of a violation that would trigger action on Facebook’s part.

Rosen said the social network will extend similar restrictions to other areas of its platform over the coming weeks, starting with preventing violators from creating ads.

Prior to this week’s update, posts violating Facebook’s community standards, including Facebook Live videos, were removed, and accounts that repeatedly posted such content were blocked for a certain time period.

Accounts that engaged in repeated low-level violations or a single egregious violation—such as using terror propaganda in a profile picture or sharing images of child exploitation—were banned altogether.

Rosen wrote, “We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook. Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.”

On that note, Facebook is teaming up with the University of Maryland, Cornell University and the University of California, Berkeley, to research new ways to detect manipulated media (images, videos and audio) and to distinguish between “unwitting posters and adversaries who intentionally manipulate videos and photographs.”

Rosen wrote, “One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People—not always intentionally—shared edited versions of the video, which made it hard for our systems to detect. Although we deployed a number of techniques to eventually find these variants, including video- and audio-matching technology, we realized that this is an area where we need to invest in further research.”
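Facebook has not disclosed how its matching technology works, but a common building block for finding near-duplicate frames is perceptual hashing. The Python sketch below shows difference hashing (dHash), one widely used technique of this kind; the file names and the match threshold are illustrative assumptions, not details from Facebook.

```python
# Illustrative sketch: difference hashing (dHash), a common perceptual-
# hashing technique for matching re-encoded or lightly edited variants
# of an image or video frame. Not Facebook's actual method.
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Compute a dHash: grayscale, shrink, compare adjacent pixels."""
    # Resize to (hash_size + 1) x hash_size so each row yields
    # hash_size left/right comparisons.
    small = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; small distances suggest near-duplicates."""
    return bin(a ^ b).count("1")


# Hypothetical usage: frames whose hashes differ by only a few bits are
# likely cropped, recompressed, or lightly edited copies of the same scene.
# original = dhash(Image.open("frame_original.png"))
# variant = dhash(Image.open("frame_variant.png"))
# is_match = hamming_distance(original, variant) <= 10  # threshold assumed
```

Because the hash is derived from coarse brightness gradients rather than exact pixel values, it tolerates the kind of re-encoding and minor edits Rosen describes, which is why perceptual approaches are often paired with exact matching in content-moderation pipelines.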

He added that the company’s work with the universities and other partners was critical in its efforts against deepfakes—videos that are intentionally manipulated to depict events that did not occur—and organized bad actors.

Rosen concluded, “These are complex issues, and our adversaries continue to change tactics. We know that it is only by remaining vigilant and working with experts, other companies, governments and civil society around the world that we will be able to keep people safe. We look forward to continuing our work together.”
