A Breakdown of All the Offensive Content Facebook Removed in the First Quarter of 2018

Platform has never before disclosed details of this nature

Facebook says it removed millions of posts related to violence, hate speech and nudity in the first quarter of 2018. Facebook

In its latest attempt at transparency, Facebook is disclosing—for the first time—exactly how much offensive content it finds and removes on the platform.

Today, the social network revealed how many millions of posts involving nudity, violence, hate speech and more it removed from January through March of this year. The move comes at a time when the company is increasing its efforts to show users, advertisers and lawmakers that it can regulate its own services as it continues to face scrutiny in both the U.S. and Europe.

And while it’s no surprise that Facebook would find a lot of unsavory content created and uploaded by 2 billion-plus users around the world, the numbers are staggering to look at. Here’s a look at what Facebook removed during the first quarter of 2018:

  • 3.5 million pieces of violent content
  • 21 million pieces of adult nudity and sexual activity
  • 837 million pieces of spam
  • 2.5 million pieces of hate speech
  • 583 million fake accounts

In a blog post published today detailing the content, Guy Rosen, Facebook’s vp of product management, said the company’s first Community Standards Enforcement Preliminary Report is intended as a way for people outside the company to “judge our performance” for themselves.

“As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse,” Rosen wrote. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”

Facebook’s track record of identifying and removing offensive content before it’s reported by users varies based on the type of content. For example, the company said it “found and flagged” nearly 100 percent of spam before it was reported. However, its technology caught 86 percent of graphic violence, 96 percent of nudity and just 38 percent of hate speech. (According to Rosen, for every 10,000 pieces of content viewed, between seven and nine views violated Facebook’s policies banning adult nudity and pornography.)
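To make the arithmetic behind those metrics concrete, here is a minimal sketch in Python. The function names and structure are hypothetical, not Facebook's actual tooling; the example figures are simply the ones the report cites.

```python
# Illustrative arithmetic behind the two metrics in the report:
# the proactive-flag rate and the views-per-10,000 prevalence estimate.

def proactive_rate(flagged_before_report: int, total_actioned: int) -> float:
    """Share of actioned content the system found before any user report."""
    return flagged_before_report / total_actioned

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Estimated violating views per 10,000 content views."""
    return violating_views / total_views * 10_000

# If 86 of every 100 actioned pieces of graphic violence were
# machine-flagged first, the proactive rate is 0.86 (the 86 percent above).
print(proactive_rate(86, 100))        # 0.86

# Eight violating views out of 10,000 total views falls in the
# "between seven and nine" range Rosen cites for adult nudity.
print(prevalence_per_10k(8, 10_000))  # 8.0
```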

Despite Facebook’s efforts, the total amount of banned content appears to be increasing. The company said the number of posts displaying graphic violence it acted on grew by 2.2 million quarter over quarter. However, the percentage of problematic content Facebook found before users flagged it also improved, rising from 72 percent in the fourth quarter of 2017 (October through December) to about 86 percent in the first quarter of 2018.

The admission comes just a month after Facebook CEO Mark Zuckerberg was confronted by Congress in two days of hearings challenging the company’s ability to keep its own platform clean of content that might be harmful to users. At one point during the 10 hours of testimony, U.S. Rep. David McKinley, R-W.Va., showed Zuckerberg screenshots of opioid ads recently found on Facebook, even though such ads are banned under Facebook’s policies.

“You’ve said before you were going to take down those ads, but you didn’t do it,” McKinley said. “We’ve got statement after statement about things—you’re going to take those down within days, and they haven’t gone down.”


Marty Swant (@martyswant, martin.swant@adweek.com) is a former technology staff writer for Adweek.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}