These Facebook Documents Show Why Content Moderation Is So Challenging

Moderators often have 'just 10 seconds' to make decisions


Have you ever wondered what guidelines Facebook uses to decide how to moderate content? Wonder no more: The Guardian obtained “more than 100 internal training manuals, spreadsheets and flowcharts” and shared its findings.

Nick Hopkins of The Guardian reported that moderators are “overwhelmed” by the volume of posts they must process, often leaving them with “just 10 seconds” to make decisions, and that moderators found the guidelines on sexual content to be the most inconsistent, complex and confusing.

Contributing to the overwhelming workload, Hopkins reported, Facebook reviews more than 6.5 million reports of potentially fake accounts each week, which it calls FNRP (fake, not real person) reports.

Hopkins spoke with Monika Bickert, Facebook’s head of global policy management, who cited the social network’s monthly user base of nearly 2 billion and said deciding which types of content to allow is difficult, as content may violate Facebook’s policies in some contexts but not in others. She told Hopkins:

We have a really diverse global community, and people are going to have very different ideas about what is OK to share. No matter where you draw the line, there are always going to be some grey areas. For instance, the line between satire and humor and inappropriate content is sometimes very grey. It is very difficult to decide whether some things belong on the site or not.

We feel responsible to our community to keep them safe and we feel very accountable. It’s absolutely our responsibility to keep on top of it. It’s a company commitment. We will continue to invest in proactively keeping the site safe, but we also want to empower people to report to us any content that breaches our standards.

(Facebook is) a new kind of company. It’s not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used. We don’t write the news that people read on the platform.

Hopkins shared the following examples from Facebook’s guidelines, complete with links to the documents obtained by The Guardian, where applicable:

  • Remarks such as, “Someone shoot Trump,” should be deleted, because as a head of state, he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat,” or, “fuck off and die,” because they are not regarded as credible threats.

  • All “handmade” art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not.
  • Videos of abortions are allowed, as long as there is no nudity.
  • Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress.”
  • Anyone with more than 100,000 followers on a social media platform is designated as a public figure, which denies them the full protections given to private individuals.

On threats of violence, Hopkins reported that remarks such as, “Little girl needs to keep to herself before daddy breaks her face,” and, “I hope someone kills you,” are permitted because Facebook regards them as generic or not credible. He added that the social network acknowledges that “people use violent language to express frustration online” and feel “safe to do so,” with the company’s documents stating:

They feel that the issue won’t come back to them and they feel indifferent toward the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.
