These Facebook Documents Show Why Content Moderation Is So Challenging

Moderators often have 'just 10 seconds' to make decisions


Have you ever wondered what guidelines Facebook uses to decide how to moderate content? Wonder no more: The Guardian obtained “more than 100 internal training manuals, spreadsheets and flowcharts” and shared its findings.

Nick Hopkins of The Guardian reported that moderators are “overwhelmed” by the volume of posts they are tasked with processing, often leaving them with “just 10 seconds” to make decisions. He added that moderators found the guidelines on sexual content to be the most inconsistent, complex and confusing.

Contributing to the overwhelming workload, Hopkins reported that Facebook reviews more than 6.5 million reports of potentially fake accounts each week, which it calls FNRP (fake, not real person) reports.

Hopkins spoke with Facebook head of global policy management Monika Bickert, who cited Facebook’s monthly user base of nearly 2 billion and said determining what types of content to allow and not allow is difficult, as content may violate the social network’s policies in some contexts, but not in others. She told Hopkins:

We have a really diverse global community, and people are going to have very different ideas about what is OK to share. No matter where you draw the line, there are always going to be some grey areas. For instance, the line between satire and humor and inappropriate content is sometimes very grey. It is very difficult to decide whether some things belong on the site or not.

We feel responsible to our community to keep them safe and we feel very accountable. It’s absolutely our responsibility to keep on top of it. It’s a company commitment. We will continue to invest in proactively keeping the site safe, but we also want to empower people to report to us any content that breaches our standards.

[Facebook is] a new kind of company. It’s not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used. We don’t write the news that people read on the platform.

Hopkins shared the following examples from Facebook’s guidelines, complete with links to the documents obtained by The Guardian, where applicable:

  • Remarks such as, “Someone shoot Trump,” should be deleted, because as a head of state, he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat,” or, “fuck off and die,” because they are not regarded as credible threats.

  • All “handmade” art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not.
  • Videos of abortions are allowed, as long as there is no nudity.
  • Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress.”
  • Anyone with more than 100,000 followers on a social media platform is designated as a public figure, which denies them the full protections given to private individuals.

On threats of violence, Hopkins reported that remarks such as, “Little girl needs to keep to herself before daddy breaks her face,” and, “I hope someone kills you,” are permitted because Facebook regards them as generic or not credible. He added that the social network acknowledges that “people use violent language to express frustration online” and feel “safe to do so,” with the company’s documents stating:

They feel that the issue won’t come back to them and they feel indifferent toward the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.

We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective, language such as “I’m going to kill you,” or, “Fuck off and die,” is not credible and is a violent expression of dislike and frustration.

People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.

And on what has become a recent hot-button issue for Facebook, videos of violent deaths, the documents obtained by The Guardian state:

Videos of violent deaths are disturbing but can help create awareness. For videos, we think minors need protection and adults need a choice. We mark as “disturbing” videos of the violent deaths of humans.

They go on to say that video content of this sort should be “hidden from minors,” but it should not be automatically deleted because it can “be valuable in creating awareness for self-harm afflictions and mental illness or war crimes and other important issues.”

All screenshots in this post are courtesy of Nick Hopkins of The Guardian, and the complete cache of Facebook documents obtained by the newspaper is available here.

Image of binders courtesy of tumsasedgars/iStock.