Facebook Releases White Paper on the Next Steps in Moderating Online Content

Don’t look for new ideas: Many of the key steps are already in place at the social network


Facebook released a white paper Monday on ideas for regulating online content in the future, but most of the concepts contained within are policies or initiatives that are already in place or in the works at the social network.

Vice president of content policy Monika Bickert introduced the white paper in a Newsroom post Monday, writing, “Over the past decade, the internet has improved economies, reunited families, raised money for charity and helped bring about political change. However, the internet has also made it easier to share harmful content like hate speech and terrorist propaganda. Governments, academics and others are debating how to hold internet platforms accountable, particularly in their efforts to keep people safe and protect fundamental rights like freedom of expression.”

Bickert said the white paper poses four questions that are key in the debate over how to regulate online content.

How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? Facebook suggested systems including user-friendly channels for reporting content and external oversight of policies or enforcement decisions.

Any Facebook user can already report any piece of content and explain why they are reporting it.

And the social network is well into the process of establishing a global independent advisory board for content.

Bickert also suggested “periodic public reporting of enforcement data,” enabling governments and individuals to get a clear picture of social platforms’ efforts.

Facebook already publishes twice-yearly Community Standards Enforcement Reports.

How can regulations enhance the accountability of internet platforms? The social network suggested requirements for companies such as publishing their content standards, consulting with stakeholders prior to significant changes and creating appeal channels for decisions on removing or not removing content.

None of these would place any additional burden on Facebook, which already publishes its community standards and indicates when they are updated, already consults with a Safety Advisory Board and is putting the new global independent advisory board into place to handle appeals.

Should regulation require internet companies to meet certain performance targets? Bickert suggested incentivizing companies to meet specific targets such as keeping the prevalence of violating content below an agreed-upon threshold.

Facebook already does this, as well: the social network said in its most recent Community Standards Enforcement Report that its rate of proactively removing hate speech was up to 80%, and that its rate of detecting and removing content associated with al-Qaida, ISIS (Islamic State) and their affiliates remained above 99%, adding that the figure for all terrorist organizations was 98.5%.

Should regulation define which “harmful content” should be prohibited on the internet? Bickert wrote in the white paper that governments should create rules to address the complexity arising from the differences between laws restricting speech and internet content moderation, and that those rules should recognize user preferences, the variations among internet services and the ability to enforce them at scale, allowing for flexibility across language, trends and context.

Bickert discussed four primary challenges in the white paper, introducing them as follows: “Private internet platforms are facing increasing questions about how accountable and responsible they are for the decisions they make. They hear from users, who want the companies to reduce abuse but not infringe upon freedom of expression. They hear from governments, who want companies to remove not only illegal content, but also legal content that may contribute to harm, but make sure that they are not biased in their adoption or application of rules. Faced with concerns about bullying or child sexual exploitation or terrorist recruitment, for instance, governments may find themselves doubting the effectiveness of a company’s efforts to combat such abuse or, worse, being unable to satisfactorily determine what those efforts are.”
