Facebook Begins Testing Advanced Crowd-Sourced Content Moderation


By Eric Eldon

In an effort to better remove inappropriate content, Facebook is testing out what it calls the Facebook Community Council, according to new Council member and Boing Boing guest poster Andrea James. As the name suggests, the app gives users the ability to evaluate content for various types of offensiveness, or as James quotes the group’s motto: “To harness the power and intelligence of Facebook users to support us in keeping Facebook a trusted and vibrant community.”

We asked Facebook for more details. The company isn’t providing screenshots, but a Facebook representative described what is essentially a crowd-sourced version of the admin tools for moderating content reported by users:

The Facebook Community Council is a way for users to tell us whether reported content violates our policies. We’ve found that people aren’t shy about reporting content they come across that looks suspicious, and this is just another way of leveraging the Facebook community to help maintain the site’s trusted environment. It’s still in an experimental stage, and we’re currently testing the application with only a very small number of users.

The “Council” tools ask members to choose exactly one of eight options for each piece of flagged content they are shown: Spam, Acceptable, Skip, Not English, Nudity, Drugs, Attacking (“direct attacks against public figures”) and Violence.

Facebook previously asked some users to install a crowd-sourcing application called Translations, which let them provide their own translations for different parts of the site. With a little help from professional translators, that app has made Facebook available in more than 70 languages. In the Council app, users’ verdicts are averaged out over time in order to surface items that are truly objectionable, rather than objectionable in the minds of just a few people.
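Facebook hasn’t described how this averaging actually works under the hood, but a minimal sketch of the general idea, using the eight labels listed above, might look something like the following Python. The vote-count minimum and agreement threshold are illustrative assumptions, not Facebook’s numbers:

```python
from collections import Counter, defaultdict

# The eight labels Council members can choose from, per the article.
LABELS = {"Spam", "Acceptable", "Skip", "Not English",
          "Nudity", "Drugs", "Attacking", "Violence"}

# Labels that aren't really a judgment about the content itself.
NON_JUDGMENTS = {"Skip", "Not English"}

def aggregate(votes, min_votes=5, threshold=0.6):
    """Average many reviewers' labels per item and flag items where
    a clear majority agrees on an objectionable category.

    votes: iterable of (item_id, label) pairs. min_votes and
    threshold are made-up values for illustration.
    """
    by_item = defaultdict(Counter)
    for item_id, label in votes:
        if label in LABELS and label not in NON_JUDGMENTS:
            by_item[item_id][label] += 1

    flagged = {}
    for item_id, counts in by_item.items():
        total = sum(counts.values())
        if total < min_votes:
            continue  # too few opinions yet to trust the average
        label, n = counts.most_common(1)[0]
        # Only surface items most reviewers agree are objectionable,
        # filtering out content that bothers just a few people.
        if label != "Acceptable" and n / total >= threshold:
            flagged[item_id] = (label, n / total)
    return flagged

# Example: five of six judging reviewers call item 42 spam,
# so it crosses the threshold and gets flagged.
print(aggregate([(42, "Spam"), (42, "Spam"), (42, "Spam"),
                 (42, "Acceptable"), (42, "Skip"),
                 (42, "Spam"), (42, "Spam")]))
```

Requiring both a minimum number of votes and a majority threshold is what lets this kind of scheme distinguish content that is broadly objectionable from content that merely offends one or two reviewers.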

The new app won’t just give Facebook a better look at what sort of offensive content gets published on its service; it already has technology and a big customer service team working on that problem. The company will also get better information about which sorts of content people do and don’t find offensive, and how those perceptions change over time.

[Top image via Boing Boing]