Facebook Is Testing an Easier Way for Users to Help Identify Hate Speech

The tool briefly appeared this morning

Facebook has been trying to figure out ways to both manually identify and remove hate speech. Getty Images

Facebook seems to be testing a new tool that would allow users to manually identify whether a post includes hate speech.

For a brief time this morning, every post in a Facebook user’s News Feed displayed a question: “Does this post contain hate speech?” The user was then given the option to click “yes” or “no.” Clicking “yes” surfaced several options, including “hate speech,” “TP1” and “TP2,” labels that suggest the feature launched prematurely.

Asked about the feature, a Facebook spokesperson confirmed the test, which came the same morning as Facebook’s annual developer conference in California.

“This was an internal test we were working on to understand different types of speech, including speech we thought would not be hate,” the spokesperson said in an email. “A bug caused it to launch publicly. It’s been disabled.”

Facebook has been investing heavily in ways to manually identify and remove hate speech and other violent or offensive content. It is also building and deploying artificial intelligence to scale those efforts to match the volume of content being created on the platform around the world.

Facebook is also hiring more employees to focus on these efforts, with plans to add as many as 20,000 workers by the end of 2018 who will handle everything from offensive content to election integrity.

“One day more of our technology is going to need to focus on people and our relationships,” Facebook CEO Mark Zuckerberg said today during his opening remarks at F8.

@martyswant martin.swant@adweek.com Marty Swant is a former technology staff writer for Adweek.