Facebook Responds to Scathing Letter From 200+ Content Moderators

They expressed concerns about being forced to return to offices during the pandemic and sought full employee status

A content moderator at Accenture’s office in Austin, Texas, generally earns $18 per hour. PeopleImages/iStock

Facebook responded to an open letter sent Wednesday by over 200 content moderators who are concerned about their working conditions during the pandemic.

The letter was addressed to CEO Mark Zuckerberg, chief operating officer Sheryl Sandberg and the CEOs of two companies that supply content moderators to the social network on a contract basis: Anne Heraty of CPL/Covalen and Julie Sweet of Accenture.

The moderators expressed concerns about being forced to return to the office during the pandemic and about the shortcomings of Facebook’s artificial intelligence systems, and they requested better pay, permanent employee status, and access to better healthcare and psychiatric care.

“We appreciate the valuable work content reviewers do, and we prioritize their health and safety,” a Facebook spokesperson said. “While we believe in having an open internal dialogue, these discussions need to be honest. The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic. All of them have access to health care and confidential wellbeing resources from their first day of employment, and Facebook has exceeded health guidance on keeping facilities safe for any in-office work.”

The moderators said in their letter that while those with doctor’s notes about personal Covid-19 risks have been excused from working in offices, those with relatives who are vulnerable have not, and they urged Facebook to maximize working from home.

Facebook said the majority of its global content reviewers are working from home and have been since the early days of the pandemic.

The moderators also pointed out that multiple Covid-19 cases have occurred in several offices.

Facebook detailed measures it has put in place, including operating at significantly reduced capacity to enable social distancing; room occupancy limits and direction on complying with those protocols; mandatory temperature checks before entry; mandatory use of face masks; deep cleaning on a daily basis, with all desks cleaned at the end of each shift and high-touch surfaces cleaned multiple times daily; wide availability of supplies including hand sanitizer, wipes and face masks; and improved air filters, more frequent filter changes and adjustments to building air pressure and venting.

The company also noted that these protocols match those of all global Facebook facilities, including those where full-time employees have returned to work.

The moderators also sought hazard pay for those working on high-risk material, such as child abuse, saying those people should be paid 1.5 times their usual wage, and they demanded an end to outsourcing, writing, “There is, if anything, more clamor than ever for aggressive content moderation at Facebook. This requires our work. Facebook should bring the content moderation workforce in house, giving us the same rights and benefits as full Facebook staff.”

They pointed out that a content moderator at Accenture’s office in Austin, Texas, generally earns $18 per hour.

The letter’s signatories said content moderators are offered 45 minutes per week with a wellness coach, noting that those coaches are generally not psychologists or psychiatrists and are contractually forbidden from diagnosing or treating them.

According to Facebook, content reviewers have access to healthcare starting on their first day of employment, and there is no weekly limit on wellbeing resources, with reviewers encouraged to use them as needed.

Facebook’s AI systems, which are intended to lighten the burden on content moderators, were blasted in the letter.

The moderators wrote, “Without informing the public, Facebook undertook a massive live experiment in heavily automated content moderation. Management told moderators that we should no longer see certain varieties of toxic content coming up in the review tool from which we work—such as graphic violence or child abuse, for example. The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter—and risky content, like self-harm, stayed up. The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.”


David Cohen (david.cohen@adweek.com) is editor of Adweek's Social Pro Daily.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}