Facebook added blackface and stereotypes about Jews controlling the world to its list of hate speech violations, vice president of content policy Monika Bickert said during a press call Tuesday to introduce the sixth edition of its Community Standards Enforcement Report.
Bickert said the social network consulted with over 60 experts including social psychologists, historians, folklorists and groups that represent a diverse range of religious, ethnic and racial communities, adding that those consultations “helped us write rules for the removal of more implicit hate speech, such as content depicting blackface or stereotypes about Jewish people controlling the world. This type of content has always gone against the spirit of our hate speech policies but it can be really difficult to take concepts—especially those that are commonly expressed in imagery—and define them in a way that allows our content reviewers based around the world to consistently and fairly identify violations.”
Vice president of integrity Guy Rosen said during the press call that more than 7 million pieces of misinformation about Covid-19 were removed from Facebook and Instagram during the second quarter, including posts pushing fake preventative measures or exaggerated cures that the Centers for Disease Control and Prevention and other health authorities deemed to be dangerous.
Rosen added that labels were placed on another 98 million pieces of content on Facebook for other forms of pandemic-related misinformation, as determined by the social network’s third-party fact-checking partners.
He also addressed the upcoming presidential election in the U.S., saying that from March through July, more than 110,000 pieces of content were removed for attempting to mislead people about voting or intimidate them out of doing so.
Rosen said, “We’re also building up our Elections Operations Center to continue our work with state election authorities so that we can quickly respond to and remove false claims about polling conditions in the 72 hours leading up to Election Day. Earlier today, we also introduced new ratings and updated labels for our fact-checking program. We’ve heard from fact-checkers that more clarity and distinction between ratings is important and better reflects what they’re seeing on the platform.”
As for the Community Standards Enforcement Report, Rosen credited technology improvements, including the addition of more languages to Facebook’s automation technology, for progress in the second quarter of 2020, the period covered by the report, compared with the first quarter.
He added that while Facebook’s content reviewers have been working from home since March due to the coronavirus pandemic, which affected reporting and actions taken in some of the categories analyzed across Facebook and Instagram, other categories saw improved results due to the addition of Arabic and Spanish to the social network’s automation technology, as well as improvements in how it processes English.
Rosen wrote in a Newsroom post, “We rely heavily on people to review suicide and self-injury and child exploitative content and help improve the technology that proactively finds and removes identical or near-identical content that violates these policies. With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories.”
He added that the updates to its technology led to an uptick in its proactive detection rate for hate speech on Facebook, to 95% in the second quarter from 89% in the first quarter, and a surge in pieces of content that were acted upon for this reason, to 22.5 million from 9.6 million.