Facebook Updates Its Efforts vs. Terrorists, Violent Extremists and Hate

The social network refreshed the definition that guides its policies and enforcement

Over 26 million pieces of content related to global terrorist groups were removed over the past two years

Facebook provided an update on its Dangerous Individuals and Organizations policy, which is aimed at thwarting terrorists, violent extremist groups and hate organizations on the social network and Instagram.

The social network said in a Newsroom post that some of its measures pre-dated March’s terror attack in Christchurch, New Zealand, but that event and the resulting Christchurch Call to Action strongly influenced its policies and methods of enforcement.

Facebook wrote, “The attack demonstrated the misuse of technology to spread radical expressions of hate and highlighted where we needed to improve detection and enforcement against violent extremist content.”

Representatives of the social network met with world leaders in Paris in May to sign the New Zealand government’s Christchurch Call to Action.

That same month, Facebook applied a one-strike policy to a broader range of offenses involving its Facebook Live livestreaming feature, and it teamed up with Microsoft, Twitter, Google and Amazon on a nine-point industry plan to fight the spread of terrorist content.

The company revealed Tuesday that it updated the definition that guides its enforcement decisions against terrorist organizations, refining how it defines those groups in consultation with experts in counterterrorism, international humanitarian law, freedom of speech, human rights and law enforcement.

The social network wrote, “The updated definition still focuses on the behavior, not ideology, of groups. But while our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify.”

Facebook’s multi-disciplinary group of safety and counterterrorism experts—tasked with building product innovations and reviewing content—now totals 350 people, and its scope was expanded from strictly counterterrorism to “all people and organizations that proclaim or are engaged in violence leading to real-world harm.”

On the machine learning front, Facebook said its efforts over the past two years initially focused on global terrorist groups such as ISIS and al-Qaida, and that more than 26 million pieces of content related to those groups were removed during that period, with 99% of it proactively identified via machine learning and artificial intelligence and deleted before anyone reported it.

Facebook said it expanded the scope of its detection tools to a wider range of organizations, covering both terrorism and hate, adding that over 200 white supremacist organizations have been banned from its platform.

The social network wrote, “We’ll need to continue to iterate on our tactics because we know that bad actors will continue to change theirs, but we think these are important steps in improving our detection abilities. For example, the video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology. That’s why we’re working with government and law enforcement officials in the U.S. and U.K. to obtain camera footage from their firearms training programs, providing a valuable source of data to train our systems. With this initiative, we aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games.”

In March, Facebook began connecting users who searched for terms associated with white supremacy to resources aimed at helping people leave hate groups, directing them to nonprofit Life After Hate in the U.S.

The company said Tuesday that this initiative will expand to Australia and Indonesia, where it will partner with Moonshot CVE to measure the impact of its efforts and direct users searching for those terms to Exit Australia and ruangobrol.id, respectively.

Finally, on the topic of transparency, Facebook said it will share the fourth edition of its Community Standards Enforcement Report in November, which will include metrics on its efforts against all terrorist organizations for the first time.

The social network concluded, “We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts, and we are committed to advancing our work and sharing our progress.”
