Starting next week, Facebook and its sister site Instagram will no longer allow white nationalist or white separatist content on their platforms.
The move, which Motherboard reported and Facebook subsequently confirmed, marks a major reversal for the social media platform, whose previous internal policies told content moderators to treat white nationalist and white separatist content differently from white supremacist content. Those policies had been the focus of intense criticism from civil rights and advocacy groups.
In a company blog post, Facebook said it previously allowed white nationalist and white separatist content “because we were thinking about broader concepts of nationalism and separatism.” The company changed its tune after consulting with civil rights groups and determining that white nationalism and separatism “cannot be meaningfully separated from white supremacy” and from other hate groups.
“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism,” Facebook said in its blog post announcing the change.
The company will initially rely on user reports to flag and identify violating content “as this isn’t something that our technology is aimed at identifying just yet,” a Facebook spokesperson told Adweek in an email. The plan is to enforce the new policies through a combination of machine-learning tools, artificial intelligence and human review.
Implicit statements about white nationalism and white separatism aren’t covered under the policy update, but Facebook may make additional changes and updates to its policies in an effort to account for coded language and veiled expressions of hate, the spokesperson said.
Additionally, Facebook users who try to search for terms associated with white supremacy will be automatically redirected via a pop-up to the site Life After Hate, a nonprofit organization that describes itself as being “committed to helping people leave the violent far-right.”
The updated guidelines were announced internally on Tuesday during a policy development meeting but have been in the works for months, a Facebook spokesperson said. The spokesperson said Facebook was not sharing the names of the people or groups “because we want to maintain confidentiality and the ability to continue to have candid conversations with external groups and people.”
How the rules will be enforced, and how effective they will prove, remains to be seen once they go into effect. The company acknowledged Tuesday that it needs to keep improving its ability to identify and remove hate speech from the platform.
The move comes after a white supremacist used Facebook to livestream his deadly terrorist attack in New Zealand, and amid broader conversations about Facebook’s responsibility as a platform for curbing misinformation and extremist content.
Kristen Clarke, the president and executive director of the nonprofit advocacy group Lawyers’ Committee for Civil Rights Under Law, which advocated for the new policy, said the change was a step in the right direction.
“While Facebook’s new policies are one step forward in the fight against white supremacist movements, much work remains to be done,” Clarke said in a statement. “Putting in place the correct policy is a start, but Facebook also needs to enforce those policies consistently, provide meaningful transparency around any AI techniques used to address this problem, and adequately retrain its personnel. Without proper implementation, policies will prove to be just empty words, and white supremacy will continue to proliferate across its platform.”