TikTok Outlines 5-Pronged Approach to Eliminating Hate Speech on Its Platform

More than 380,000 videos in the U.S. have been removed in 2020


TikTok U.S. head of safety Eric Han shared five ways that the video-creation platform is working to reduce the spread of hate speech.

He wrote in a blog post, “As the head of safety, my team and I work to protect our users from the things that could interfere with their ability to express themselves safely and have a positive experience on the application. In what can feel like an increasingly divisive world, one of the areas we’re especially intent on improving is our policies and actions towards hateful content and behavior. Our goal is to eliminate hate on TikTok.”

The five areas addressed by Han were:

  • Evolving our hate speech policy: Han said TikTok defines hate speech as “content that intends to or does attack, threaten, incite violence against or dehumanize an individual or group of individuals on the basis of protected attributes like race, religion, gender, gender identity, national origin and more.” He added that the company works to proactively detect and remove this content before it reaches the community, carefully reviews content that is reported and consults with experts to help refine its policies.
  • Countering hateful speech, behavior and groups: Han said TikTok has a zero-tolerance stance on organized hate groups and those associated with them, such as accounts that spread or are linked to white supremacy or nationalism, male supremacy, anti-Semitism and other hate-based ideologies. Race-based harassment and the denial of violent tragedies, such as the Holocaust and slavery, are also removed. Since the beginning of the year, the platform has removed more than 380,000 videos in the U.S. for violations of its hate speech policies, banned over 1,300 accounts for hateful content or behavior and removed more than 64,000 hateful comments. And if someone searches for a hateful ideology or group, TikTok removes related content or refrains from displaying results, redirecting those searches to its community guidelines.
  • Increasing cultural awareness in our content moderation: TikTok periodically trains its enforcement teams to better detect evolving hateful behavior, symbols, terms and offensive stereotypes. Han wrote, “If a member of a disenfranchised group—such as the LGBTQ+, Latinx, Asian American and Pacific Islander, Black and Indigenous communities—uses a slur as a term of empowerment, we want our moderators to understand the context behind it and not mistakenly take the content down.”
  • Improving transparency with our community: TikTok recently released a feature that notifies users who react to or Duet with videos that were removed for violating its community guidelines, and Han said the platform is working to improve its appeals process and proactively educate users about its guidelines.
  • Investing in our teams and partnerships: The company added leaders with deep expertise in areas such as hateful and abusive behavior to its product and engineering teams to focus on enforcement-related efficiencies and transparency, and it continues to get feedback from experts, such as those on its Content Advisory Council.

Han concluded, “We recognize the perhaps insurmountable challenge to completely eliminate hate on TikTok, but that won’t stop us from trying. Every bit of progress we make gets us that much closer to a more welcoming community experience for people on TikTok and out in the world. These issues are complex and constantly changing not just for us, but for all internet companies. We are committed to getting it right for our community.”

David Cohen is editor of Adweek's Social Pro Daily. david.cohen@adweek.com