Twitter Users Can Finally Fight Trolls With Tools to Mute Keywords, Phrases and Conversations

The latest battle in the war on online abuse

A Donald Trump supporter shouts at media during a town hall in December in Aiken, South Carolina. GIF: Dianna McDougall; Sources: Getty Images, Shutterstock

For years, Twitter has faced criticism for failing to manage online abuse in a way that honors free speech while still protecting its users from hate speech and bullying. Now, it's finally taking a step further in the fight against digital trolls.

Today, the company says it's rolling out a way for users not just to block accounts, but also to "mute" keywords, phrases and entire conversations at the notification level. In a blog post, Twitter said the feature will be rolled out in the "coming days" to fight the "growing trend" of users taking advantage of its open platform.

And while the "mute" feature isn't entirely new, the move signals that Twitter may finally be taking concerns more seriously, coming less than a week after an election cycle in which journalists, celebrities and everyday people were harassed to the point of leaving the platform altogether. (This summer, actress and comedian Leslie Jones quit Twitter after being attacked with racist and sexist remarks.)

"The amount of abuse, bullying, and harassment we've seen across the Internet has risen sharply over the past few years," according to the blog post. "These behaviors inhibit people from participating on Twitter, or anywhere. Abusive conduct removes the chance to see and share all perspectives around an issue, which we believe is critical to moving us all forward. In the worst cases, this type of conduct threatens human dignity, which we should all stand together to protect."

Twitter has had a "hateful conduct policy" for a while, which prohibits certain types of speech targeted at people based on characteristics like race, ethnicity, sexual orientation, religion or age. However, it says the real-time nature of the platform has made it difficult to keep up with—and cut down on—abusive conduct. To hasten the process, it's giving users a more direct way to report abuse when it happens to them or someone else. With the updates, Twitter's reporting flow can now specifically flag hateful conduct targeting race, religion, gender or orientation.

This week, actress Emmy Rossum said she's received anti-Semitic threats from supporters of Republican president-elect Donald Trump. (The user who sent them is currently suspended.)

But some users have said in the past that reporting abuse doesn't accomplish anything, and while some offenders have been banned, reporting hasn't always been a sure way to stop abuse from happening. Alongside the front-end reporting changes, Twitter said it has improved its internal tools to better enforce policies when users violate them. Support teams also have been "retrained" with sessions focused on cultural and historical context for hateful conduct.

"We don't expect these announcements to suddenly remove abusive conduct from Twitter," according to the blog post. "No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn."

@martyswant Marty Swant is a former technology staff writer for Adweek.