Twitter Is Cracking Down on Trolls and Offensive Tweets With These New Tools

Banned users won't be able to create new accounts

Twitter will filter offensive content.

Twitter is rolling out several safety updates that the company says will help protect users from online trolls and abusive content.

Today, the social platform announced a three-pronged approach to combating abusive and sensitive content: introducing a safer search option, collapsing abusive and low-quality tweets, and preventing repeat offenders from rejoining the platform. The company has been racing to get its troll problem under control after drawing widespread criticism as regular users and celebrities alike have been attacked and threatened on the platform.

“Making Twitter a safer place is our primary focus,” Twitter VP of engineering Ed Ho wrote in a blog post. “We stand for freedom of expression and people being able to see all sides of any topic. That’s put in jeopardy when abuse and harassment stifle and silence those voices. We won’t tolerate it and we’re launching new efforts to stop it.”

While Twitter has been working to improve its policies and processes for reporting abuse, it hasn’t fully managed to keep suspended or banned users from simply creating new accounts. With today’s updates, Twitter said it’s taking steps to identify the people behind banned accounts so they can’t open another one. The company can use a banned user’s account history, login history, device history and more to determine whether that person is returning to the platform under multiple accounts.

The company is also introducing a “safe search” tool, which removes tweets from blocked or muted accounts from search results, shielding users from offensive content while they browse. Along with cleaning up search, Twitter is identifying and collapsing tweets that are “potentially abusive and low-quality.” According to Ho, the feature will arrive within weeks, with the aim of making conversations more relevant and easier to follow.

“In the days and weeks ahead, we will continue to roll out product changes—some changes will be visible and some less so—and will update you on progress every step of the way,” Ho wrote. “With every change, we’ll learn, iterate, and continue to move at this speed until we’ve made a significant impact that people can feel.”

Last week, Ho hinted at some of the upcoming features in a series of tweets. (According to VentureBeat, a Twitter spokesperson said the company is taking improving safety with “a sense of urgency.”)

Hateful and threatening comments have plagued Twitter for years, and in some cases the problem has been severe enough to prompt prominent users, such as Saturday Night Live star Leslie Jones, to temporarily leave the platform. One report even cited Twitter’s abuse problem as a reason Disney backed out of a possible acquisition of the company last year.