Twitter Reported Progress in Acting Against Abusive Content Before People Report It

The social network’s rules will be updated in the next few weeks

Three times more abusive accounts were suspended from January through March than during the year-ago period. bombuscreative/iStock

Twitter provided an update on its efforts to maintain a safe environment without relying solely on its users reporting abusive content.

Vice president of services Donald Hicks and director of product management, health, David Gasca revealed in a blog post that 38 percent of the abusive content Twitter has taken enforcement actions against since last March—including for abusive behavior, hateful conduct, encouraging self-harm and threats—was surfaced proactively for review by the social network’s artificial intelligence tools, rather than discovered through user reports.

They wrote, “People who don’t feel safe on Twitter shouldn’t be burdened to report abuse to us. Previously, we only reviewed potentially abusive tweets if they were reported to us. We know that’s not acceptable, so earlier this year, we made it a priority to take a proactive approach to abuse, in addition to relying on people’s reports.”

Abuse reports stemming from interactions with accounts people do not follow are down 16 percent, according to Twitter.

From January through March, there were 100,000 suspensions of new accounts that were created after previous accounts were suspended, up 45 percent from the same time period in 2018, Hicks and Gasca wrote.

They also touted 60 percent faster responses to appeal requests following Twitter’s introduction of a new in-application appeal process earlier this month.

Hicks and Gasca said three times more abusive accounts were suspended following reports from January through March than during the year-ago period, and 2.5 times more private information was removed year-over-year, thanks to last month’s rollout of a way for users to protect themselves from doxxing.

They wrote, “The same technology we use to track spam, platform manipulation and other rule violations is helping us flag abusive tweets to our team for review. With our focus on reviewing this type of content, we’ve also expanded our teams in key areas and geographies so that we can stay ahead and work quickly to keep people safe. Reports give us valuable context and a strong signal that we should review content, but we’ve needed to do more and, though still early on, this work is showing promise.”

Hicks and Gasca also previewed more improvements that are on the way.

Twitter promised to continue improving its AI with the aim of reviewing content more quickly and before it is reported, especially in cases of compromised private information, threats and other types of abuse.

The social network also plans to enable people who report content to share more specific information when doing so, speeding the reviewing and enforcement process.

Saying that context in tweets is important, Twitter will add more notices for clarity, such as if a tweet is in violation of its rules but is not deleted because its content is in the public interest.

The social network’s rules will be updated in the next few weeks to be “shorter, simpler and easier to understand.”

And in June, Twitter will expand its test of enabling users to hide replies to their tweets, giving them more control over their conversations on the platform.


David Cohen (david.cohen@adweek.com) is editor of Adweek's Social Pro Daily.