WAM Analyzes Twitter Abuse Reports for Patterns, Problems

Women, Action & the Media analyzed Twitter abuse reports and discovered several complications that make validating reports difficult.

Twitter has a problem: Much like other popular social networks, it harbors a seedy subculture that thrives on trolling and abusing other users. Efforts to address this problem have had unintended consequences, and Twitter CEO Dick Costolo openly admitted that the social network is not great at handling abuse.

In November, Twitter granted a number of organizations “authorized reporter status.” These organizations were to become mediators, collecting abuse reports from Twitter users, evaluating the reports and escalating those in need of special attention.

Among the authorized reporters was nonprofit organization Women, Action & the Media, which conducted a study of the abuse reports it received. During the period of Nov. 6 through 26, WAM analyzed 811 reports of harassment on Twitter, and it recently published its findings in a public report.

One of the problems with reporting Twitter abuse, according to the report, is that users are required to submit evidence in the form of a URL. This requirement makes sense until abusers employ the “tweet and delete” tactic, at which point the URL goes dead and a screenshot becomes the primary means of capturing evidence.

The report acknowledges that screenshots could be manipulated. It also notes that deleting harassing messages once they’ve been seen is a common tactic across platforms. The whole point is to remove the evidence, including the URL. This makes collecting evidence a challenge, to say the least.

Furthermore, according to WAM:

[E]ven if a URL is reported in time, if the associated tweet is deleted before being seen by reviewers, the URL may not continue to function internally — that is, the evidence may be inaccessible even to Twitter reviewers.

With evidence so easily destroyed, allowing screenshots as a form of evidence could be an essential tool for catching abusers.

WAM also noticed a pattern of false flagging and report trolling. These represent classic cases in which the abuse reporting tool is being used to distract those evaluating the reports from the real cases of abuse. Reports from bystanders and delegates also complicated the reporting and verification process, particularly when those reporting abuse failed to respond to requests for additional information.

Twitter isn’t the only network dealing with a harassment problem. Indeed, 17 percent of the reports indicated that harassment was taking place on multiple platforms. Among those reporting abuse across platforms, 31 percent mentioned Facebook specifically.

This report doesn’t provide much in the way of solutions, but rather offers empirical data to further support the call to address a widespread problem of harassment on the Internet. While Instagram hopes to deal with the problem by appealing to its users’ humanity, Reddit has gone another route, defining harassment and how users can report it. Still, the culture of harassment on the Internet won’t be easily changed or eliminated.

Unlike publications such as Reuters and Popular Science — both of which turned off their comments sections in recent years — networks like Twitter and Reddit depend on contributions from the community in order to function. The question, then, is what is a social media network to do?

Readers: Have you ever experienced harassment on Twitter or other social networks?