Could Algorithms Be The Future of Troll Catching?

New research indicates that troll behavior is predictable. However, the researchers warn that the algorithms are imperfect and that moderation still requires human input.

Trolling has been a sticky problem for many social networks. Bans don't always work, real-name policies intended to bring politeness to comments have failed, and reporting tools are often underpowered or useless. A new study points to a possible algorithmic solution that could be applied to any comments section or network.

Researchers from Stanford and Cornell Universities studied the comment sections of three sites: CNN, Breitbart, and the gaming-focused news site IGN. The data set included more than 1.7 million users and almost 39 million posts, collected between March 2012 and August 2013.

The study identified users who were eventually banned ("future-banned users") after making at least five posts, then analyzed their grammar, spelling, and other factors, such as attempts to derail the conversation thread. These characteristics were then compared with those of average, never-banned users.

Future-banned users exhibited similar characteristics across the various sites. Their comments tended to have low readability scores, used fewer positive words than genuine submissions, and contained more profanity. Their posts also drew more replies and engagement, almost certainly because they were designed to inflame sentiment.
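The signals described above are all computable from raw comment text. As a rough illustration (not the study's actual pipeline), the sketch below derives a readability score, a positive-word ratio, and a profanity ratio from a single comment; the word lists are tiny placeholders, and real systems would use full sentiment and profanity lexicons.

```python
import re

# Placeholder lexicons for illustration only; real moderation systems
# use much larger sentiment and profanity word lists.
POSITIVE_WORDS = {"good", "great", "thanks", "agree", "love", "helpful"}
PROFANITY = {"damn", "hell"}

def comment_features(text):
    """Compute simple per-comment signals of the kind the study examined."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sentences = max(len(sentences), 1)
    chars = sum(len(w) for w in words)
    # Automated Readability Index: a standard readability formula
    # (higher means harder to read; very low scores suggest sloppy writing).
    ari = 4.71 * (chars / n_words) + 0.5 * (n_words / n_sentences) - 21.43
    return {
        "readability_ari": ari,
        "positive_ratio": sum(w in POSITIVE_WORDS for w in words) / n_words,
        "profanity_ratio": sum(w in PROFANITY for w in words) / n_words,
    }

features = comment_features("This is a great, helpful thread. Thanks everyone!")
```

Averaging these per-comment values over a user's first few posts gives the kind of per-user profile the researchers compared between future-banned and never-banned users.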

Future-banned users also see more of their posts deleted the longer they spend on a site, and the writing quality of their comments starts out worse than that of other users and declines over time.

While these are all strong indicators, and could very well form the basis of an algorithm to flag trolls, the system is imperfect: it flags one innocent user for every four trolls. That may sound acceptable, but in comment sections inhabited by millions of users, a 20 percent false-positive rate could alienate a large part of the user base. The report concludes that the best solution must involve human input at some point, whether from moderators or from community reporting tools.
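To make the trade-off concrete, here is a minimal sketch of how such signals might be combined into a score that routes accounts to human review rather than triggering automatic bans. The weights and threshold are invented for illustration; they are not fitted from the study's data.

```python
def troll_score(avg_positive_ratio, avg_profanity_ratio,
                deleted_fraction, avg_readability_ari):
    """Combine per-user signals into a score in [0, 1].

    Weights are illustrative placeholders, not values from the paper.
    Inputs are averages over a user's early posts.
    """
    return (
        0.35 * deleted_fraction                                  # posts removed by mods
        + 0.25 * min(avg_profanity_ratio * 10, 1.0)              # heavy profanity
        + 0.25 * (1.0 - min(avg_positive_ratio * 10, 1.0))       # few positive words
        + 0.15 * max(0.0, min((5.0 - avg_readability_ari) / 5.0, 1.0))  # poor writing
    )

def flag_for_review(score, threshold=0.6):
    # Flagged accounts go to a human moderator queue, never an auto-ban:
    # with roughly one false positive per four true positives, automatic
    # bans would punish too many innocent users.
    return score >= threshold
```

The key design choice, matching the report's conclusion, is that the algorithm only prioritizes accounts for moderators; the final decision stays human.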

Users, entrepreneurs, and social networks are all developing tools to fight back against trolls, some complex, others blunt. What's clear is that trolling harms communities large and small, and it must be dealt with in some capacity. Whether the solution is community-based or algorithmically derived, we can't continue to ignore the troll problem.

Read the full academic paper here (PDF).
