Twitter has an estimated 20,000 automated accounts, known as bots, designed to sway public opinion.
The project’s principal investigator Alessandro Flammini said in a news release that the researchers “found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”
The researchers believe socialbots threaten democracy by waging deceptive campaigns and spreading misinformation, with the potential to facilitate cyber-crime, hinder public policy and cause panic during emergencies.
The new tool analyzes the “structure of social and information diffusion networks along with linguistic cues, temporal patterns and sentiment data mined from content spreading through social media,” said the release. BotOrNot can determine whether an account is authentic about 95 percent of the time.
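BotOrNot's internal feature set and model are not spelled out in the release, but the general approach of scoring accounts on temporal and linguistic signals can be sketched. The features, thresholds, and weights below are illustrative assumptions, not the tool's actual implementation:

```python
from statistics import mean, stdev

def extract_features(tweets):
    """Compute toy temporal and linguistic features from a list of
    (timestamp_seconds, text) pairs. Feature names are illustrative,
    not BotOrNot's actual feature set."""
    times = sorted(t for t, _ in tweets)
    gaps = [b - a for a, b in zip(times, times[1:])]
    texts = [text for _, text in tweets]
    return {
        # Bots often post at very regular intervals
        "gap_stddev": stdev(gaps) if len(gaps) > 1 else 0.0,
        "mean_gap": mean(gaps) if gaps else 0.0,
        # Spam bots tend to pack tweets with hashtags and links
        "hashtag_ratio": mean(t.count("#") for t in texts),
        "link_ratio": mean(t.count("http") for t in texts),
    }

def bot_score(features):
    """Combine features into a rough 0-1 score; weights are arbitrary."""
    score = 0.0
    if features["gap_stddev"] < 5.0:      # suspiciously regular timing
        score += 0.5
    if features["hashtag_ratio"] > 2.0:   # hashtag-stuffed content
        score += 0.25
    if features["link_ratio"] > 0.5:      # mostly links
        score += 0.25
    return score

# A metronomically regular, hashtag-heavy account looks bot-like
botlike = [(i * 60, "#buy #now #deal http://x.example") for i in range(10)]
print(bot_score(extract_features(botlike)))  # 1.0
```

A real classifier would learn weights from labeled accounts and fold in the network-structure and sentiment signals the release describes; this sketch only shows why regular timing and repetitive content make an account easy to flag.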
New work by researchers in Brazil points to how easy it is for socialbots to infiltrate Twitter. The researchers created around 120 socialbots and let them loose on the social network over a 30-day period. Twitter was able to detect only 31 percent of the socialbots.
The Twitter bot accounts posted synthetic tweets (composed by stringing together common words used around a certain topic), re-posted others' tweets and followed three different groups of humans: randomly selected users, topic-related tweeters and a socially connected group.
Generally speaking, the more active bots posting synthetic tweets achieved the most success, suggesting that Twitter users are unable to distinguish between posts generated by humans and by bots.
“This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” the researchers told MIT Technology Review.
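The generation strategy the researchers describe, stringing together common topic words, really is this simple. A minimal sketch, with a hypothetical vocabulary standing in for the word lists the study would have mined from real tweets:

```python
import random

# Hypothetical topic vocabulary; the study mined common words
# from real tweets on a chosen topic instead.
TOPIC_WORDS = ["data", "social", "network", "research", "twitter",
               "bots", "study", "online", "people", "new"]

def synthetic_tweet(rng, min_words=5, max_words=12):
    """String together randomly chosen topic words into a tweet-like
    line, mimicking the simple generation strategy described above."""
    n = rng.randint(min_words, max_words)
    return " ".join(rng.choice(TOPIC_WORDS) for _ in range(n))

rng = random.Random(42)
print(synthetic_tweet(rng))
```

The output is grammatically incoherent word salad, which is exactly the researchers' point: against the baseline of informal, ungrammatical tweets, even this passes for human.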
The bot accounts received a total of 4,999 follows from 1,952 different users and more than 20 percent of them gained over 100 followers (more followers than 46 percent of Twitter users), indicating that socialbots can infiltrate social groups.
The group of randomly selected software developers generated the most followers, while the socially connected group of developers produced the fewest.
The female bots were more effective at generating followers among the group of socially-connected software developers. The researchers said “this suggests that the gender of the socialbots can make a difference if the target users are gender-biased.”
The socialbots also achieved Klout scores as high as or higher than those of “several well-known academicians and social network researchers.”
The Review calls the research a “wake-up call for Twitter.”
The IU project began in 2012 with more than $2 million in funding from the U.S. Department of Defense, which recognized that increased information flow via social networks and mobile technology represents a threat to national security.