Twitter announced several updates in its efforts to thwart “inauthentic accounts, spam and malicious automation,” highlighted by requiring email addresses or phone numbers when new accounts are established.
Yoel Roth, who works on application-programming interface policy and public trust for Twitter, and vice president of trust and safety Del Harvey wrote in a blog post that the social network will work with its Trust and Safety Council and other non-governmental organizations to ensure that people in high-risk environments where anonymity is important are not affected.
They also said Twitter’s efforts in developing machine learning tools to identify and take action on “networks of spammy or automated accounts” are paying off, adding that more than 9.9 million potentially spammy or automated accounts were identified and challenged in May, up from 6.4 million in December 2017 and 3.2 million in September 2017.
Meanwhile, the average number of spam reports coming through Twitter’s reporting flow is going down, at some 17,000 per day in May compared with 25,000 in March. Roth and Harvey also said Twitter has seen a 10 percent decrease in spam reports via search, suggesting that users are encountering less spam on its network.
The social network also reported on cleaning up malicious activity via its APIs, saying that more than 142,000 apps were suspended in the first quarter for violating its rules, and those apps were responsible for more than 130 million “low-quality, spammy tweets.”
Roth and Harvey said Twitter has kept up that pace, removing an average of more than 49,000 malicious apps per month in April and May. They added that more than one-half of the apps suspended in the first quarter were suspended within one week of registering with Twitter, “many within hours.”
Twitter has also begun updating account metrics, including followers and likes or retweets on individual tweets, in real time: When suspicious accounts are placed in a read-only state (meaning they cannot tweet or engage with other users), they are removed from follower figures and engagement counts until they respond to a challenge from Twitter, such as confirming a phone number associated with the account.
Accounts in the read-only state will display a warning, and other accounts will not be able to follow them; their follower and engagement figures are restored once they pass Twitter’s challenge.
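The metric adjustment described above amounts to filtering challenged accounts out of the visible counts until they pass their challenge. A minimal sketch, using a hypothetical Account record with a challenged flag (the names are illustrative, not Twitter’s actual data model):

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical account record; 'challenged' marks the read-only state."""
    handle: str
    challenged: bool = False  # placed in read-only state pending a challenge


def displayed_follower_count(followers: list[Account]) -> int:
    """Exclude challenged (read-only) accounts from the visible follower
    figure; their follows count again once they pass the challenge."""
    return sum(1 for f in followers if not f.challenged)
```

Flipping the flag back after a passed challenge restores the account to the count, which matches the restore-after-challenge behavior Twitter describes.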
The social network is conducting an audit of existing accounts with the aim of ensuring that every account created “has passed some simple, automatic security checks designed to prevent automated signups,” and Roth and Harvey said the new protections Twitter has developed have helped it to prevent more than 50,000 spammy signups per day.
They explained, “As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow. These accounts are primarily following spammers who, in many cases, appear to have automatically or bulk-followed verified or other high-profile accounts suggested to new accounts during our signup flow.”
In cases where Twitter sees suspicious account activity, such as high-volume tweeting with the same hashtag or repeatedly @mentioning the same handle without replies from that account, the social network is automating some of its processes, often requiring the owners of those accounts to complete a reCAPTCHA or reset their passwords.
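A volume heuristic of the kind described might look something like the following rough sketch. The threshold and logic here are made up for illustration; Twitter has not published its actual signals.

```python
from collections import Counter

# Hypothetical threshold -- illustrative only, not Twitter's actual rule.
MAX_SAME_HASHTAG_PER_HOUR = 50


def looks_automated(tweets_last_hour: list[str]) -> bool:
    """Flag an account that repeats a single hashtag at high volume
    within one hour, a pattern typical of automated posting."""
    tag_counts = Counter(
        word.lower()
        for tweet in tweets_last_hour
        for word in tweet.split()
        if word.startswith("#")
    )
    return any(n > MAX_SAME_HASHTAG_PER_HOUR for n in tag_counts.values())
```

An account tripping a check like this would then be routed to a reCAPTCHA or password-reset challenge rather than suspended outright, consistent with the process the article describes.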
Roth and Harvey suggested that users enable two-factor authentication, whereby, in addition to passwords, users must enter codes sent to their mobile phones in order to log in. For those who want to take it one step further, FIDO Universal 2nd Factor physical security keys can be used for login verification.
They also recommended that Twitter users regularly review third-party apps that have access to their account information via their account settings on twitter.com. Those who believe they were erroneously flagged by any of Twitter’s automated spam detection systems can file an appeal with Twitter.
In a somewhat related announcement, with Mexico’s general election set for July 1, Facebook head of cybersecurity policy Nathaniel Gleicher said in a Newsroom post that more than 10,000 fake pages, groups and accounts have recently been removed in Mexico and Latin America due to violations of its community standards, adding, “The content we’ve found broke our policies on coordinated harm and inauthentic behavior, as well as attacks based on race, gender or sexual orientation.”