Twitter vice president for public policy Colin Crowell spent Thursday on Capitol Hill, meeting with staff from the Senate Select Committee on Intelligence and House Permanent Select Committee on Intelligence to discuss how the social network may have been misused to tamper with the 2016 U.S. presidential elections.
Twitter Public Policy said in a blog post that Facebook shared some 450 accounts that attempted to influence the elections, and of those, 22 had corresponding Twitter accounts.
Twitter had already suspended some of those 22 accounts and immediately suspended the rest, along with 179 related or linked accounts that faced the same consequences.
Twitter also looked into Russia Today and its @RT_com, @RT_America and @ActualidadRT accounts, finding that RT spent $274,100 on Twitter ads in the U.S. in 2016, promoting 1,823 tweets that primarily featured news stories.
The social network said that during the 2016 election period, it removed tweets “that were attempting to suppress or otherwise interfere with the exercise of voting rights, including the right to have a vote counted, by circulating intentionally misleading information,” and it also tweeted reminders that voting could not be conducted by text or tweet, taking action on “thousands of tweets and accounts.”
The Twitter Public Policy blog post detailed several other Twitter initiatives related to the elections.
Twitter said it already has policies and review mechanisms for campaign ads, but it would “welcome the opportunity to work with the FEC [Federal Election Commission] and leaders in Congress to review and strengthen guidelines for political advertising on social media.”
The social network also said its automated systems detect more than 3.2 million suspicious accounts every week, double the rate it was detecting at this time last year, saying in the blog post, “Russia and other post-Soviet states have been a primary source of automated and spammy content on Twitter for many years. As our detection of automated accounts and content has improved, we’re better able to catch malicious accounts when they log into Twitter or first start to create spam.”
Twitter went into more detail on how it thwarts bots and other malicious accounts, writing in its blog post, “The most effective way to fight suspicious bots is stopping them before they start. To do this, we’ve built systems to identify suspicious attempts to log in to Twitter, including signs that a login may be automated or scripted. These techniques now help us catch about 450,000 suspicious logins per day. Importantly, much of this defensive work is done through machine learning and automated processes on our back end, and we have been able to significantly improve our automatic spam and bot-detection tools, resulting in a 64 percent year-over-year increase in suspicious logins we’re able to detect.”
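Twitter doesn’t disclose how its login-screening systems work, but one signal the post mentions—that a login “may be automated or scripted”—can be illustrated with a toy heuristic. The sketch below (an assumption for illustration, not Twitter’s actual detection logic) flags a sequence of login attempts whose timing is too regular to be human:

```python
# Illustrative sketch only: scripted clients tend to retry logins on a
# near-fixed schedule, while human attempts arrive at irregular intervals.
from statistics import pstdev

def looks_scripted(login_times, min_attempts=5, max_jitter=0.5):
    """Flag a sequence of login timestamps (in seconds, ascending) as
    likely automated when the gaps between attempts are nearly constant."""
    if len(login_times) < min_attempts:
        return False
    gaps = [b - a for a, b in zip(login_times, login_times[1:])]
    # Low standard deviation in the gaps = metronomic, script-like timing.
    return pstdev(gaps) < max_jitter

# A script logging in every 10 seconds vs. a human's ragged pattern.
bot = [0, 10, 20, 30, 40, 50]
human = [0, 7, 31, 95, 240, 600]
print(looks_scripted(bot), looks_scripted(human))  # True False
```

A production system would, as the post says, rely on machine learning over many such signals rather than a single threshold.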
Among further steps Twitter is taking on this front:
- Working to better identify the true origins of traffic—detecting the use of proxy servers, virtual private networks and other authentication methods—to block suspicious activity.
- Improving how it detects and clusters accounts created by a single entity or suspicious source, saying that doing so helped it block more than 5.7 million spammy follows from a single source last week.
- Building models to detect whether Twitter activity is automated by analyzing signals such as frequency and timing of tweets and engagements.
- Stopping the use of legitimate accounts to spread malicious content by building systems to detect irregular activity by those accounts.
- Previously announced measures, such as accounts being placed in read-only mode until they are authenticated, lowering the reach of suspicious or low-quality tweets, removing content and suspending accounts.
- Enforcing its developer policies to prevent third-party applications—including bots and automated apps—from creating spam and abuse. Twitter said more than 117,000 malicious apps have been suspended since June for abusing its application-programming interface, and those apps were collectively responsible for more than 1.5 billion low-quality tweets in 2017.
- Attempting to ensure that legitimate accounts are not falsely punished by giving them the opportunity to prove their legitimacy.
- Improving the methods it uses to require suspicious accounts to verify their phone numbers in order to regain access to Twitter.
- Taking active measures to ensure the integrity of its Trending Topics by excluding automated tweets and users from its calculations. The social network said it has been able to detect an average of 130,000 accounts per day since June that were attempting to manipulate its Trending Topics.
- Regularly engaging with national election commissions.
- Creating a dedicated media literacy program and sponsoring, contributing to and hosting Media Literacy Week in some of its key markets.
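One of the bullets above describes models that detect automation by analyzing the frequency and timing of tweets and engagements. As a rough illustration (a hypothetical heuristic, not Twitter’s classifier), a frequency signal might flag accounts that tweet at a rate no human sustains within a short window:

```python
# Toy frequency signal for illustration: count tweets inside a sliding
# time window and flag bursts beyond a plausible human posting rate.
from collections import deque

def exceeds_human_rate(timestamps, window=60.0, max_per_window=15):
    """Return True if any `window`-second span contains more than
    `max_per_window` tweets. Timestamps are seconds, ascending."""
    recent = deque()
    for t in timestamps:
        recent.append(t)
        # Drop tweets that have slid out of the current window.
        while recent and t - recent[0] > window:
            recent.popleft()
        if len(recent) > max_per_window:
            return True
    return False

burst = [i * 2.0 for i in range(30)]   # a tweet every 2 seconds
casual = [0, 45, 300, 900, 3600]       # a handful spread over an hour
print(exceeds_human_rate(burst), exceeds_human_rate(casual))  # True False
```

In practice such frequency features would be just one input among many—timing regularity, engagement patterns and account-creation signals—feeding the automated back-end systems the post describes.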
The social network concluded in its blog post, “Over the coming weeks and months, we’ll be rolling out several changes to the actions we take when we detect spammy or suspicious activity, including introducing new and escalating enforcements for suspicious logins, tweets and engagements and shortening the amount of time suspicious accounts remain visible on Twitter while pending confirmation. These are not meant to be definitive solutions. We’ve been fighting against these issues for years, and as long as there are people trying to manipulate Twitter, we will be working hard to stop them.”