Why 2020 Was the Twitter Election

It wasn't because of what happened on Twitter; it was because of what Twitter did

It's Twitter's election now. Trent Joaquin/Getty Images

Joe Biden may have won, but it was Twitter’s election. 

The social media company, which for many years let President Donald Trump post uninterrupted, has done an about-face this year. Amid the election aftermath, it intervened on the president’s tweets about voter fraud, mail-in ballots and baseless claims that he won the election.

Now, the outgoing president is having a hard time posting on his favorite website. Trump’s infamous Twitter account is a shadow of its former self, awash with labels, warnings and interstitials, thanks to Twitter’s recent practice of labeling inaccurate tweets and demoting unproven claims that violate its content policies. (Since election night, 28 of Trump’s tweets—plus a number of retweets—have been either restricted or labeled under Twitter’s policies.)

“[The decisions to restrict tweets are] easy in that there is academic, journalistic and public consensus on the lack of voter fraud, as well as how our elections work,” Shannon McGregor, assistant professor and senior researcher at the University of North Carolina, Chapel Hill, told Adweek. “Compared to more ‘partisan’ issues, these are relatively easier calls to make.”

Kneecapping the engagement on Trump’s posts is key to Twitter’s strategy. It’s also what differentiates Twitter from other platforms like Facebook and YouTube, according to industry onlookers and academics.

Twitter, like other platforms, uses a combination of tech and human verification to moderate content. While Facebook and YouTube affixed contextual labels to the president’s untruthful posts, so users could seek additional information from trusted sources, the companies did not reduce the posts’ algorithmic spread. 

“For social media, the real power comes from the spread of these organic messages,” said Cuihua (Cindy) Shen, an associate professor of communication at the University of California, Davis. “So, in that regard, I think Twitter is doing a much better job than Facebook to create friction, to prevent that organic spread of misinformation.”

Facebook did not immediately return a request for clarification about whether it throttles algorithmic spread for labeled posts. YouTube spokesperson Ivy Choi said it regularly reduces the spread of “borderline content” and regularly removes misleading content about how to vote, though she did not say if the president’s account has been affected this week.

Twitter has been on this path for some time.

Earlier this year, the platform adopted and started to enforce new policies that were incompatible with the American president’s online antics. The issue came to a head in the early summer, at the peak of the George Floyd protests, when Twitter and Facebook took diverging views about whether the president’s posts constituted real-life threats of violence against protesters. The fallout from Facebook’s laissez-faire approach led to a massive advertiser boycott and pushed the platform to adopt approaches that more closely resembled Twitter’s. (Meanwhile, Twitter’s decision-making and influence were part of the reason Adweek named CEO Jack Dorsey our Digital Executive of the Year.)

After a summer of labeling and restricting Trump’s tweets that glorified violence and spread Covid-19 misinformation, among other violations of its rules, Twitter sharpened its blade ahead of Election Day. The company expanded its civic integrity policy to cover false claims about election integrity and premature claims of victory.

Despite criticism that the platforms did not throttle the spread of misinformation, the contextual labels that social media companies employ may still help.

“There is some academic evidence in the field that suggests people are responding to the kinds of fact checks or the kinds of interstitials that [Facebook] put up,” said Emily K. Vraga, associate professor at the University of Minnesota. Vraga said that showing factual information from trusted sources after a user sees misinformation does “reduce their misperception [and] lead them closer to the truth.”


Scott Nover (@ScottNover, scott.nover@adweek.com) is a platforms reporter at Adweek, covering social media companies and their influence.