Across Platforms, Politicians Face Scrutiny About Everything But Their Speech

We picked apart the policies in place across Facebook, YouTube and Twitter

Posts from Kim Kardashian or Cardi B will always be more scrutinized than those by Donald Trump. Photo Illustration: Dianna McDougall; Sources: Getty Images

Though the biggest platforms have wrung their hands and rolled out measures to stanch the flow of political misinformation, it appears there’s one way to get around their censorship: paying for it.

In the past two weeks, lies have run rampant across the platforms. First, Donald Trump ran attack ads across Twitter and Facebook against Democratic candidate Joe Biden, slinging the oft-repeated conspiracy theory that Biden coerced Ukraine into firing a prosecutor targeting his son Hunter (he didn’t). Then, this past weekend, Elizabeth Warren took to the same platforms to claim that Mark Zuckerberg is rooting for Trump’s reelection (he isn’t).

Despite trafficking in obviously debunked mistruths, none of these ads was found in violation of the platforms’ policies. That leaves the question of why, especially when the companies do, in fact, have firm rules in place against misrepresentation.

Adweek delved into the labyrinthine policies of Facebook, YouTube and Twitter to find out.

Facebook

Although the Menlo Park giant has consistently stepped up the requirements needed to qualify as a political advertiser on its platform, clearly vetting the legitimacy of political media buyers, its policy language has shirked those standards. Last month, the company’s vp of global affairs, Nick Clegg, outlined in a blog post that politicians are “exempt” from the third-party fact-checkers the company put in place as a stopgap to stymie the spread of false news and viral misinformation—and in fact, they had been exempt for over a year.

“We don’t believe […] that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny,” Clegg wrote at the time, though he clarified that the exemption doesn’t carry over to ads. In the past, the company has made exemptions for politicians’ posts under the umbrella of “newsworthiness”—meaning it will allow content to stand if it believes “the public interest in seeing it outweighs the risk of harm”—even if the content violates the company’s community standards.

But according to Clegg, paid media must still abide by the rules the platform has in place, both the community standards and the platform’s ad policies, which plainly prohibit “ads that include claims debunked by third-party fact-checkers or, in certain circumstances, claims debunked by organizations with particular expertise,” according to a company page outlining the policies. Advertisers that repeatedly peddle falsehoods may have “restrictions” put on their ability to advertise on the platform.

“Politicians are still subject to Facebook’s advertising policies (the fact-checking eligibility piece notwithstanding),” a Facebook spokesperson told Adweek over email. “You will notice in the ad library that politicians’ ads that violated our ad policies will appear with a screen over it noting as much.”

Currently, the platform is allowing Democratic candidate Elizabeth Warren to run more than a dozen ads—with hundreds of thousands of impressions among them—headlined by the bogus claim that Facebook CEO Mark Zuckerberg is endorsing Trump for a second term. In response to the ad, Facebook tweeted from its PR Twitter account to say that “broadcast stations across the country have aired this ad nearly 1,000 times, as required by law.”

“There is a clear tradition in the U.S. of limiting censorship of candidates and politicians’ speech which Sen. Warren’s statements ignore,” Facebook’s spokesperson told Adweek, pointing toward FCC guidelines stating that broadcasters “have no power of censorship over the material broadcast by any such candidate.”

While the company might be more hands-off around the voting public, it’s anything but around consumers. Last month, Facebook debuted tighter guidelines around ads for cosmetic procedures, diets and weight-loss products, saying that ads containing “miraculous claims” or that promote “unrealistic or unlikely results” could be removed from the platform altogether.

Meanwhile, the company also has rules against “misleading claims” more generally: “claims of unrealistic results within specific timeframes,” for example, or “claims of cures for incurable diseases” or even “false or misleading claims about product attributes, quality or functionality.”

Ads like these are increasingly drawing the ire of the FTC, which has opened more than a dozen cases this year against health and fitness products, like supplements, that made misleading or untrue claims.

YouTube

Much like Facebook, YouTube’s parent company, Google, has a litany of rules surrounding what does and doesn’t constitute “misrepresentation” in advertising, and it focuses the majority of its scrutiny on consumer-facing products, including ads for “get-rich-quick schemes” or miracle cures. In Google’s eyes, “misleading” content is content that makes false statements about the advertiser’s qualifications, like a law student advertising himself as a qualified lawyer.

“We don’t want users to feel misled by ads,” the company says on a page describing the policy. “So we strive to ensure ads are clear and honest, and provide the information that users need to make informed decisions.”

While weight-loss pills or fad diets might fall under that purview, political ads don’t. Take, for example, the recent Trump reelection ad running on YouTube, which claims that because Trump released a transcript of his call with Ukrainian President Volodymyr Zelensky, the impeachment inquiry is baseless. Despite being blatantly untrue, the ad is still running—and has been viewed over 900,000 times since it kicked off at the end of September.

As independent journalist Judd Legum first pointed out, the company says that “collecting donations under false pretenses” falls under what Google considers misinformation, and this ad does exactly that: it misinforms viewers and then induces them to visit a link that, following a short survey, prompts a political donation.

Google rolled out its own verification requirements for people looking to run political ads in 2018, requiring them to prove that they are a “citizen or lawful permanent resident,” complete with a government-issued ID and other key information. And, much like Facebook, once the advertiser is verified, the company is relatively hands-off.

“All ads that run on our platform have to comply with our ads policies,” the company explained. “For political advertisers, we have additional requirements such as verification of the advertiser, a paid-for-by disclosure and inclusion in our political ads Transparency Report.”

Twitter

When Joe Biden sent letters to both Facebook and Twitter asking the companies to take down the misleading political ads he was featured in, the answer from both was a resounding “no.”

In a statement to Adweek, a Twitter spokesperson said that the ad didn’t violate the platform’s policies. The spokesperson then pointed toward the page of the advertiser that ran the ad; as of this writing, the ad is still running. Since its debut on Oct. 1, it has racked up more than 9,000 impressions.

The policies that politicians do need to follow are vague, to say the least. Outside of the required certification for political advertisers—which includes having an account with a clear photo in both the header and profile, and a website that’s “consistent with the handle’s online presence”—there isn’t clarity about what can and cannot be said on the platform.

Meanwhile, on the same page describing the policies for political advertisers, the company notes that “news publishers” granted the distinction to run political content remain subject to Twitter Ads Policies—something it doesn’t establish for stand-alone political campaign advertisers. These policies ban, among other things, “making misleading, false or unsubstantiated claims during the promotion of a product or service” and “promoting misleading information.”

Twitter wouldn’t comment on whether political ad campaigns are held to the same standards.


Shoshana Wodinsky (@swodinsky, shoshana.wodinsky@adweek.com) is Adweek's platforms reporter, where she covers the financial and societal impacts of major social networks. She was previously a tech reporter for The Verge and NBC News.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}