Why Tech Platforms Have a Responsibility to Reel In Political Ads

These giants have influence and need to recognize it


It’s hard to recall a more fateful U.K. election than yesterday’s at any point since the first post-World War II vote. Its outcome will have far-reaching consequences well beyond the U.K.: the future of the European Union is at stake, and the world is watching.

A whirlwind of political maneuvering, from a new prime minister to heavily criticized manifestos, has destroyed any semblance of political certainty. With parties accusing one another of disinformation and the public struggling to separate fact from fiction, the tech giants Facebook, Google and Twitter have found themselves caught up in accusations and in calls for structural frameworks governing how politically focused advertisers use their platforms.

No industry in recorded history has commanded the audience reach that the social media companies do. The Cambridge Analytica scandal highlighted the power and data they possess, as well as the impact microtargeting can have on voters’ perception of a fair election.

There is also a deeper issue at play: whether the tech giants are in fact the platforms they claim to be or whether they should be legally classed as publishers. Each designation carries different legal and ethical obligations around responsibility for the content they allow.

Facebook, Twitter and Google have resisted calls to regulate certain user content, but the focus has now shifted to how the tech giants handle political advertising on their sites, content they earn revenue from and bear more responsibility for vetting. Each has responded in its own way, but the question remains how well these responses will hold up against the reality of the general election.

The tech giants respond


Zuckerberg’s outright refusal to regulate political advertising has been damaging to Facebook. Accusations of openness to extreme political views have put the company under huge pressure to introduce stringent checks. In recent weeks, the platform has attempted some damage control by enforcing identity checks on any entity intending to publish political adverts. The move prevents political parties from buying ads in each other’s names, but it still doesn’t ensure that the content is factually correct.

To its credit, and in stark contrast to Facebook, Twitter has taken the bold step of banning all political advertising. While the move was widely praised, a blanket ban may have some unforeseen effects. The Conservative Party has already skirted Twitter’s rules by changing the name of one of its accounts to impersonate a fact-checking service, a move that drew criticism but no punishment from Twitter or the Electoral Commission.

Google opted to continue accepting political adverts but has instead made clear what content is prohibited. Banned advertisements now include content making “demonstrably false claims” that could “significantly undermine participation or trust in an electoral or democratic process,” it said. Google is also weighing restrictions on how narrowly its microtargeting tools can be used. It’s a step in the right direction, but one that may have little effect in the U.K., where very few advertisers used the microtargeting tools in the first place, according to an investigation by The Guardian.

The publisher-platform conundrum

As the tech giants fine-tune their policies and digital ad tools to reconcile their relationship with politics, the debate over whether they are platforms or publishers needs to be addressed. Each designation holds the companies accountable to different rules and regulations in relation to their users.

To give an example, Facebook maintains that it is a platform. That position removes its legal responsibility for the content on its network, political advertising and beyond. Were it classed as a publisher, it would be held accountable for the advertising it hosts.

The more cynical perspective is that being a publisher is costly, given the editorial staff needed to make proper fact-checking and publication decisions for both legal and ethical reasons.

As for Zuckerberg’s argument that publishing clearly misleading ads on Facebook is a matter of free speech, that is plainly wrong. Free speech does not come without limitations: legal and ethical limits have applied to it since it was first defined, and it carries social responsibilities. Experience tells us that abusing free speech ultimately undermines it; the rise of fascism in 1930s Europe is a potent example.

In the early 20th century, the U.S. government stepped in to break up the oil, steel and rail monopolies. It’s now time for governments across the world to do what they’re supposed to do: act as fair arbiters when the magic of the market is not working for the common good and when social media platforms risk rapidly poisoning the liberal democracies we fought so hard for.

For professional reasons, I cannot delete my Facebook account. But I can pledge not to be an active user until the company shoulders its social responsibility as an ethical publisher that enhances our democracies instead of undermining them.