How Personalization and Targeted Ads Enable Election Meddling

Being up front with users can save us from being hacked again

Three years after the 2016 presidential election, America is still coming to grips with the chaotic circumstances that surrounded it. And whatever happens in 2020, it is unlikely to rival the historical significance of an event Politico christened “the first real internet election.”

Representing the confluence of technology and culture at a pivotal moment in global politics, 2016 showed Americans just how much sway the digital world holds. Social media platforms have borne the brunt of responsibility for allowing the election to become a spectacle fueled by “fake news,” foreign propaganda and incendiary rhetoric.

As next year’s presidential election draws nearer and election meddling comes under renewed scrutiny, the industry’s efforts to combat false information have wrongly focused on the message rather than the mechanism that enabled disinformation to spread in the first place.

What did Russia do?

According to Facebook, at least 126 million Americans were exposed to Russian-backed political messages during the 2016 campaign. Between Q4 2018 and Q1 2019, the company removed more than 2 billion fake accounts within minutes of registration. More recently, it took down more than 1,800 fake accounts and groups engaged in disinformation targeting users in Thailand, Hong Kong and Ukraine. Twitter has also had success in its hunt for Russian actors, identifying about 50,000 bots or automated accounts in 2017 and tens of millions of fake accounts in 2018.

With just $100,000 in ad spend, Russia targeted voters through automated accounts and reached at least 13 million users. All Russia did was plant a seed that grew in the fertile soil of every marketer’s dream: an algorithm perfectly calibrated to deliver exactly the right information to exactly the right people.

No easy answers 

Combating this problem isn’t simple. Social media platforms aren’t the only unwitting purveyors of questionable content. Other digital content companies, such as online news publications, are just as likely to spread it, because they too depend on algorithms. With the ad-blocking crisis in full swing, no publication wants to risk a high bounce rate.

Dynamically generated content shows each user a different set of stories depending on their browsing habits, location and other personal information. Even if the social media giants disabled their ad platforms, and thereby cut off a significant source of revenue, the same algorithms that present ads to users would continue to replenish newsfeeds with content, regardless of its authenticity.
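
To make that mechanism concrete, here is a minimal sketch of a feed-ranking loop. Every name, weight and story below is hypothetical, not any platform’s actual code; the point is what the scoring function never consults: whether a story is true.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    topics: set             # what the story is about
    engagement_rate: float  # historical click/share rate

@dataclass
class UserProfile:
    interests: dict         # topic -> affinity inferred from browsing habits
    location: str

def rank_feed(user: UserProfile, stories: list) -> list:
    """Order stories purely by predicted engagement for this user.

    Accuracy is not an input: a false story with high topical affinity
    will outrank a true one every time.
    """
    def score(story: Story) -> float:
        affinity = sum(user.interests.get(t, 0.0) for t in story.topics)
        return affinity * story.engagement_rate
    return sorted(stories, key=score, reverse=True)

# Two users with different habits are shown entirely different "news".
alice = UserProfile(interests={"immigration": 0.9, "sports": 0.1}, location="OH")
stories = [
    Story("Local team wins title", {"sports"}, 0.05),
    Story("Shocking border claim (unverified)", {"immigration"}, 0.30),
]
print([s.headline for s in rank_feed(alice, stories)])
# ['Shocking border claim (unverified)', 'Local team wins title']
```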

Efforts to vet, tag or demote this content with the aid of human intelligence cannot keep pace. More than 54,000 links are shared on social platforms every minute (roughly 78 million a day), and that figure excludes memes, text posts and any content passed around in private groups and group chats.

A problem as big as the internet 

Third-party code is what makes people so susceptible to the influence of this robotized information landscape. A staggering 80%–95% of the code on media sites is created by third parties. Much of it is useful and benign, but the rest is vulnerable to online manipulation. A significant portion exists to build a personalized web experience for marketers, and that same machinery is increasingly being used by hackers, nation-state adversaries and populist groups to incite division and spread disinformation.
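
One way to see that ratio on any given page is to count how many script tags load from domains other than the publisher’s own. Here is a rough audit sketch using only Python’s standard library; the page and domains below are made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptAuditor(HTMLParser):
    """Collect the src attribute of every external <script> tag."""
    def __init__(self):
        super().__init__()
        self.script_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.script_srcs.append(src)

def third_party_scripts(html: str, site_domain: str) -> list:
    """Return script URLs served from somewhere other than the publisher."""
    auditor = ScriptAuditor()
    auditor.feed(html)
    result = []
    for src in auditor.script_srcs:
        netloc = urlparse(src).netloc
        # Relative URLs (empty netloc) are first-party by definition.
        if netloc and not netloc.endswith(site_domain):
            result.append(src)
    return result

# Made-up page: one first-party script, two third-party tags.
page = """
<script src="https://news.example.com/app.js"></script>
<script src="https://ads.tracker-network.net/pixel.js"></script>
<script src="https://cdn.personalize.example.org/widget.js"></script>
"""
print(third_party_scripts(page, "news.example.com"))
# ['https://ads.tracker-network.net/pixel.js',
#  'https://cdn.personalize.example.org/widget.js']
```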

How to fight fake news 

Fundamentally, this problem cannot be fought by moderating or censoring the message. The mechanism that propagates fake news is far more insidious: it picks up false information that would otherwise receive little attention and spreads it like wildfire. If that mechanism remains unchanged, the root of the problem will never be addressed.

Fake news can be diminished by attacking that root problem: targeting, enabled by the widespread collection and exchange of personal data weaponized to find voters receptive to particular campaign messages. Users should also be told why they are seeing a particular message in the first place. Transparency about targeting will break the illusion of objectivity.
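
What might such a disclosure look like? A minimal sketch, assuming a hypothetical set of targeting criteria exposed by the ad platform; the keys and wording here are illustrative, not any platform’s real API.

```python
def explain_targeting(ad_id: str, criteria: dict) -> str:
    """Render a plain-language disclosure for a targeted message.

    The criteria keys are hypothetical stand-ins for whatever audience
    definition the ad buyer actually selected.
    """
    reasons = []
    if "location" in criteria:
        reasons.append(f"you are in {criteria['location']}")
    if "interests" in criteria:
        reasons.append("your inferred interests include "
                       + ", ".join(criteria["interests"]))
    if "lookalike_of" in criteria:
        reasons.append(f"you resemble {criteria['lookalike_of']}")
    return (f"Ad {ad_id}: you are seeing this because "
            + "; and because ".join(reasons) + ".")

print(explain_targeting("a-1017", {
    "location": "Wisconsin",
    "interests": ["gun rights", "veterans' issues"],
    "lookalike_of": "an audience list uploaded by the advertiser",
}))
```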

Finally, publishers must understand what their website users actually experience and protect their platforms from being misused by adversaries. From exposing vulnerable and malicious third-party code to setting better standards for data privacy and the media supply chain through legislation and self-regulation, these platforms can ultimately be safeguarded.
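
On the third-party code front, one control that already exists in every major browser is the Content-Security-Policy header, which tells the browser to refuse scripts from any domain the publisher has not explicitly allowed. A minimal sketch of building such a header follows; the allowlisted ad partner is hypothetical.

```python
def build_csp_header(allowed_script_hosts: list) -> tuple:
    """Build a Content-Security-Policy header restricting script sources.

    A browser enforcing this policy blocks any script injected by a
    compromised third-party tag if it loads from an unlisted domain.
    """
    sources = " ".join(["'self'"] + allowed_script_hosts)
    return ("Content-Security-Policy",
            f"script-src {sources}; object-src 'none'")

# Hypothetical allowlist: the publisher's own code plus one vetted ad partner.
name, value = build_csp_header(["https://cdn.vetted-ads.example"])
print(f"{name}: {value}")
# Content-Security-Policy: script-src 'self' https://cdn.vetted-ads.example; object-src 'none'
```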