As Election Day 2016 approached in the United States, thousands of specifically targeted Americans began seeing racially, politically and religiously charged ads on Facebook with headlines like “Don’t Mess With Texas Border Patrol” and “Satan: If I Win, Clinton Wins!” The ads spanned the spectrum of divisive American issues. Some promoted far-right positions while others were radically liberal.
The explosive nature of these ads elicited strong reactions among the people who saw them, supercharging the engagement metrics that Facebook’s algorithms use to surface popular content to ever widening audiences. As the algorithms dutifully did their work, millions of Americans would eventually be exposed to the heated debates, polarized hyperbole and false information that exponentially blossomed around these ads.
On Nov. 1, 2017, the U.S. House Intelligence Committee released samples of these Facebook ads along with the metadata used to target and buy them. All of the ads were paid for in rubles by a single Russian company with known Kremlin ties. U.S. intelligence agencies confirmed that this was a textbook example of how Russian state actors had gamed the algorithms underlying Facebook and Twitter to sow division and chaos in the run-up to the U.S. presidential election.
This was not an isolated event. The same techniques were deployed during the Brexit referendum and the recent French presidential election. And Russia is not the only offender. Trolls, spammers, terrorists, and criminals have all found ways to game the algorithms on Facebook, Twitter and YouTube to suit their dark agendas.
Even children are at risk. A particularly disturbing set of YouTube videos was recently cited by James Bridle in “Something Is Wrong on the Internet” to warn how unsuspecting children can be tricked into watching violent footage under the guise of nursery rhymes and cartoons.
While it’s tempting to blame the algorithms, the practical reality is that, with more than a trillion pieces of content created on the internet every day, algorithms have become a necessity for making sense of it all. Sophisticated artificial intelligence algorithms now power our search engines, social media feeds, video streams and notifications. They deliver news and keep our society informed.
That is why content algorithms must be carefully controlled and regulated, not by governments but by trained journalists equipped to ensure these algorithms adhere to the five principles of journalism:
1. Truth and accuracy
There is no algorithm for the truth. This is why it is so important for journalists to be placed in charge of the algorithms and be entrusted with total control of all of the inputs. For example, over the past eight years, the editorial team at Flipboard has painstakingly selected tens of thousands of reliable publishers and blogs from the left and right, domestic and international. This team has also weeded out sources proven to intentionally deceive, churn out spam, or traffic in hate. By putting trained journalists in charge of all of the inputs to the content algorithms, it is possible to ensure that only credible sources are magnified and promoted to audiences.
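The curation model described above can be made concrete with a minimal sketch: sources only reach the ranking algorithm if editors have approved them, and known bad actors are excluded entirely. The domain names, field names and filter function here are illustrative assumptions, not Flipboard’s actual implementation.

```python
# Hypothetical sketch of editor-controlled inputs to a content algorithm.
# Editors maintain the allowlist and blocklist; the algorithm never sees
# anything they have not approved.

APPROVED_SOURCES = {"example-news.com", "another-outlet.org"}
BLOCKED_SOURCES = {"known-spam-site.net"}

def eligible_for_ranking(item):
    """Only items from editor-approved, non-blocked sources are ranked."""
    domain = item["source_domain"]
    return domain in APPROVED_SOURCES and domain not in BLOCKED_SOURCES

feed = [
    {"title": "Local election results", "source_domain": "example-news.com"},
    {"title": "Miracle diet!", "source_domain": "known-spam-site.net"},
]

# Spam is filtered out before any ranking or promotion happens.
credible = [item for item in feed if eligible_for_ranking(item)]
```

The design point is that trust is decided upstream of the algorithm: the ranking code can be as aggressive as it likes about engagement, because it can only ever amplify sources a human editor has vetted.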
2. Independence
Editors should ensure that their content algorithms are in service to the audience, not the business model. Extremism, polarization and spectacularly false headlines will always drive more engagement and therefore more ad inventory. Look no further than the bottom of most websites these days to see what happens when you optimize content algorithms for maximum profit. Link-bait, bitcoin schemes and dubious diets may drive clicks and dollars, but they destroy the trust of the audience and undermine whatever good journalism is at the top of the page.
3. Fairness and impartiality
Most stories have multiple sides. Helping people discover and understand the differing perspectives of a story is crucially important, especially in political coverage. Content algorithms should be tuned to feature sources capable of providing different perspectives from the left, right and center. Editors should also be able to manually inject stories that feature fresh or under-represented perspectives, helping people break out of their “filter bubbles.”
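Tuning an algorithm to surface differing perspectives can be as simple as interleaving sources that editors have tagged by viewpoint, so no single perspective dominates the top of a story’s feed. The perspective labels and items below are illustrative assumptions, not any platform’s real taxonomy.

```python
# Hypothetical sketch: round-robin across editorially tagged perspective
# buckets so a story's feed mixes left, center and right sources.
from itertools import chain, zip_longest

def interleave_perspectives(items_by_perspective):
    """Merge buckets one item at a time; skip exhausted buckets."""
    buckets = items_by_perspective.values()
    merged = chain.from_iterable(zip_longest(*buckets))
    return [item for item in merged if item is not None]

story_feed = interleave_perspectives({
    "left": ["L1", "L2"],
    "center": ["C1"],
    "right": ["R1", "R2"],
})
# story_feed alternates viewpoints: L1, C1, R1, L2, R2
```

Editors could also prepend a manually chosen story to the merged list, which is the “manual injection” lever the text describes for breaking filter bubbles.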
4. Humanity
Algorithms should do no harm to people. Editors and engineers should collaborate on the design of content algorithms so outputs can be readily monitored at scale with real-time alerts, community participation and regular internal reporting. Levers and controls should be built in from the beginning to allow editors to quickly adjust or disable any algorithm the moment harm is detected.
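One way to build in such a lever is to wrap the ranking algorithm in a monitor that auto-disables it when a harm signal crosses a threshold, falling back to a neutral chronological order until editors intervene. The class, metric and threshold here are illustrative assumptions, not a real platform’s API.

```python
# Hypothetical sketch: a kill switch built into a content algorithm.
# When the complaint rate spikes past a threshold, ranking is disabled
# and the feed falls back to reverse-chronological order.

class MonitoredAlgorithm:
    def __init__(self, rank_fn, complaint_threshold=0.05):
        self.rank_fn = rank_fn
        self.complaint_threshold = complaint_threshold
        self.enabled = True  # editors can also flip this manually

    def report_metrics(self, complaints, impressions):
        """Real-time alert hook: auto-disable on a complaint-rate spike."""
        if impressions and complaints / impressions > self.complaint_threshold:
            self.enabled = False

    def rank(self, items):
        if not self.enabled:
            # Neutral fallback: newest first, no engagement optimization.
            return sorted(items, key=lambda i: i["published_at"], reverse=True)
        return self.rank_fn(items)

# Engagement-maximizing ranker wrapped in the monitor.
algo = MonitoredAlgorithm(
    lambda items: sorted(items, key=lambda i: i["engagement"], reverse=True)
)
algo.report_metrics(complaints=80, impressions=1000)  # 8% > 5% threshold
```

The important design choice is that the fallback behavior is defined in advance, so disabling the algorithm is a safe, instant operation rather than an emergency engineering project.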
5. Accountability
No system is perfect. Quality publishers readily accept and correct mistakes publicly to preserve trust and integrity. Using algorithms does not excuse social media platforms from these same responsibilities. In fact, the stakes are higher because algorithms can magnify mistakes and unintended consequences, often at lightning speed.
Centuries after the printing press transformed our world, we are in the midst of a new revolution that is once again radically improving how our society is informed and inspired. But the technology companies that are fortunate enough to be shaping this revolution are still grappling with the huge responsibility they have inherited. Now it is time for these companies to recognize that algorithms alone will not save us from a dystopian world of lies, link-bait and hate—only the timeless principles of journalism can do that.