Controlling the Spread of Fake News From Deepfake Technologies

What to consider when it comes to misinformation during the rise of AI-generated audio and video

In a video interview with Access Hollywood, comedian Kevin Hart endorses Donald Trump for president in 2024. The clip goes viral, and his fans are up in arms. Meanwhile, WKCU in Phoenix leads with a story that a second coronavirus has hit the mainland … and it’s time to panic.

Of course, there’s no real need to panic yet. Neither of these stories is real, and neither report ever happened.

Yet this sort of misinformation outbreak is a distinct possibility in the near future, as artificial intelligence promises to transform just about every industry. That AI revolution will mostly be wondrous: faster processes, advances in robotics, self-driving cars, trucks and planes.

But it will also surely benefit those who profit from misinformation, or those who just enjoy wreaking havoc. Until now, misinformation has been delivered mostly through text, making it relatively easy to check whether someone actually said or did what was being reported. With the rise of AI-generated audio and video, that “evidence” can be synthesized outright, which means fake news is getting harder to spot.

Brands are not exempt from unregulated AI

You’ve already seen the Tom Cruise deepfakes. Maybe you’ve swapped in your buddy’s photo and made it look like he was singing Smash Mouth’s “All Star,” just like that bogus Mark Zuckerberg clip. It’s only going to get crazier, more unpredictable and available to anyone with a smartphone.

These are the kinds of things the media and advertising industries need to be contemplating and preparing for right now. The potential problems resulting from unregulated AI are many, and the ethical concerns are only beginning to come into focus.

Here’s the good news: We may be about to stumble into the next booming startup industry. Hey VCs, get ready to hear even more pitches for Deepfake Detector services.

It’s one thing for brands to worry about whether they end up next to “safe” or suitable content on the web. Soon they’ll need to think about whether their ads are adjacent to dangerous deepfakes. Or even worse, what if their brand is featured in one?

So any startup founder who can figure out how to keep ads away from deceptive content should have themselves a huge business.
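
If such a service existed, the plumbing might look something like a pre-bid filter: score the page’s video with a detection model and decline the impression when the risk is too high. Here’s a minimal sketch in Python; `fetch_video_score` and the threshold are purely hypothetical stand-ins, not any real ad-tech API:

```python
# Hypothetical pre-bid brand-safety filter: skip ad slots on pages whose
# video a deepfake detector flags as likely synthetic.
from dataclasses import dataclass

@dataclass
class BidRequest:
    page_url: str
    video_id: str

def fetch_video_score(video_id: str) -> float:
    """Stand-in for a detection-service call; returns a fake-probability in 0..1."""
    return 0.02  # placeholder value for illustration only

def should_bid(request: BidRequest, max_fake_prob: float = 0.10) -> bool:
    """Bid only when the page's video scores below the risk threshold."""
    return fetch_video_score(request.video_id) < max_fake_prob

print(should_bid(BidRequest("https://example.com/story", "vid-123")))  # True
```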

Before we delve deeper into the marketing implications of this, let’s talk about news.

AI is coming to journalism, for better or worse

We’re already seeing automation technology that helps publishers categorize and analyze content for advertisers or distributors, extracting insights and adding metadata without humans having to review and label every single video and article.
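
To make that concrete, here is a minimal sketch of automated tagging using an off-the-shelf zero-shot classifier from Hugging Face’s transformers library. The model choice and candidate labels are illustrative assumptions, not any publisher’s actual setup:

```python
# Auto-tag an article with a topic label, no human reviewer in the loop.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

article = ("The quarterback threw for 300 yards as the home team "
           "clinched a playoff berth on Sunday night.")
labels = ["sports", "politics", "business", "health"]

result = classifier(article, candidate_labels=labels)
# Labels come back sorted by score; attach the top one as metadata.
metadata = {"topic": result["labels"][0], "confidence": result["scores"][0]}
print(metadata)  # e.g. {'topic': 'sports', 'confidence': 0.9...}
```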

In some parts of the world, “synthetic media” has already arrived. Anchors can record one video report and machines can use their images and voices to churn out hundreds of versions.

That may sound scary in a “robots are going to take our jobs” sort of way, but overall this is terrific for a beleaguered industry, as long as the technology is used responsibly. The more that technology can streamline rote tasks, the more time editors and writers have to produce great content.

And the more that machines can automate video production, the more content companies can churn out. Time spent, ad inventory and ad revenue should all go up.

That is, as long as the right protections are put in place.

We have reached a tipping point where it is nearly impossible for a human to differentiate between an AI-generated synthetic video and one that is “original.” So who’s to say that the bad guys can’t harness all this coming efficiency tech and use it to weaponize lies?

Of course, the abuse of AI is one thing. Newsrooms also face huge ethical questions in how they will apply it.

For example, should a synthetic reporter’s ethnicity match that of its audience?

And as machine learning algorithms increasingly help deliver personalized content and news coverage, there’s the possibility that machines will exhibit some unforeseen bias, such as disproportionately portraying minority groups in a negative light. These are thorny questions.

Advertisers face a coming authenticity deficit

Getting back to the ad world—here’s hoping that AI can be developed to sniff out potential misuse. We may be looking at a future of good machines versus bad ones.
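
For a sense of what the “good machines” might look like, here is a toy sketch of a frame-level deepfake detector in PyTorch. The model below is untrained and its scores are noise; real detectors are trained on labeled corpora such as FaceForensics++ and use far stronger backbones. This only illustrates the overall shape: score each frame, then aggregate:

```python
# Toy frame-level deepfake detector: per-frame scores averaged per video.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single video frame: probability that it is synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):                # frames: (N, 3, H, W)
        x = self.features(frames).flatten(1)  # (N, 32)
        return torch.sigmoid(self.head(x))    # (N, 1) fake-probabilities

def score_video(frames: torch.Tensor, model: FrameClassifier) -> float:
    """Average per-frame fake scores into a single video-level score."""
    with torch.no_grad():
        return model(frames).mean().item()

model = FrameClassifier()            # untrained, so the output is noise
clip = torch.rand(8, 3, 224, 224)    # eight stand-in frames
print(f"fake probability: {score_video(clip, model):.2f}")
```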

That leaves many yet-to-be-answered (or even conceived) questions, such as:

  • Whose job should it be to police content for legitimacy? We’ve already seen social media platforms struggle with misinformation when it’s grandma posting bogus vaccine stories. What happens when it’s armies of supercomputers?
  • Do reporters and celebrities have to verify all of their work? Will people start to adopt new forms of digital trademarks? (One possible shape is sketched after this list.)
  • Are publications going to have to constantly “vouch” for their headlines? And tamp down fake ones?
  • Is an AI ethics researcher about to become the most coveted title in publishing? Where are we going to find this talent? Or are we just going to create another AI machine to fight the fake content?
  • Do we need to engage lawmakers now rather than leaning on self-regulation?
  • And lastly, what does this all mean for consumers? Are we going to be able to believe anything anymore?
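
On the “digital trademarks” question above, one plausible shape is cryptographic signing: a newsroom signs each story with a private key, and readers or platforms verify it against the published public key. A minimal sketch assuming Python’s cryptography package and Ed25519 keys; real provenance efforts (such as the C2PA standard) are considerably more elaborate:

```python
# Sign a story so anyone holding the public key can verify its origin.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the newsroom
public_key = private_key.public_key()       # published for readers

story = b"City council passes transit budget after marathon session."
signature = private_key.sign(story)

# A reader (or platform) checks the byline cryptographically.
try:
    public_key.verify(signature, story)
    print("verified: untampered and from this publisher")
except InvalidSignature:
    print("warning: signature mismatch; treat as unverified")
```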

More likely, we’re about to enter a new era of “Don’t Trust Until Verified.” It’s time to start laying the groundwork now to grapple with this coming reality.