Video, image and text-based memes are passé. Voice-based memes are the latest hits on social media platforms, especially TikTok and YouTube, and the next format being flooded with content created through generative artificial intelligence.
The majority of these AI-generated videos portray politicians, such as Joe Biden and Donald Trump, propagating misinformation and conspiracy theories, according to aggregated data research by brand safety ad-tech firm Zefr. Recent examples include the pair trash-talking while gaming, Trump claiming “America is tyranny and fascism,” and Biden declaring “Epstein didn’t kill himself.”
As of early 2023, the number of AI-generated videos portraying politicians across social platforms has already surpassed 2022’s full-year total by over 130%, per Zefr research.
These videos have garnered over 11 million views and more than 2 million likes, according to Or Levi, director of data science at Zefr. And over 90% of these videos are monetizable, meaning that advertisers are currently running ads adjacent to such content. The company couldn’t share how much was being spent on ads showing up next to AI-generated audio misinformation content.
“There is a real risk around deep fakes and [AI-generated audio] technology evolving in time for the next election cycle,” said Kieley Taylor, global head of partnerships at GroupM.
Recent advancements in AI, combined with access to text-to-speech models such as Microsoft’s Vall-E or open-source tools such as fakeyou.com, have made it easy for bad actors to spread misinformation on social media. At the same time, advertisers are increasingly concerned about combating misinformation, even as social media companies invest in content moderation. However, current misinformation tools like keyword blocklists and static inclusion lists may fall short, sources told Adweek.
The report also found creators using ChatGPT to write scripts for misinformation videos. Text-to-speech tools such as Vall-E, despite strict content safeguards, have been used to portray politicians as Reptilians, or Lizard People, as part of the Illuminati.
AI solutions outside of social
Outside of TikTok and YouTube—where these videos are currently surging—advertisers can use keyword blocklists or AI-driven contextual tools from third-party verification companies like DoubleVerify and Integral Ad Science (IAS) to steer ads toward content that is brand safe and brand suitable.
GroupM’s brand partners rely on both, Taylor said, reserving blocklists for extreme or pirated content. Contextual AI tools, calibrated to each client’s risk tolerance, help refine potential content adjacencies on quality publishers whose content themes can range from suitable to unsuitable.
The industry is also being more prudent about the partners it does business with, weeding out junky supply.
“Any attempts around supply path transparency will allow clients to know exactly where their ads will run,” said Deva Bronson, EVP, global head of brand assurance, Dentsu Media.
Still, tools such as keyword blocklists and static inclusion lists can be ineffective on social platforms, which require content analysis beyond website domains, according to Andrew Serby, chief commercial officer at Zefr.
This means advertisers need to look at more scalable AI fact-checking solutions to help them measure misinformation across social platforms. (Zefr is one company that provides such tools, based on training data shared by fact-checking organizations, but ad buyers interviewed for this article weren’t currently using them.)
Fears extend across the audio landscape
To prevent brand safety fears from spreading from social platforms into podcasts, audio companies like iHeartMedia are developing new tools. The hope is that those fears won’t stem the flow of ad dollars into podcasting’s (relatively small) $2 billion ad market, as sized by the Interactive Advertising Bureau.
“Podcasting is growing in terms of listenership, audiences and monetization of those audiences,” said Taylor.
However, Bronson points out that monetizable podcasts are usually available through partners like NPR or Spotify, whose best interests lie in moderating that content.
“I haven’t had any concerns about misinformation being rampant in podcasts that our clients are currently buying come across my desk yet,” she said.