At the Cannes Lions Festival of Creativity in June, Unilever CMO Keith Weed, a leader in the ad industry’s attempts to tackle a fragmented and opaque ad environment, gave brands a warning: lose trust and risk it all.
“Trust is everything for a brand,” Weed said. A brand without trust, he added, is simply a product.
It’s been another tough year for brand safety. Whether it’s on YouTube or Facebook, within a mobile app, or even alongside a piece of news content, brands must determine how to protect against appearing next to objectionable content when advertising on digital media.
DoubleVerify president and CEO Wayne Gattinella said that the problem is becoming harder to address as video content, mobile advertising and social media have complicated a once-simple digital advertising ecosystem. Advertisers are keen to identify inappropriate content before a brand safety disaster occurs.
“It’s a really complicated and sophisticated environment,” Gattinella said. “It’s far more complex for everyone, including an advertiser, to be able to see through ahead of the environment where their ads are showing.”
Plus, the definition of what constitutes unsafe content for brands has widened dramatically, and industry insiders expect it may continue expanding.
“Advertisers are even having trouble advertising on news,” said Chris Pavlovski, CEO of the video and content management platform Rumble, adding that brands have expressed hesitation about advertising on other charged topics.
Even as the definition of brand safety evolves, marketers are optimistic about automated approaches, but they remain cautious about the effectiveness of the existing technology.
Leaning into AI technology
Gattinella is a big believer in tackling brand safety concerns with technology.
“The idea that we are going to be able to, practically speaking, screen, rank, rate and potentially suppress ads from particular pages and types of content is not a scalable, sustainable solution,” Gattinella said, arguing that only automated, technology-driven tools can provide the transparency and trust that brands require online.
There’s an appetite for developing automated brand safety tools. Take the Israeli cybersecurity company Cheq, which received $5 million in Series A funding in June. Chief strategy officer Daniel Avital said Cheq employs the same type of natural language processing technology that has been used by militaries around the world for surveillance purposes. The interest in using AI to address brand safety, Avital said, comes down to urgency: the industry needs a solution, stat.
“One bad screenshot can send a brand into hysteria, or one bad ad placement on a publisher could have millions of exposures,” Avital said.
Cheq is far from the only player hoping to crack the AI code. From GumGum and other ad-tech firms to big players like Facebook and YouTube, companies across the spectrum are all working on AI-driven technologies. But some marketers said fully relying on automated tools wasn’t possible just yet.
“Automation is absolutely imperative,” said Mastercard CMO Raja Rajamannar, stressing that the sheer scale of the market makes human review of every ad placement impossible. “But it is not to a level of sophistication today where we can fully rely on automation.”
Relying on human review (for now)
Rajamannar said that Mastercard takes a somewhat conservative approach to digital, prioritizing human-centric approaches like whitelists and blacklists, as well as working with premium publishers, to ensure brand safety.
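The whitelist-and-blacklist approach described above can be illustrated with a minimal sketch. Everything here is a hypothetical example, not Mastercard's actual tooling: the domain lists, the function name and the conservative "whitelist-only" default are all assumptions made for illustration.

```python
# Hypothetical sketch of a whitelist/blacklist check for ad placements.
# Domains and decision order (blacklist wins, then whitelist) are
# illustrative assumptions, not any brand's real configuration.

WHITELIST = {"trustednews.example", "premiumpublisher.example"}
BLACKLIST = {"objectionable.example"}

def placement_allowed(domain: str, whitelist_only: bool = True) -> bool:
    """Return True if an ad may serve on this domain."""
    if domain in BLACKLIST:
        return False  # an explicit block always wins
    if whitelist_only:
        # Conservative mode: serve only on known-good inventory,
        # trading reach for safety ("quality over quantity").
        return domain in WHITELIST
    return True  # permissive mode: anything not blacklisted

print(placement_allowed("trustednews.example"))    # True
print(placement_allowed("objectionable.example"))  # False
print(placement_allowed("unknown.example"))        # False in whitelist-only mode
```

The whitelist-only default reflects the trade-off Rajamannar describes: an unknown site is treated as unsafe by default, which caps scale but limits brand risk.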
“We recognize that there’s a lot of opportunity for us to reach consumers in a hyper-targeted kind of way,” Rajamannar said of the abilities to automatically target customers. “But we have been favoring quality over quantity. … What’s the point of having scale if it’s not high quality and it puts the brand at risk?”
Pavlovski said he’s optimistic about automated tools that can scan for brand safety. Rumble, which helps video creators monetize their content and helps marketers advertise on brand-safe video inventory, uses algorithms to scan videos uploaded to its platform. But until those tools are perfected, Pavlovski said, Rumble also relies on human input to improve the tools and help ensure brand safety from the outset.
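The hybrid model Pavlovski describes, where an algorithm makes the confident calls and people handle the borderline ones, can be sketched as a simple triage loop. This is an illustrative stand-in only: the keyword scoring, term weights and thresholds are invented for the example and are not Rumble's actual (unspecified) algorithms.

```python
# Illustrative sketch of automated scanning with a human-review fallback:
# the scanner assigns a risk score, confident scores are acted on
# automatically, and borderline scores are queued for a reviewer.
# Term weights and thresholds are assumptions made for this example.

UNSAFE_TERMS = {"violence": 0.9, "weapons": 0.8, "accident": 0.4}

def scan(transcript: str) -> float:
    """Return a crude risk score in [0, 1] based on flagged terms."""
    words = transcript.lower().split()
    return max((UNSAFE_TERMS[w] for w in words if w in UNSAFE_TERMS),
               default=0.0)

def triage(transcript: str, block_at: float = 0.8,
           review_at: float = 0.3) -> str:
    """Route a piece of content: allow, block, or send to a human."""
    score = scan(transcript)
    if score >= block_at:
        return "block"         # confident enough to act automatically
    if score >= review_at:
        return "human_review"  # borderline: a person decides
    return "allow"

print(triage("cooking tutorial with pasta"))   # allow
print(triage("graphic violence compilation"))  # block
print(triage("car accident dashcam"))          # human_review
```

The human-review queue is also where the tools improve: reviewer decisions on borderline cases become training signal, which is the feedback loop the article says platforms are counting on.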
The eventual hope industry-wide is that algorithms will improve enough to be able to make accurate calls about certain online content, Pavlovski said.
Jason Kint, CEO of the media trade association Digital Content Next, stressed that automated tools have shown “a lot of failure,” and said that human review and old-school methods—like working with premium inventory and using tools like whitelists—are “still critical for both the user and for the advertisers.”