Brand safety has been a rallying cry at this year’s Cannes Lions International Festival of Creativity.
On Tuesday, a veritable supergroup of brands, agencies and platforms (including Facebook, Google and Twitter) declared they would tackle that issue and others related to harmful content through the Global Alliance for Responsible Media. Also on Tuesday, a coalition of ad-tech names unveiled the Brand Safety Institute to address much the same issue.
Not to be outdone, Reddit and Oracle Data Cloud announced a similar collaboration, also on Tuesday, to provide brand safety controls around a real-time feed of user-generated content on the roughly 140,000 forums called subreddits populating the so-called “front page of the internet.”
Although Reddit has offered programmatic buying opportunities for a number of years, it has attempted to provide brand safety assurances by operating on a direct-buy system, working with media buyers to decide which subreddits are acceptable for their ads to appear alongside.
Now, however, it is working with Oracle, whose contextual intelligence technology assesses in real time not only whether the subject of a given forum is fundamentally divisive, but also whether the content on that forum is divisive at that particular moment—a handy distinction for forums that might be temporarily overrun with internet trolls.
“I think brand safety is something that the industry has identified as important, but as a whole hasn’t really made the progress that it wants to,” said Jen Wong, COO of Reddit.
“And that’s something that we certainly heard from clients, especially in a UGC environment,” she said, adding that Reddit’s media buyers wanted one thing: control.
“The way they think about it is that there’s this dial where there’s more sensitivity to our brand at moments and sometimes there’s less,” she said. “If you think about a marketer that’s trying to maximize reach, they might toggle their sensitivity a bit lower versus when they’re trying to sell a really important brand story when they might dial up.”
The news caused ears to perk up for a few reasons. UGC has always been ripe for brand safety issues, and tech companies know this. Even giants like YouTube and Facebook are still scrambling to contend with billions of users churning out a seemingly infinite stream of content that regularly lands one or both in hot water with media buyers.
Meanwhile, Reddit has earned a reputation as a hotbed for virulent discussions that can skew anywhere from controversial at best to downright offensive. Last year, Steve Huffman, CEO of Reddit, flatly admitted he found it “impossible” to consistently enforce moderation on the hate speech that appeared to be consuming certain corners of the platform.
While the Oracle partnership doesn’t remove this content, the idea is to keep advertisers and their budgets a little bit removed. But some voices in the ad-tech space are skeptical about how effective the duo will be.
Some sources suggest that an overreliance on keywords to help advertisers steer clear of brand safety problems, a capability Oracle gained through its 2018 acquisition of British startup Grapeshot, may undervalue the all-important context in which brands appear.
A blacklist of keywords works well when a brand doesn't want its ads showing up alongside content containing racial slurs or pornographic material, but the lion's share of brand safety issues occurs in content that isn't overtly offensive.
A 2018 survey led by GumGum, for example, found that the types of unsafe content for brands ran the gamut from natural disasters to divisive politics to “fake news”—all of which exist on Reddit.
Oracle isn't unique in its keyword-based approach; the majority of popular brand-safety solutions employ keyword searches. But as the subtleties of policing UGC are increasingly exposed, advertisers are asking for manual content curation or more robust use of AI to steer their brands away from divisive material.