Brand safety has been a rallying cry at this year’s Cannes Lions International Festival of Creativity.
On Tuesday, a veritable supergroup of brands, agencies and platforms (including Facebook, Google and Twitter) declared they would tackle that issue and others related to harmful content with the Global Alliance for Responsible Media. Also on Tuesday, a coalition of ad-tech names unveiled the Brand Safety Institute to address much the same issue.
Although it has offered programmatic buying opportunities for a number of years, Reddit has tried to provide brand safety assurances by operating on a direct-buy system, working with media buyers to decide which subreddits their ads can acceptably appear alongside.
Now, however, it is working with Oracle, using the company's contextual intelligence technology to assess in real time not only whether the subject of a given forum is fundamentally divisive, but also whether the content on that forum is divisive at that particular moment. That makes it a handy tool for forums that might be overrun with internet trolls.
“I think brand safety is something that the industry has identified as important, but as a whole hasn’t really made the progress that it wants to,” said Jen Wong, COO of Reddit.
“And that’s something that we certainly heard from clients, especially in a UGC environment,” she said, adding that Reddit’s media buyers wanted one thing: control.
“The way they think about it is that there’s this dial where there’s more sensitivity to our brand at moments and sometimes there’s less,” she said. “If you think about a marketer that’s trying to maximize reach, they might toggle their sensitivity a bit lower versus when they’re trying to sell a really important brand story when they might dial up.”
The news caused ears to perk up for a few reasons. UGC has always been ripe for brand safety issues, and tech companies know this. Even giants like YouTube and Facebook are scrambling to contend with the billions of users churning out a seemingly infinite stream of content that, time and again, lands either (or both) in hot water with media buyers.
Meanwhile, Reddit has earned a reputation as a hotbed for virulent discussions that range anywhere from controversial at best to downright offensive. Last year, Steve Huffman, CEO of Reddit, flatly admitted he found it “impossible” to consistently enforce moderation on the hate speech that appeared to be consuming certain corners of the platform.
While the Oracle partnership doesn’t remove this content, the idea is to keep advertisers and their budgets a little bit removed. But some voices in the ad-tech space are skeptical about how effective the duo will be.
A blacklist of keywords works well when a brand doesn’t want its ads showing up alongside content that contains racial slurs or pornographic material, but the lion’s share of brand safety issues occurs in content that’s not overtly offensive.
A 2018 survey led by GumGum, for example, found that the types of unsafe content for brands ran the gamut from natural disasters to divisive politics to “fake news”—all of which exist on Reddit.