This week’s YouTube brand-safety furor arrived via the comments section, and now not even seemingly innocent content on the platform is 100 percent safe for advertisers.
The episode again underlined the inherent risks that user-generated content platforms pose to advertisers. Major brands including Disney, Nestle and L’Oreal ran ads ahead of innocuous videos of children swimming or doing gymnastics, content that on its surface is unobjectionable. The trouble lay in the comments beneath those videos, which were infested with pedophiles.
As we’ve seen over the last several years, this brouhaha will likely play out in an all-too-familiar fashion: Brands will raise concerns; buyers will shrug; platforms will continue to print money. YouTube began an extensive outreach exercise this week, hosting a conference call Wednesday with representatives from all major ad agency holding companies as well as several unnamed advertisers to address their concerns.
Adweek obtained a copy of a document YouTube subsequently distributed to concerned parties on Thursday. That memo stated that the company has suspended comments on “tens of millions of videos” that could be subject to predatory comments and is working on fine-tuning “how ads are placed on channels—even innocent ones—that primarily feature minors and could attract bad actors.”
A disappointing performance
One holding company executive with direct knowledge of the matter, speaking on condition of anonymity, told Adweek that YouTube’s efforts to improve brand safety back in 2017 had failed to meet expectations.
“They said on the call specifically that the volume is too big for them to curate, so they would have to rely on machine learning, using it to elevate good comments into a top comments category and demote other types,” the source said.
This person continued, “Google glossed over the fact that the content and creators need to be protected (it’s where they get the money from), but when you look at [Matt Watson’s] video and the Wired article, there’s a significant amount of content that isn’t just innocent young girls doing yoga and dancing.”
While some observers expressed understanding, given that many of the offending comments appeared beneath legitimate videos, others believe YouTube’s profit motive prevents the company from applying all the assets at its disposal to curtail the problem. This dynamic, they said, is in keeping with a media buying game that rewards quantity over quality.
Media buyers ultimately go through a risk-reward analysis before deciding whether or not to maintain any kind of presence on social networks, but the recurring chorus of this sad song is that everyone needs to do more—especially where user-generated content is concerned. Whether that responsibility falls to the platform, the agency or the brand depends on who is talking.
Clive Record, head of global media partnerships at Dentsu Aegis, said that risk is unavoidable when it comes to UGC, despite the machine learning-powered curation efforts made by Google following past controversies.
“Advertisers should have high expectations both of Google and their agencies in terms of managing that risk—but should also recognize it cannot be failsafe,” he added.
In other words, this game of Whack-A-Mole will continue despite a plethora of ad-tech intermediaries that claim to provide tools addressing everything from fraud prevention to specific targeting.
Google’s policies prevent third parties from helping out
Ari Paparo, CEO of Beeswax, a demand-side platform that helps advertisers automatically identify where best to buy ads based on audience data, called such cracks in the operations of a multibillion-dollar business “ridiculous.”
“A very important, nuanced point to remember here is that Google refuses to offer an API,” Paparo said. “The result is that they are taking away the ability of [third-party] buy-side vendors to do something about it.”
Many large platforms struggle with the responsibilities of maintaining child safety, said Dylan Collins, CEO of SuperAwesome. His company claims to help brands engage with kids in COPPA-compliant ways and works with companies that have large underage audiences, such as Hasbro and Lego.
“Fundamentally, the Silicon Valley companies are ill-equipped to deal with the challenges that come with having kids on a platform,” said Collins. Despite the age restrictions on platforms such as Facebook, Twitter and YouTube, many were built on the assumption that all users would be adults.
Collins said advertisers at large have lost patience with wave after wave of bad publicity. He also noted that child protection scares are starting to affect brands with a significant number of consumers under the age of 13.
But the question, as always, remains: Will advertisers ever pull their money from these platforms altogether?
Why many buyers soon revert to type
“Until Google can protect our brand from offensive content of any kind, we are removing all advertising from YouTube,” an AT&T spokesperson said yesterday.
Yet repeated flare-ups that highlight the inadequate safety protocols of such platforms appear to have had little, if any, impact on their bottom lines. Many buyers publicly extol brand-safe media environments in the aftermath of such high-profile controversies, but the vast majority come back once the outrage dies down.
It’s hard to justify exiting a major platform when you can reach millions of eyeballs at a cost-effective price. Even AT&T returned to YouTube after a very public two-year absence.
Speaking recently with Adweek, Ana Milicevic, co-founder and principal at consultancy Sparrow Advisers, explained that such returns are not always made at the behest of the primary advertiser; oftentimes they are the act of an intermediary.
“When you get this, it’s clearly a case of advertisers prioritizing targetability over brand safety,” she added.
Ultimately, legislation may be the only way to fully address this issue.
“The next two to three years will help define what those platforms look like,” said the agency executive. “I know, based on conversations with the people designing GDPR and other privacy initiatives, that they want to go much further and feel audiences are being exploited.”
She added, “Just because people opted in doesn’t mean they’ve read 50-page privacy policies. There’s a specter of things happening behind the scenes, and we don’t yet fully understand the impact.”
Additional reporting by Patrick Kulp