YouTube Sends Memo to Major Brands and Holding Companies Regarding Latest Ad Safety Controversy

Document addresses 'soft-core pedophilia' comments

YouTube has introduced a number of updates in recent weeks.

Late yesterday, YouTube moved to address the newest in a string of “brand safety” controversies that could affect its advertising business.

According to several parties with direct knowledge of the matter, the Alphabet-owned company held a conference call with representatives from all major ad agency holding companies as well as several unnamed advertisers after Bloomberg and other outlets reported that Disney, Nestle and “Fortnite” maker Epic Games, among others, had halted or delayed their ad buys on the platform due to concerns over child safety.

Specifically, a blogger named Matt Watson posted a 20-minute video on Sunday detailing the ways in which he claimed YouTube is “facilitating pedophiles’ ability to connect with each other” and share links to child pornography in the comments sections of otherwise “innocuous” videos featuring children engaged in gymnastics and other activities.

As the story grew bigger, YouTube held the aforementioned call to address concerns of clients and their agency partners, according to Adweek’s sources. After the call, company representatives sent out the below memo detailing recent and forthcoming efforts on that front.

Spokespeople for all seven of the largest agency groups—WPP, Omnicom, IPG, Publicis, Dentsu, Havas and MDC Partners—have either declined to comment or not responded to requests for comment on this story. It is unclear whether the companies mentioned in these earlier reports were among those on the call.

“Any content—including comments—that endangers minors is abhorrent, and we have clear policies prohibiting this on YouTube,” read a statement from a YouTube spokesperson. “We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”

This quote aligns with the initiatives outlined in the memo. According to YouTube representatives, the company has taken several specific steps to address the matter over the last 48 hours, including terminating more than 400 channels based on their comment histories. YouTube has also reportedly collaborated with the National Center for Missing and Exploited Children to identify illegal activity on its platform while disabling comments on tens of millions of videos that include minors and removing dozens more videos that appear harmless but, for reasons the company did not specify, may put young people at risk.

This is not the first time YouTube has been forced to confront matters of brand safety.

"The reality is … this and many of the other issues that plague YouTube monthly now [are] not easily solved without impacts to the consumer and the product."

In early 2017, several of the world’s largest advertisers—including AT&T, Verizon and Johnson & Johnson—pulled all spending on the platform over concerns about offensive content, specifically ads running over videos claiming to support the terrorist group ISIS. Havas had previously pressed pause on all of its own clients’ YouTube and Google buys in the U.K. after related reports ran in The Times and The Guardian. AT&T announced its return to YouTube exactly one month ago, citing “increased human review of videos and improved machine learning and artificial intelligence,” such as the company’s CSAI Match program.

While this kind of incident is not unprecedented for a platform as large as YouTube, one agency source, speaking on condition of anonymity, remains skeptical of the company’s ability to truly contain the recurring challenge.

“YouTube appearing in the press again for predatory comments and practices of users of its platform is not a surprise,” the individual wrote. “When you’re dealing with a platform that generates 300-plus hours of video per minute, the realities of being able to check and verify the content become daunting.”

"If people are asking for machine learning to solve their problems, be prepared for issues like this to keep appearing."

Regarding potential solutions, he continued, “For anyone saying human vetting is the only way to go, be prepared for a vastly reduced YouTube with ‘waiting times’ and liberal arguments of censorship and free speech. If people are asking for machine learning to solve their problems, be prepared for issues like this to keep appearing. No amount of investment in people or technology will solve these issues for YouTube; it’s ingrained in the very DNA of the platform.”

While the source said the impacted spend “is rumored to be very small,” he said that “does not make it any more palatable or OK.”

“The reality is … this and many of the other issues that plague YouTube monthly now [are] not easily solved without impacts to the consumer and the product, the content producers that populate YouTube and depend on it for an income or the marketers that make such scaled use of it,” he concluded.


Patrick Coffee (@PatrickCoffee, patrick.coffee@adweek.com) is a senior editor for Adweek.
Ronan Shields (@ronan_shields, ronan.shields@adweek.com) is a programmatic reporter at Adweek, focusing on ad tech.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}