YouTube Announces New Policies in Response to Advertisers’ Child Safety Concerns

Letter goes out to brands, agencies and influencers

Today's letter follows a similar memo sent to partners last week. Illustration: Ron Goodman for Adweek

YouTube took additional steps today to address a controversy that erupted earlier this month over child safety issues, sending a letter to brands, media buyers and top content creators detailing new initiatives to help protect underage users and prevent ads from running adjacent to offensive or illegal content.

The letter, which was obtained by Adweek, went out to a large swath of companies and agencies, many of which will also be contacted by the Google-owned platform on a one-to-one basis, according to parties with direct knowledge of the matter.

Several major advertisers, including Disney, Nestle, McDonald’s, AT&T and Epic Games, pulled or “paused” their YouTube ad buys last week after a 20-minute video posted by blogger Matt Watson demonstrated how pedophiles use comment threads on videos—many of which feature minors engaged in seemingly harmless activities—to network and share links to pornography.

YouTube’s letter is largely seen as a direct response to Watson’s video and advertisers’ subsequent dismay. It also arrives one day after the FTC issued a record fine to red-hot music app TikTok for alleged violations of child privacy laws.

The platform issued a memo to a similar group of partners last week, stating that it had begun disabling comments on “tens of millions of videos” that “could be subject to predatory [behavior]” as well as “reducing the discoverability” of content that had been flagged, terminating offending accounts and “increasing accountability” among the creator community.

Today’s letter states that this effort will continue and expand “over the next few months,” with the ultimate goal of prohibiting comments on all videos featuring “younger minors” as well as those that include older minors but have been labeled at risk of attracting predatory behavior.

These classifications will be made almost entirely by algorithms, though YouTube did hire 10,000 people last year to manually review content in the interest of more accurately flagging offensive or potentially problematic materials. A “very small number” of accounts handpicked by the company will continue to manage their own comment threads.

Additional measures include the accelerated rollout of an updated machine learning “comments classifier,” which YouTube claims will eliminate twice as many offending comments as before. The company also promises to take a more unforgiving stance regarding creators deemed to have posted inappropriate material related to children. For example, Filthy Frank Clips and similar accounts were recently terminated.

“Despite this progress, threats have evolved and they’re more nuanced than ever,” reads the letter, which is embedded in full below. “Because of the importance of getting child safety right, we announced a series of blunt actions over the past week as we work to sharpen our ability to act more precisely.”

“Over the past week, we’ve been taking a number of steps to better protect children and families, including suspending comments on tens of millions of videos,” said a YouTube spokesperson. “Now we will begin suspending comments on most videos that feature minors, with the exception of a small number of channels that actively moderate their comments and take additional steps to protect children. We understand that comments are an important way creators build and connect with their audiences. We also know that this is the right thing to do to protect the YouTube community.”

In addition to the steps mentioned in the letter, the company published a forum post to update its millions of creators regarding these new policies. Many of those influencers, especially in the beauty and fashion realms, will be particularly affected by the changes, as a disproportionate share of creators in those categories are themselves minors. (YouTube allows users 13 and over to create accounts and upload videos.)

Eventually, the same approach could be applied to other areas of concern, such as anti-vaccine and conspiracy theory videos that prove particularly problematic for advertisers.

The measures were welcomed by the ad industry.

“Today’s announcement is an improvement and goes some way to resolving the issues we saw with predatory comments in the news last week,” said Danny Hopwood, Omnicom Media Group’s president of digital display for the EMEA region. “I think there is more to be done, though, on the actions taken by YouTube when their policies are violated.”

Many continue to note the inherent risks of user-generated content platforms, and ad-tech experts have expressed their frustration at YouTube’s decision not to allow third parties to help prevent ads from appearing next to inappropriate content.

One digital marketing expert, speaking with Adweek on condition of anonymity, described some of the reasons why a global platform such as YouTube, now woven into the very fabric of early 21st-century culture, may find it difficult to yield to technology demands made by a handful of advertisers.

“YouTube is one of the most important platforms in the world for people sharing information, and that’s a very important utility in human civilization right now,” this person said, adding that providing a “bespoke resolution” for any single advertiser or group of advertisers will be all but impossible.

Bethan Crockett, senior director of brand safety and digital risk at GroupM EMEA, told Adweek that YouTube has significantly improved its communications with advertisers over such high-profile scares in recent years.

“This has probably come about since the issue was raised [in a 2017 investigation by The Times of London], when they were more on the back foot and weren’t as proactive and engaged,” she said. “But now I think they recognize the need to do so.”

Multiple parties also say marketers are now more prepared to take actions such as suspending their ad efforts on platforms that fail to meet given standards.

“This is part of a risk analysis on their behalf, so they press pause so they can say they’re no longer part of it until they can understand what can be done,” said one source. Another added, “The numbers of impressions involved in these examples are tiny, but it only takes one impression to create a news story, and that’s where the brand risk lives.”

At the same time, the chances of a true YouTube exodus remain extremely low.

“It’s going to be tough for holding company media agencies to divest from YouTube because they need to keep spend numbers up on Google in order to get the discounts demanded by the procurement departments of their biggest clients,” said Greg March, CEO of independent media agency Noble People.

One exception on the brand side is AT&T. The company, which had just returned to YouTube after a two-year absence due to brand safety concerns, said last week, “Until Google can protect our brand from offensive content of any kind, we are removing all advertising from YouTube.”

Today a representative said AT&T would not be elaborating on that statement. Spokespeople for the other brands that made headlines for pulling away from the platform, including Nestle, McDonald’s, Epic Games and Disney, did not immediately respond to requests for comment.


Josh Sternberg (@joshsternberg) is the former media and tech editor at Adweek.
Patrick Coffee (@PatrickCoffee, patrick.coffee@adweek.com) is a senior editor for Adweek.
Ronan Shields (@ronan_shields, ronan.shields@adweek.com) is a programmatic reporter at Adweek, focusing on ad tech.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}