YouTube Addresses the Second of Its 4 Rs of Responsibility

The video site detailed raising authoritative content while reducing borderline content and misinformation

In 2017, YouTube began prioritizing authoritative voices in searches for news and information

YouTube CEO Susan Wojcicki introduced the Google-owned video site’s “four Rs” approach toward responsibility in August, and step No. 1, Remove, was detailed in September. Tuesday brought a blog post outlining the second R, Raise Up.

The four Rs are:

  • Remove: YouTube removes content that violates its policies as quickly as possible and strives to make those policies clearer and more effective.
  • Raise up: The video site raises up authoritative voices when people are looking for news and information, especially during breaking news moments.
  • Reduce: Wojcicki said YouTube reduces the spread of content “that brushes right up against our policy line,” citing the site’s changes to its recommendations in January.
  • Reward: Wojcicki reaffirmed YouTube’s commitment to rewarding “trusted, eligible creators.”

YouTube said in a blog post that it aims to raise authoritative content and reduce borderline content and misinformation, saying of the first, “More and more people turn to YouTube to catch up on the latest news or simply learn more about the topics they’re curious about—whether it’s climate change or a natural disaster. For topics like music or entertainment, relevance, newness and popularity are most helpful to understand what people are interested in. But for subjects such as news, science and historical events, where accuracy and authoritativeness are key, the quality of information and context matter most—much more than engagement. That’s why we’ve re-doubled our efforts to raise authoritative sources to the top and introduced a suite of features to tackle this challenge holistically.”

In 2017, the video site began prioritizing authoritative voices—including CNN, Fox News, Jovem Pan, India Today and The Guardian—in searches for news and information, as well as its “watch next” panels.

YouTube said millions of search queries are currently getting this treatment, which is being expanded to more topics and countries.

The video site also began providing short previews of text-based news articles in search results for breaking news, along with reminders that breaking and developing news can rapidly change. Top News and Breaking News sections were added to the site in 2018.

Finally, YouTube designed several information panels to provide people with different types of context on videos, such as general topics prone to misinformation and developing news stories.

YouTube wrote, “For example, when people watch videos that encourage viewers to skip the MMR vaccine, we show information panels to provide more basic scientific context, linking to third-party sources. Or if people are viewing news videos uploaded by a public broadcaster or a government-funded news outlet, we show informational notices underneath the video about the news outlet. Collectively, we’ve delivered more than 3.5 billion impressions across all of these information panels since June 2018, and we’re expanding these panels to more and more countries.”

The Google-owned video site said in January that it began reducing recommendations of borderline content or videos that could misinform users.

In Tuesday’s blog post, the company said this initiative was expanded to countries outside of the U.S.—including Ireland, South Africa, the U.K. and other English-speaking markets, as well as non-English-speaking markets Brazil, France, Germany, Mexico and Spain—and it explained how it determines which videos to take action on.

YouTube wrote, “We rely on external evaluators located around the world to provide critical input on the quality of a video. And these evaluators use public guidelines to guide their work. Each evaluated video receives up to nine different opinions, and some critical areas require certified experts. For example, medical doctors provide guidance on the validity of videos about specific medical treatments to limit the spread of medical misinformation. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content. And over time, the accuracy of these systems will continue to improve.”

The company concluded: “Our work continues. We are exploring options to bring in external researchers to study our systems, and we will continue to invest in more teams and new features. Nothing is more important to us than ensuring that we are living up to our responsibility. We remain focused on maintaining the delicate balance that allows diverse voices to flourish on YouTube—including those that others will disagree with—while also protecting viewers, creators and the wider ecosystem from harmful content.”

David Cohen is editor of Adweek's Social Pro Daily.