Mozilla Shares 28 User Stories of YouTube’s Recommendations Engine Leading Them Astray

The Google-owned video site began cleaning it up in January

Mozilla asked people for their #YouTubeRegrets

Nonprofit Mozilla unveiled the YouTube Regrets website Tuesday to share 28 instances in which the Google-owned video site’s recommendations engine led people “down bizarre and dangerous paths,” as Mozilla vice president of advocacy Ashley Boyd put it in an email.

Mozilla said it asked people for their #YouTubeRegrets and received hundreds of responses with tales of content about racism, conspiracies and violence showing up in their recommendations after viewing “innocuous” content.

Examples include: A person who watched a video about Vikings saw white supremacist content in their recommendations; someone who watched confidence-building videos that were posted by a drag queen saw clips of homophobic rants; and a YouTuber who searched for fail videos saw grisly footage from fatal accidents in their recommendations.

YouTube spokesperson Farshad Shadloo said in an email, “While we welcome more research on this front, we have not seen the videos, screenshots or data in question and can’t properly review Mozilla’s claims. Generally, we’ve designed our systems to help ensure that content from more authoritative sources is surfaced prominently in recommendations. We’ve also introduced over 30 changes to recommendations since the beginning of the year, resulting in a 50% drop in watch time of borderline content and harmful misinformation coming from recommendations in the U.S.”

Indeed, YouTube began the process of cleaning up its recommendations in January, saying at the time that it would reduce exposure to “borderline content and content that could misinform users in harmful ways,” such as videos that promote miracle cures, claim the earth is flat or make “blatantly false claims” about historic events such as the Sept. 11 terrorist attacks. The company added that fewer than 1% of videos on its platform would be affected.

The video site added that it has made over 30 policy and enforcement updates to its community guidelines over the past year. Steps it has taken to improve its search results and recommendations include elevating content from more authoritative sources, with new features such as a Top News shelf featuring videos from news sources in search results and a Breaking News shelf on the homepage, as well as giving viewers more context on news videos, including an information panel with links to third-party sources.

Mozilla said in an email, “The stories Mozilla is presenting are anecdotes, not rigorous data, but that highlights a big part of this problem. YouTube isn’t sharing data with independent researchers who could study and help solve this issue. In fact, YouTube hasn’t provided data for researchers to verify its own claim that YouTube has reduced recommendations of ‘borderline content and harmful misinformation’ by 50%.”

The nonprofit added that it met with YouTube in late September and proposed the following three steps:

  1. Provide independent researchers with access to meaningful data, including impression data (the number of times a video is recommended and the number of views resulting from those recommendations), engagement data (number of shares) and text data (creator name, description, transcription and other extracted text).
  2. Build simulation tools to enable researchers to mimic user pathways through the recommendation algorithm.
  3. Change its existing API (application programming interface) rate limit to empower researchers, and provide them with access to a historical archive of videos.

David Cohen is editor of Adweek's Social Pro Daily.