YouTube to Warn Users About to Post Potentially Offensive Comments

The Google-owned video site will debut a voluntary survey for creators next year to help ensure that their communities are treated fairly

Image: YouTube's daily hate speech comment removals have increased 46-fold since early 2019. Anatoliy Sizov/iStock

YouTube released features aimed at making its comments section less toxic and shared details on an upcoming voluntary survey that will help it determine which communities its creators belong to, with the goal of ensuring that those creators and communities are treated fairly by its systems.

The Google-owned video site will test a new filter in YouTube Studio for potentially inappropriate and hurtful comments that have been automatically held for review, letting creators avoid reading those comments if they choose.

YouTube will also streamline its comment moderation tools for creators.

A new feature will warn users when comments they are about to post may be offensive to others, similar to an artificial intelligence-powered warning feature rolled out by Instagram last July.
YouTube pointed to investments it has made in technology that helps its systems better detect and remove hateful comments by taking into account the topic of the video and the context of the comment.

The video site said daily hate speech comment removals have increased 46-fold since early 2019. It added that over 54,000 of the more than 1.8 million channels terminated in the third quarter of 2020 were removed for hate speech, the most in a single quarter and three times the previous high, set in the second quarter of 2019.

Vice president of product management Johanna Wright said in a blog post Thursday, “We know that comments play a key role in helping creators connect with their community, but issues with the quality of comments is also one of the most consistent pieces of feedback we receive from creators. We have been focused on improving comments with the goal of driving healthier conversations on YouTube.”

YouTube is also focused on removing unintentional bias from its systems by learning more about video creators and which videos come from which communities.

Wright wrote, “For example, our systems can evaluate how videos about Black Lives Matter are performing against other content on YouTube regardless of the creator, but we’re currently not able to evaluate growth for Black beauty creators, LGBTQ+ talk show hosts, female vloggers or any other community.”

YouTube will roll out a voluntary survey next year to collect information from creators such as gender, sexual orientation, race and ethnicity, using its findings to determine how content from different communities is being treated by its search and discovery systems and its monetization systems.

Wright wrote, “Our creators’ privacy and ability to provide consent for how their information is used is critical. In the survey, we will explain how information will be used and how the creator controls their information. For example, the information gathered will not be used for advertising purposes, and creators will have the ability to opt-out and delete their information entirely at any time.”

YouTube is consulting with creators on crafting the survey, which it expects to debut in the U.S. early next year, and it will also turn to civil and human rights experts for guidance.

Wright concluded, “The steps we’re announcing today are part of our ongoing work to ensure that YouTube continues to be a platform where creators of all backgrounds can thrive. We appreciate the partnership of the Black, LGBTQ+ and Latinx creator communities who have consulted with us in these efforts. Thank you for sharing your perspectives with us and helping to make YouTube a better place for everyone.”

David Cohen is editor of Adweek's Social Pro Daily.