YouTube outlined some of the steps it is taking to protect minors on its platform.
YouTube also added new classifiers—machine learning tools that help identify specific types of content—to its live video products to weed out content that violates the new policy.
YouTube also expanded on efforts it started earlier in the year to limit recommendations of “borderline” content, adding videos that feature minors in risky situations to that list.
The company said in its blog post, “While the content itself does not violate our policies, we recognize the minors could be at risk of online or offline exploitation. We’ve already applied these changes to tens of millions of videos across YouTube.”
These steps follow YouTube’s move in February to disable comments on tens of millions of videos featuring minors in order to limit the risk of exploitation, as well as its rollout of a classifier that helped it remove twice as many comments that violated its policies.
YouTube also reminded parents that the platform is not intended for children under the age of 13, and that younger children should use its separate YouTube Kids application instead.
YouTube added that it is terminating thousands of accounts per week upon discovery that the account holders are not yet 13.
The company wrote, “Over the last two-plus years, we’ve been making regular improvements to the machine learning classifier that helps us protect minors and families. We rolled out our most recent improvement earlier this month. With this update, we’ll be able to better identify videos that may put minors at risk and apply our protections, including those described above, across even more videos.”