Facebook Pulled 8.7 Million Pieces of Content in Q2 for Violating Child Nudity, Exploitation Policies

The social network is using AI and machine learning in its efforts

Facebook detailed how it has been using artificial intelligence and machine learning to prevent child exploitation and keep children safe on its platform.

Global head of safety Antigone Davis wrote in a Newsroom post that the social network is using AI, machine learning and other technology to “proactively detect child nudity and previously unknown child exploitative content when it’s uploaded,” report that content to the National Center for Missing and Exploited Children and find and remove accounts that engage in potentially inappropriate interactions with children.
