Facebook Pulled 8.7 Million Pieces of Content in Q2 for Violating Child Nudity, Exploitation Policies

The social network is using AI and machine learning in its efforts to keep children safe

Facebook is finding and removing accounts that engage in potentially inappropriate interactions with children. (Photo: diego_cervo/iStock)

Facebook provided details about how it has been using artificial intelligence and machine learning to prevent child exploitation and keep children safe on its platform.

Global head of safety Antigone Davis wrote in a Newsroom post that the social network uses AI, machine learning and other technology to “proactively detect child nudity and previously unknown child exploitative content when it’s uploaded” and to report that content to the National Center for Missing and Exploited Children. The same technology also helps Facebook find and remove accounts that engage in potentially inappropriate interactions with children.
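The post doesn’t describe Facebook’s implementation, but the two-pronged approach Davis outlines (matching uploads against previously known exploitative content, plus classifying previously unknown content with machine learning) can be sketched roughly as follows. Everything in this sketch is hypothetical: the function names, the SHA-256 digest standing in for the perceptual hashing real systems use, and the classifier threshold are illustrative assumptions, not Facebook’s actual system.

```python
# Illustrative sketch only: a simplified upload-time moderation hook.
# Real systems use perceptual hashing (e.g., PhotoDNA-style) and trained
# classifiers; all names and thresholds here are hypothetical.
import hashlib
from dataclasses import dataclass

# In a real system, populated from an industry hash-sharing database.
KNOWN_VIOLATION_HASHES: set[str] = set()

@dataclass
class ModerationResult:
    remove: bool
    report_to_ncmec: bool
    reason: str

def classify_image(image_bytes: bytes) -> float:
    """Stand-in for a trained ML classifier that scores previously
    unknown content; returns a violation probability in [0, 1]."""
    return 0.0  # placeholder; a real model would run inference here

def moderate_upload(image_bytes: bytes, threshold: float = 0.9) -> ModerationResult:
    # Step 1: match against known exploitative content. (Real systems use
    # perceptual hashes, which also catch re-encoded or altered copies;
    # an exact SHA-256 digest is used here only to keep the sketch runnable.)
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_VIOLATION_HASHES:
        return ModerationResult(True, True, "matched known exploitative content")

    # Step 2: ML classification for previously unknown content.
    score = classify_image(image_bytes)
    if score >= threshold:
        return ModerationResult(True, True, f"classifier score {score:.2f}")

    return ModerationResult(False, False, "no violation detected")
```

Running the check at upload time, before content is ever surfaced or reported by users, is what would make detection “proactive” in the sense Davis describes.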

She also pointed out that Facebook’s community standards prohibit even content with the potential for abuse, so the company takes action on nonsexual content, such as images of children bathing.

During the second quarter of 2018, Facebook removed 8.7 million pieces of content that violated its policies on child nudity or sexual exploitation of children, catching 99 percent of those before they were even reported.

Davis said Facebook has specially trained teams with backgrounds in law enforcement, online safety, analytics and forensic investigations, and it also works with safety experts, nongovernmental organizations and other companies, including the Technology Coalition, the Internet Watch Foundation and the WeProtect Global Alliance.

Finally, she wrote that next month, Facebook will join industry partners, including Microsoft, to begin building tools that enable smaller companies to take similar actions.

Some of Facebook's efforts to protect children (image: Facebook)
