Helping Protect Brands and People From Problematic Content Online

Did you know more than 60% of the world’s population is now online?

As the internet has evolved to incorporate more networks and devices, people everywhere have benefited from greater connection and access to information. But some of the same problems that have concerned humanity throughout history—including hate speech and misinformation—have also taken form online.

Fortunately, artificial intelligence is evolving to address these challenges. Some of the latest advancements in AI are helping preserve the integrity of online platforms, prevent harmful and misleading content from reaching people and ensure brand safety for advertisers.

Finding problematic content at scale is an incredibly difficult task, but AI is giving marketers the tools to do it more quickly and effectively.

Finding and removing hate speech

To encourage user safety and keep harmful content off our platforms, we’ve established community standards for both Facebook and Instagram. We can now train complex AI networks to scan new posts in a fraction of a second and evaluate whether the content is likely to violate our policies.
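
To make that concrete, here is a deliberately minimal sketch in Python (using scikit-learn) of the general pattern: a learned classifier assigns a new post a probability of violating policy. The model, training examples and any thresholds are hypothetical stand-ins; the production systems described here are far larger multilingual deep networks.

```python
# Toy illustration only: scoring a new post with a learned policy-violation
# classifier. Production systems use large multilingual deep networks; the
# training examples below are made up and far too few to be meaningful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I hate this group of people, they should disappear",   # violating
    "People like that don't deserve to live here",          # violating
    "What a beautiful sunset over the lake tonight",        # benign
    "Congrats to the team on shipping the new feature",     # benign
]
labels = [1, 1, 0, 0]  # 1 = violates policy, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

new_post = "They should all be kicked out of the country"
violation_prob = clf.predict_proba([new_post])[0][1]
print(f"Estimated violation probability: {violation_prob:.2f}")
# A real pipeline would compare this score to policy-specific thresholds
# to decide between automatic action, human review, or no action.
```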

While slurs and hateful symbols are often obvious, many instances of hate speech are more complex. Hate speech is often disguised with sarcasm and slang, or with seemingly innocuous images that can be perceived differently across cultures. As subtlety and context increase, so do the technical challenges. That’s why the big leap in AI’s ability to understand this kind of content over the past few years is so important.

As the most recent installment of our Community Standards Enforcement Report shows, AI proactively detected 96.8% of the 25.2 million pieces of hate speech that we removed during Q1 2021. During this time, we also reduced hate speech prevalence to six out of every 10,000 content views on Facebook. This metric is especially important for advertisers who want to understand the potential risk of an ad being shown adjacent to content that violates our standards.
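
Prevalence is simply the share of sampled content views that contain violating material. A back-of-the-envelope sketch, with made-up numbers matching the figure above:

```python
# Hypothetical numbers for illustration: prevalence is the share of sampled
# content views that contain violating material.
sampled_views = 10_000    # content views sampled and labeled
violating_views = 6       # of those, views containing hate speech
prevalence = violating_views / sampled_views
print(f"Hate speech prevalence: {prevalence:.2%}")  # 0.06%
```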

AI helps us scale content review by automating some decisions and prioritizing the most complex cases for human reviewers. It also spares reviewers from having to see some of the most harmful content themselves.
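
As a rough illustration of that division of labor, the sketch below routes posts by model confidence; the thresholds, fields and function names are assumptions for illustration, not our actual pipeline.

```python
# Hypothetical routing sketch: automate the clear-cut calls and rank the
# ambiguous ones for human review.
AUTO_ACTION_THRESHOLD = 0.95   # assumed values, illustration only
REVIEW_THRESHOLD = 0.50

def route(post_id, violation_prob, predicted_severity):
    if violation_prob >= AUTO_ACTION_THRESHOLD:
        return ("auto_action", post_id, None)
    if violation_prob >= REVIEW_THRESHOLD:
        # More severe and more likely violations are reviewed first,
        # so reviewers spend time where judgment matters most.
        priority = violation_prob * predicted_severity
        return ("human_review", post_id, priority)
    return ("no_action", post_id, None)

decisions = [
    route("post_a", 0.97, 0.9),   # clear-cut: handled automatically
    route("post_b", 0.62, 0.8),   # ambiguous: queued for human review
    route("post_c", 0.10, 0.1),   # low risk: no action
]
print(decisions)
```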

Identifying misinformation

When our models predict that a piece of content is likely to be misinformation, it’s surfaced to third-party fact checkers for review. If they rate it as false, we add warning labels and context. These labels have proven highly effective: in our internal research, when people were shown labels warning that content had been rated false, they skipped viewing it 95% of the time.

For each piece of misinformation we find, there could be thousands of copies shared by users. AI now detects these near-exact duplicates and applies labels to each copy, which helps slow the spread of misinformation.
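
As a rough sketch of the matching idea, the example below compares new posts against a debunked claim using simple character n-gram similarity. Our production systems match text, images and video at far greater scale with learned embeddings and media hashing, and the 0.8 threshold here is an arbitrary placeholder.

```python
# Toy near-duplicate matcher: propagate a fact-check label to close copies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked_claim = "Drinking bleach cures the virus, doctors confirm"
new_posts = [
    "Doctors confirm: drinking bleach cures the virus!!!",   # near-copy
    "Local bakery wins award for best sourdough in town",    # unrelated
]

# Character n-grams tolerate reordering, punctuation and small edits.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform([debunked_claim] + new_posts)
similarities = cosine_similarity(matrix[0], matrix[1:])[0]

for post, sim in zip(new_posts, similarities):
    action = "apply warning label" if sim > 0.8 else "no match"
    print(f"{sim:.2f}  {action}: {post!r}")
```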

As new challenges arise, we continue to apply AI to policy enforcement. For example, since the pandemic began, we’ve removed more than 16 million pieces of Covid-19 and vaccine-related content from Facebook and Instagram globally after health experts debunked it as false.

Preparing for tomorrow’s biggest challenges

Hateful memes and deepfakes (media in which a video or image is synthetically altered to include someone else’s likeness) are two of the biggest challenges we’re currently training AI models to combat.

Memes combine language and imagery in ways that are often nuanced or ironic. A meme’s image and its accompanying text might not be offensive when seen separately, but in some cases the combination is harmful. Our systems are continually improving at evaluating images and text together, understanding context and removing hateful memes.
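
A minimal sketch of that fusion idea, assuming precomputed image and text embeddings from upstream encoders; the dimensions and architecture below are illustrative placeholders, not the multimodal models we actually deploy.

```python
# Illustrative-only fusion classifier: combines an image embedding and a
# text embedding so the meme is judged as a whole rather than by either
# modality alone. Sizes and layers are assumptions for this sketch.
import torch
import torch.nn as nn

class MemeFusionClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, hidden_dim=256):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, image_emb, text_emb):
        combined = torch.cat([image_emb, text_emb], dim=-1)
        return torch.sigmoid(self.fusion(combined))  # probability the meme is hateful

# Stand-in embeddings from upstream image and text encoders (random here,
# so the untrained score below is meaningless).
model = MemeFusionClassifier()
image_emb = torch.randn(1, 512)
text_emb = torch.randn(1, 768)
print(model(image_emb, text_emb))
```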

The wider AI community is proactive about knowledge sharing and creating better internet experiences. Our recent Hateful Memes and Deepfake Challenges were established to source new and open detection models that benefit everyone. By embracing open and reproducible science, AI researchers are holding each other accountable and increasing access to new technologies.

AI is constantly improving online experiences

The AI product management team’s goal is to build accessible solutions that make it practical to address harmful content in a balanced way. On our platforms, we believe people should be free to speak their minds, but not to cause harm to others. And we want businesses to join in the conversation without being associated with harmful or inaccurate content.

Our ultimate goal is to provide the safest communication platform possible. AI isn’t the only answer to problematic content, but it enables us to adapt and scale more quickly and effectively than a human workforce alone. We know there’s still more work to be done, and we’re committed to making it happen.