Storyful and Ad Measurement Firm Moat Are Creating a Database to Track Fake News and Hate Websites

GroupM and CUNY are partners

News Corp.'s Storyful and Moat are teaming up to find fake and hateful content and warn advertisers about it.
Getty Images

A video verification company has partnered with one of the most prominent ad measurement firms to create a database that could help brands avoid advertising alongside fake or hateful content.

Storyful—a division of News Corp.—is teaming up with analytics and measurement firm Moat to launch Open Brand Safety (OBS), a database that monitors misinformation and extremist content by tracking web domains and video URLs. OBS, which was unveiled on Tuesday, will include a list of websites that the companies suggest advertisers avoid to keep their brands away from unsavory content.

According to Storyful CEO Rahul Chopra, the companies will distribute the database to agencies, brands, ad-tech companies and others in the next few weeks. He said the goal is to create a “human-scale” list that’s easier to validate and verify when buying media across digital channels. The companies will also collaborate with students and professors at the CUNY School of Journalism, along with outside nonprofits and academic groups that have experience with fact-checking and studying online extremism.

“Once they start leading with their pocketbooks, there will be a flight or a race back to quality,” Chopra told Adweek in an interview.

The two companies bring complementary expertise: Storyful has experience discovering and verifying user-generated content, while Moat has a history of helping advertisers measure and verify ads across platforms and publisher websites.

However, Chopra said Storyful itself doesn’t want to be the final word on fact and fiction—which is why it’s partnering with third parties such as CUNY.

In a Medium post explaining OBS, CUNY journalism professor Jeff Jarvis said the plan is to first attack “low-hanging and rotten fruit” such as fraud, hate speech and propaganda. He said the project was born out of a conversation he had a few weeks ago with Dan Fichter, Moat’s CTO, who told Jarvis the company had a way to identify harmful content.

“My hope is that we build a system around many signals of both vice and virtue so that ad agencies, ad networks, advertisers, and platforms can weigh them according to their own standards and goals,” Jarvis wrote. “In other words, I don’t want blacklists or whitelists; I don’t want one company deciding truth for all. I want more data so that the companies that promote and support content — and by extension users — can make better decisions.”

While the companies behind OBS hope to make the initiative broadly available, GroupM and Weber Shandwick will be the first to use the service, with other advertising and marketing agencies added in the future. The project also has ties to Facebook, thanks to funding from the News Integrity Initiative, of which Facebook is a founding member. AppNexus is also partnering on the initiative.

The database is just one of several that have popped up in recent weeks. According to The New York Times, a Carnegie Mellon University computer scientist and a former Google researcher have been working with teams from around the world to create an algorithm that distinguishes fake stories from real ones. Another effort, from PolitiFact, lists more than 150 websites that have published “deliberately false or fake” stories.

Google and Facebook—two of the tech companies most commonly blamed for the proliferation of fake news—are also working on ways to identify and remove fake and offensive content. Last week, Google announced changes to its search algorithm, while Facebook this week hired its first head of news products to serve as its fake-news fixer.