New AI Can Detect Fake News With Unprecedented Accuracy—and Generate Its Own

Test readers found the AI's fake news more credible than human-written fake news

The system can even mimic the style of publications just like this one.

“A team of artificial intelligence experts have developed an algorithm that can detect fake news—and remarkably convincing fake news too.”

That was the lede sentence formulated for this article by said algorithm—prompted with just a headline, author, date, and intended publisher website (Adweek.com). (The bot neglected to mention it doubles as a text generator.) While it might not be exactly what we would have written, its copy is serviceable and, in this case at least, accurate.

The system that generated this lede is the creation of researchers at the University of Washington and the Allen Institute for Artificial Intelligence, who outlined the neural network program in a paper posted this week to the moderated preprint repository arXiv. The study’s authors claim that the tool, which they call Grover, can distinguish fake news generated by AI from real, human-written articles 92% of the time, a substantial improvement over the best previously available detectors, whose accuracy the researchers peg at 73%.

But a by-product of the training required to condition Grover (short for “Generating aRticles by Only Viewing mEtadata Records”) to recognize fabricated news stories is that the program can also generate impressively convincing articles of its own. The researchers decided that capability could be useful for modeling potential threats from bad actors who might exploit similar technology for nefarious purposes, namely the production of increasingly sophisticated AI-manufactured misinformation.

Driving the growth of this misinformation underbelly are recent, rapid strides in natural language generation and generative media. These advances are opening new possibilities for content creation and design, but they could also further blur the line between fact and fiction online.

“Our work does suggest that there is an arms race between the adversary and the verifier,” said Rowan Zellers, a UW Ph.D. student who co-authored the study. “It’s possible that there could be models even better than Grover out there that would evade our detection system.”

One of Grover's more confused attempts to write this article.

In the natural language processing realm, one of the biggest breakthroughs has been a shift toward more dynamic, context-savvy models that require minimal input data, such as a revolutionary text generation system called GPT-2. That program made headlines earlier this year after its creators, the nonprofit/for-profit hybrid group OpenAI, deemed its full version too dangerous to release to the public.

Grover’s framework is modeled after that of GPT-2, but the authors iterated on the design by adding settings that shape the writing style and content of the text according to publisher domain, author and date inputs. For instance, if you prompt the system to generate a column from New York Times columnist Paul Krugman, the resulting text might imitate his writing style, depending on how much of his real work was included in training data.

The program can also write period-specific content. If you set the date to, say, 2008, the output may even include a mention of Barack Obama’s inauguration or the financial crisis, according to Zellers.
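Neither the article nor Zellers spells out Grover’s exact input format, so the sketch below is only a rough illustration of the general idea: metadata such as the publisher domain, date, author and headline is serialized into a conditioning prompt before the body text is generated. The field markers, the build_prompt helper and the use of an off-the-shelf GPT-2 are assumptions made for this example, not Grover’s actual interface.

```python
# Illustrative sketch only: the <domain>/<date>/<author>/<headline> markers are
# invented for this example. A vanilla GPT-2 was never trained on such fields,
# so it will not condition on the metadata the way a model like Grover (trained
# with those fields) would. The point is the shape of the pipeline, not output quality.
from transformers import pipeline

def build_prompt(domain: str, date: str, author: str, headline: str) -> str:
    """Serialize article metadata into a single conditioning prompt."""
    return (
        f"<domain> {domain} <date> {date} "
        f"<author> {author} <headline> {headline} <body>"
    )

prompt = build_prompt(
    domain="nytimes.com",
    date="September 29, 2008",
    author="Paul Krugman",
    headline="What the Financial Crisis Means for Main Street",
)

generator = pipeline("text-generation", model="gpt2")
article = generator(prompt, max_new_tokens=120, do_sample=True, top_p=0.95)
print(article[0]["generated_text"])
```

Swapping the date or author passed to build_prompt is, in principle, all it takes to steer a metadata-conditioned model toward period- or byline-specific copy, which is the behavior Zellers describes above.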

Grover's impression of the NYT's Thomas Friedman is not bad.

Like OpenAI’s researchers, the Grover team grappled with whether to release the tool publicly, Zellers said. But they ultimately decided that the roughly $35,000 setup was easy enough to replicate that the benefits of mapping possible dangers outweighed the risk of their particular system being hijacked. (The tool is available online.)

“If you’re one of these actors that is producing misinformation right now, [$35,000] is next to nothing,” Zellers said. “While we were working on this, that [decision] was the elephant in the room…But what we found in the paper is essentially that the best defense against generative models like Grover is the model itself, however counterintuitive that might sound.”
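The paper’s detection method isn’t detailed in this article, so the snippet below is only a loose illustration of the “model as its own best defense” idea: text that a similar language model finds suspiciously easy to predict gets flagged as likely machine-generated. The cutoff value and helper names are placeholders for illustration, not anything taken from the Grover paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Machine-generated text often looks unusually "probable" to a similar language
# model, while human prose is spikier. The cutoff below is an arbitrary
# placeholder, not a value from the Grover paper.
def looks_machine_generated(text: str, cutoff: float = -3.0) -> bool:
    return avg_log_likelihood(text) > cutoff
```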

Another intriguing discovery of the project was that a sample set of readers actually found Grover’s articles more credible than fake stories authored by humans. Zellers attributes that to the overblown, caps-lock-happy rhetorical style popular among human purveyors of fake news, a stark contrast to Grover’s imitations of reputable news sources.

“It all comes down to style,” he said. “My co-author Hannah Rashkin found that you can actually use stylistic information to predict whether a news article is real or propaganda, and by stylistic information, I don’t mean the meaning of the text, but whether a word is capitalized in a weird way, for instance.”
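Rashkin’s stylistic-cue work is described only in passing here, so the toy sketch below, with made-up examples and a deliberately tiny feature set, just shows the flavor of the approach: ignore what the text means and classify on surface signals like all-caps words, odd mid-word capitalization and exclamation marks.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def style_features(text: str) -> list[float]:
    """Surface-level style cues: nothing semantic, only how the text is written."""
    words = text.split()
    n = max(len(words), 1)
    all_caps = sum(1 for w in words if len(w) > 1 and w.isupper())
    mid_caps = sum(1 for w in words if re.search(r"[a-z][A-Z]", w))  # e.g. "sHocKing"
    exclaims = text.count("!")
    return [all_caps / n, mid_caps / n, exclaims / n]

# Tiny invented training set, purely to show the shape of the pipeline.
texts = [
    "BREAKING!!! You WON'T believe what they are HIDING from you!",
    "Officials said the report would be released later this week.",
    "SHARE before it's DELETED! The TRUTH is finally OUT!!!",
    "The committee approved the measure by a narrow margin on Tuesday.",
]
labels = [1, 0, 1, 0]  # 1 = propaganda-style, 0 = standard news style

clf = LogisticRegression().fit(np.array([style_features(t) for t in texts]), labels)
print(clf.predict([style_features("AMAZING!!! Doctors HATE this one trick!")]))
```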

Ultimately, Zellers thinks research like this may suggest a need for an automated text-content moderation system analogous to the kind that platforms like YouTube use to root out copyright infringements or explicit content in videos.

“I think we might need to have something like that on other social networks, too, but for text,” Zellers said. “Maybe in the next couple of years—or maybe sooner, I have no idea—we might need to make it so that AI is in the loop for helping people do this content moderation.”


Patrick Kulp is an emerging tech reporter at Adweek. @patrickkulp | patrick.kulp@adweek.com