In recent months, artificial intelligence researchers have made strides in teaching machines to understand and produce natural-sounding language, suggesting the potential for vastly improved predictive text tools, chatbots and writing aids—but also stoking fears of fake news and spam on a massive scale.
Those concerns led research group OpenAI to initially release only a pared-down version of one of these models, the text generator GPT-2, when it was unveiled in February. But its creators recently reconsidered and published the full version, satisfied that there was “no strong evidence” of misuse in the intervening months.
With that in mind, we took a look at how these models fare at performing our own jobs here at Adweek—or at least passably imitating our work. We used an online fake news tool called Grover, created by researchers at the University of Washington and the Allen Institute for Artificial Intelligence. It’s built on the same framework as GPT-2, but with the added ability to match its output to the style of a particular website drawn from the millions of sites on which it was trained—in this case, adweek.com. With every other field left blank, we had it spit out what it deemed to be generic Adweek headlines.
Roughly three-quarters of these were nonsensical, sometimes comically so. The bot strung together a hodgepodge of brands, trade terms and other topics that appear often in Adweek coverage into otherwise solid imitations of digital headline syntax. Some gems included “These Evian Bottles Are Actually Bags of Ecstasy” and “KFC Has Taken Toilet Paper Selfies, and It Looks Better Than You.”
But when the generator was on point, it could be eerily realistic. See if you can tell the machine-generated headlines from real Adweek headlines below:
That difficulty offers a window into where this technology could lead once programmers are better able to control it: a world where you might never know whether the copy you read was written by a human or a machine (or some combination of the two).