Artificial intelligence researchers have made strides in teaching machines to understand and produce natural-sounding language in recent months, suggesting the potential for vastly improved predictive text tools, chatbots and writing aids—but also stoking fears of fake news and spam on a massive scale.
Those concerns spurred research group OpenAI to initially dumb down the public release of one of these models, the text generator GPT-2, when it was unveiled in February. But the creators recently reconsidered and published the full version, satisfied that there was “no strong evidence” of its misuse in the intervening months.