When we launched Adweek’s AI-powered Super Bowl Bot, designed to create ad pitches for the Big Game, we truly didn’t know what to expect. Would it be funny? Insightful? Disturbing? Offensive?
The answer to all of the above proved to be “yes,” but the process of training an AI on such a specific task was also fascinating in ways we couldn’t have predicted. The bot—which you can find at @SuperBowlBot on Twitter or @adw.ai on Instagram—has generated more than 200 ad ideas so far and continues to evolve as we expand its data set.
During the brief calm before the storm of (actual) Super Bowl ads, the bot’s creators decided to touch base on how it’s going, what we’ve all learned and where this project seems to hint that AI is headed, at least in the realm of creativity.
Here’s the conversation between Adweek emerging tech reporter Patrick Kulp and creative and innovation editor David Griner:
David Griner: Patrick, thanks again for co-parenting Adweek’s Super Bowl Bot with me. Much like with my real children, I like to take 50% of the credit despite my partner doing 95% of the effort in actually creating them.
Patrick Kulp: I don’t think you’re giving yourself enough credit. You really sell the bot, for one thing.
Griner: I love this bot. It’s my Baby Yoda.
Kulp: Baby Yoda would actually be an excellent concept to feed the bot. [Editor’s note: Patrick followed through and actually generated the Baby Yoda pitch.]
Griner: So we’ve written quite a bit about how the bot generally works, but I’m curious about a few things. Namely, how much work was it to build and get up and running? You handled all that on the back end after we came up with the concept.
Kulp: It was a gradual process, building up the dataset. I used web scrapers to download a ton of descriptions of ads in bulk from various sources and just kept adding to it over time.
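The article doesn’t detail how those scrapers worked, but the basic pattern is simple: pull each page, extract the ad-description text, and append it to a growing training file. A minimal, self-contained sketch using only Python’s standard library (the `ad-description` class name and the sample page are hypothetical; a real scraper would match whatever markup the source sites actually use, and would fetch pages over HTTP):

```python
from html.parser import HTMLParser

class AdDescriptionParser(HTMLParser):
    """Collects text from <p class="ad-description"> tags.

    The class name is hypothetical; a real scraper would target
    whatever markup the source site actually uses.
    """
    def __init__(self):
        super().__init__()
        self.descriptions = []
        self._capturing = False

    def handle_starttag(self, tag, attrs):
        if tag == "p" and ("class", "ad-description") in attrs:
            self._capturing = True

    def handle_endtag(self, tag):
        if tag == "p":
            self._capturing = False

    def handle_data(self, data):
        if self._capturing and data.strip():
            self.descriptions.append(data.strip())

# In practice each page would be fetched over HTTP; an inline snippet
# keeps the example self-contained.
page = """
<html><body>
  <p class="ad-description">A golden retriever reunites with a Clydesdale.</p>
  <p class="sidebar">Unrelated navigation text</p>
  <p class="ad-description">An office worker discovers chips grant telekinesis.</p>
</body></html>
"""

parser = AdDescriptionParser()
parser.feed(page)
# Each description becomes one entry in the training file.
dataset = "\n".join(parser.descriptions)
print(dataset)
```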
As far as the training of the bot goes, I found it surprisingly easy. I used a guide from a BuzzFeed data scientist named Max Woolf and ran it in a free cloud program called Google Colab, in part because I don’t have the hardware needed to handle the processing demands of AI—typically graphics processing units (GPUs). The guide’s code is written in Python, but you don’t need to know much Python (I don’t) to make it work. The training itself takes an hour or two, depending on how thoroughly you run it.
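The article doesn’t name the exact notebook, but Max Woolf’s guide is built around his gpt-2-simple package. Assuming that’s the one, the core of the Colab workflow boils down to a few calls; the filename, step count, and prompt below are illustrative, not the values Adweek used:

```python
import gpt_2_simple as gpt2

# Download the smallest GPT-2 checkpoint (124M parameters).
gpt2.download_gpt2(model_name="124M")

# Fine-tune on the scraped ad descriptions, one description per entry.
# "ads.txt" and steps=1000 are placeholder choices.
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="ads.txt", model_name="124M", steps=1000)

# Generate a pitch, optionally seeded with a hand-fed prompt.
gpt2.generate(sess, prefix="A Super Bowl ad for", length=100, temperature=0.8)
```

In Colab the GPU runtime does the heavy lifting, which is why no local hardware is needed.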
Griner: We chose not to automate this bot, meaning you hand-feed it prompts or let it spit out a random Super Bowl ad idea based on the data you’ve fed into it. But I assume we could have made this an automated bot that shoots things directly onto Twitter without us moderating or curating it?
Kulp: Yeah, there is also an API section in the guide that explains how to build a web app interface for the bot. And I’m sure the Twitter automation would have been doable from there.
That would’ve taken a bit more figuring out, but the main reason we chose not to do it was that this bot is wildly unpredictable. It takes quite a bit of curating to surface something presentable. To be clear, that’s selecting which generated output to post—I never actually adjust any of its text.
Griner: You get to see all the raw content it creates from a prompt. You and I originally worried that, without a moral compass, the bot might create some truly unfortunate or problematic ad concepts that we wouldn’t feel good about putting out in the world. I’m sure the name “Tay” still haunts AI folks after Microsoft’s failed 2016 experiment with a Twitter chatbot. But now that you’ve seen what it actually produces, were we right to worry? How often does it get legitimately … unpublishable?
Kulp: We were absolutely right to worry. I don’t have concrete stats on how often its thoughts are unpublishable because they’re gibberish versus downright disturbing, but the latter definitely tends to stand out.
You and I have talked about how surprisingly often it touches on topics that are absolutely off-limits: domestic violence and racism, mostly. At other times, it gets over-the-top lewdly sexual. The problem is that the bot lifts these topics from real ads and expands on them out of context. It will, for example, take the theme of an NFL PSA on domestic violence and apply it to, say, a Doritos commercial in some horrible way.
Aside from those line-crossing ones, though, I do enjoy the bot’s dark sense of humor.
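All of that sifting happens by hand. A crude, purely hypothetical pre-screen could thin the pile before human review by flagging outputs that mention hand-blocklisted topics; the blocklist here is illustrative, and nothing like this was actually built for the bot:

```python
# Hypothetical pre-screen: drop generated pitches that mention
# blocklisted topics before a human ever reads them.
# This blocklist is illustrative only.
BLOCKLIST = {"domestic violence", "racism", "racist"}

def passes_prescreen(pitch: str) -> bool:
    """Return True if the pitch contains no blocklisted phrase.

    Substring matching is crude (it can't catch oblique phrasing),
    which is one reason human curation would still be essential.
    """
    lowered = pitch.lower()
    return not any(term in lowered for term in BLOCKLIST)

pitches = [
    "A Doritos ad where chips rain from a blimp.",
    "A PSA-style spot that jokes about domestic violence.",
]
keepers = [p for p in pitches if passes_prescreen(p)]
print(keepers)  # only the first pitch survives
```

Even with a filter like this, the truly disturbing outputs often have nothing keyword-shaped about them, which is why the project relied on human curation instead.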
Griner: And to be clear, the AI already started with a knowledge base that included some, what, 8 million web pages? Lots of weird shit on the internet. I never know how much to blame Super Bowl ads or just, you know, humanity, when it cranks out weird connections.
Kulp: Yeah, that contributes for sure. The underlying language model, GPT-2, teaches the bot the mechanics of language and context and lets it riff on subjects when there isn’t anything relevant in our limited training data. So who knows which dark corners of the internet some of these ideas come from.
Griner: Let’s talk about trends we’ve noticed in the ad concepts it creates. If I were to list the Top 3 that come to mind, it’d be dystopia, military and… I guess “people being forced into awkward confrontations”?
Kulp: I’d say that’s a good summary. It definitely makes you appreciate how big a theme patriotism and support for veterans and active-duty troops have been throughout Super Bowl ad breaks.