Using Emotion AI to Spark a New Type of Brand Experience

One that resonates with consumers based on whatever they’re feeling

“Affective computing is like nuclear power. We have to be responsible in defining how to use it,” said Javier Hernandez Rivera, a research scientist at MIT Media Lab, in 2017. In the two years since, there has been a global proliferation of biometric and affective APIs and off-the-shelf hardware designed to power experiences in millions of stores, apps and homes. But unlike trained researchers—or actual nuclear engineers, for that matter—the new operators of these emerging technologies will be ambitious brand and product managers and agency teams, racing to be first-in-category, to claim awards or to hit revenue goals. That’s how meltdowns happen.

In 2019, we’re becoming painfully aware that abuses of digital data and privacy can be radioactive to civil society and democratic institutions. Sure, we’ve been fluent in the scary tropes that show up in the movies, but we now recognize that the immediate AI risks look less like Ex Machina and more like what you see on C-SPAN.

The idea sounds simple enough: If I can sense your frustration, boredom, joy or fear, I can deliver an experience that’s more relevant and resonant. The danger is that emotional feedback can be used just as easily to inspire as to demotivate, to prevent as to provoke. Maybe you’ll buy more if you’re feeling confident, or maybe you would just pay more if you felt helpless. The same theory powers both the defensible and the abhorrent use cases, just as the same underlying physics produces either abundant energy or an uncontrolled reaction. We marketers didn’t invent these technologies any more than control room operators built the reactor, but we’re the ones pulling the levers with the power to avert disaster.

Neuromarketing itself is not new. The academic foundations are 20-plus years old, and today the largest providers run more than a dozen labs and deliver many thousands of studies. But the key word here is “lab.” These tools are employed by trained researchers working in nondescript, suitably beige facilities, equipped with cameras and one-way mirrors and stocked with bottomless bowls of mass-market snacks. In that relative safety, one can strap on EKGs and GSR sensors and point a camera at anyone with a release form and an hour to spare in exchange for a $100 Amazon gift card.
But what’s happening next is that these technologies, which were designed to be interpreted by expert operators in labs, are becoming embedded in software with humans taken out of the loop. Such is the story of AI’s inevitable disruption of so many industries. But in this version, we’re not trading stocks or restocking shelves—we’re shorting emotions and potentially undermining identity. By accumulating enough audio, video and sensor data, I can conceivably profile what’s in your head and not just what’s in your wallet. When emotion AI becomes a competitive advantage in the marketplace, we risk a runaway reaction, an arms race we can’t afford to win, let alone lose.

Before marketers race to adopt these new tools as though they were just the latest ad-tech buzzword, now is the time to engage in thoughtful articulation and education around informed consent, data privacy and the ethical application of emotion AI. Here are three places to start.

Give a voice to end users

It is critical to test at the concept stage, not after development, to identify users’ level of comfort or concern with different use cases and applications. It’s not enough to rely on surveys or syndicated research: The emotion AI value exchange requires users to understand what they are giving up and how they would benefit.

Convene interdisciplinary working groups

Plan for input from a diverse set of stakeholders, spanning technologists, creatives, attorneys, analysts and strategists. There are many different ways to implement emotion AI sensors, to act on the data, and to retain or erase it. Some ethical or privacy concerns can be addressed through technical means and others through deliberate design decisions. But oversights happen when critical perspectives are missing from the table.

Engage in public and industry dialogue

Learn from individuals and organizations doing research to map the risks and rewards of AI. It’s tempting for brands to keep their efforts secret in a desire to shock and awe at launch time. Avoid that temptation and instead invite external partners and academics who are not incentivized to minimize risks in order to sell a product. You’ll need help to look around corners and anticipate unintended consequences of design, data and technological choices.

Marketers who take these steps will be far better equipped to consider the implications of embedding and applying this technology. It’s through this type of dialogue and experience with users, colleagues and experts across the industry that we can contain the risks of an emotion AI meltdown.