4 Ways to Avoid Having AI Release Consumers’ Inner Sociopath

Voice assistants encourage too much abuse from users

“Alexa, you’re ugly. Alexa, you’re stupid. Alexa, you’re fat.”

This barrage of abuse came from my friend’s children, who were shouting at his Amazon device, trying to prompt a witty comeback from the AI assistant. What was just a game to the kids looked a lot like the worst kind of playground bullying, and as my friend unplugged the device, he scolded, “We don’t talk to people like that.”

But unfortunately, we do talk like that, especially to AI assistants and chatbots that are unable to establish the boundaries that humans do. After all, if you hit someone, they may hit you back. If you call your barista ugly, you shouldn't be surprised if they spit in your latte. Because they can't push back, virtual assistants and chatbots shield us from the consequences of bad behavior.

With more than 30% of enterprises planning to use virtual assistants this year and around 77% of consumers already using AI, this dynamic has the potential to turn us into what I call “digital sociopaths”: overstressed, strung-out technology addicts who can no longer tell the difference between right and wrong. It’s true that AI assistants aren’t people. But the problem isn’t the devices; it’s our behavior.

If virtual assistants encourage daily abuse, do we really expect to contain that abuse exclusively within our devices? Kids especially find it hard to grasp why behavior that’s acceptable at home isn’t acceptable on the playground. If it’s OK to call Alexa fat and ugly, why is it bad to call a classmate the same things?

These problems are compounded when you consider that AI assistants are usually given female personas. According to a recent UN report, these designs bolster the patriarchal notion that women are better suited to servitude, and to abuse. Until things change, we are free to enter into relationships with our (mostly female) virtual assistants in which we never have to account for how our words and actions might make someone feel.

What can marketers do? Here are four ways to keep the sociopath at bay with good AI etiquette.

Set boundaries

Your AI assistant may not (inherently) need to feel respected, but it’s crucial that users see your brand in a positive light. If you use chatbots, give them boundaries that mirror the ones you’d expect in the real world. If users are rude or inappropriate, there’s no reason your AI assistant can’t redirect the behavior or pause the conversation. It’s possible to be liked and respected.
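For teams that script their own conversational flows, a boundary like this can be as simple as screening each incoming message before the bot replies. The sketch below is a minimal, hypothetical Python example; the keyword list, replies, strike threshold and function names are all placeholders of my own, not any particular platform’s API.

```python
# Minimal sketch of a chatbot "boundary" layer (hypothetical example).
# All names, keywords and replies here are placeholders, not a real platform's API.

ABUSIVE_TERMS = {"stupid", "ugly", "fat", "idiot"}  # illustrative list only

REDIRECT_REPLY = (
    "I'm happy to help, but let's keep things respectful. "
    "What can I do for you?"
)
PAUSE_REPLY = "I'm going to pause this conversation for now. Come back anytime."

MAX_STRIKES = 2  # after this many abusive messages, pause the session


def respond(message: str, strikes: int) -> tuple[str, int, bool]:
    """Return (reply, updated_strike_count, session_paused)."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & ABUSIVE_TERMS:
        strikes += 1
        if strikes >= MAX_STRIKES:
            return PAUSE_REPLY, strikes, True   # pause the conversation
        return REDIRECT_REPLY, strikes, False   # redirect the behavior
    return handle_normally(message), strikes, False


def handle_normally(message: str) -> str:
    # Placeholder for the bot's ordinary intent handling.
    return f"Sure, let me look into: {message!r}"


if __name__ == "__main__":
    strikes, paused = 0, False
    for msg in ["You're stupid", "You're ugly", "What's the weather?"]:
        if paused:
            break
        reply, strikes, paused = respond(msg, strikes)
        print(f"User: {msg}\nBot:  {reply}\n")
```

A production bot would likely lean on a toxicity classifier rather than a keyword list, but the principle is the same: the assistant names the boundary, offers a way forward and, if the abuse continues, ends the exchange.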

Be consistent

One way to encourage good behavior is to be consistent across your in-person and digital experiences. Your AI experiences should project the same brand voice that customers experience in your marketing, social media accounts and physical locations. The more cohesive your brand personality is, the more likely consumers are to treat it with respect.

Be cognizant of gender and racial bias

Prevent your AI assistants from perpetuating stereotypes. If your virtual agent has a female persona, design a personality that goes beyond the flat, subservient helper consumers are used to. Rather than defaulting to the same old standard, extend your bot persona into a character that represents your brand image and your diverse customer base.

Raise the bar

In response to consumer feedback, Amazon has already made major improvements to Alexa, giving her persona a more assertive edge and less tolerance for misogynistic and bullying behavior. When customers demand better, it’s on us to deliver it.

Elon Musk famously warned that AI could destroy humanity, by which he meant the end of humanity’s place and primacy in the world. The only way to avoid that is to ensure that AI doesn’t make humanity inhumane. If we can keep technology focused on humanity’s heart, it can do more than make life easier; it might actually make us better people.