Drive-Thrus, AI … and Empathy

Implications behind replacing human interactions within the consumer journey


In 2019, McDonald’s acquired Apprente, a startup specializing in voice ordering, and created McD Tech Lab. The result: ordering via bot instead of human, which reduces staffing needs and, during the pandemic, was used by some fast-food chains to limit employee-to-customer interaction.

There is a significant shift to digital, with 20% of orders in McDonald’s six top markets made through apps, kiosks or delivery apps, accounting for $13 billion in sales, per Restaurant Dive. I bask in the warm glow of the efficiency and increased public safety.

That said, especially as my first job was at a Dunkin’ drive-thru, I personally find it endlessly sad that robots are replacing humans in customer service functions. I refused to use the self-checkout at CVS for a whole year before giving in.

Why? The overall societal implication of less human-to-human contact.

The evolution of consumer interactions … or lack thereof

In 1917, Clarence Saunders opened the first self-service grocery store, a Piggly Wiggly in Memphis, Tenn., in which customers could take items off the shelves [gasp] themselves. He called the concept a “self-serving store.” Nearly seventy years later, in 1984, David R. Humble created and patented a self-service register. Now, many of us already get our groceries delivered rather than setting foot in a grocery store.

For the record, I personally work hard to go to a physical location, both for the interaction and because Amazon Fresh’s replacement choices are downright concerning. An eggplant is nothing like celery.

In 2013, I did a TEDx talk about the implications of digital overload on the brain. One of those effects was a potential decline in empathy, as eye-to-eye contact activates the social brain: the neural regions that orchestrate our responses to other people.

With fewer and fewer of those interactions, it stands to reason that our ability to read those signals would diminish. Specifically, eye-to-eye contact has been shown to activate the cerebellum, which helps predict the sensory consequences of actions. It also stimulates the limbic mirror system, which supports our ability to recognize and share emotion, or in other words, empathy.

The decline in empathy is not merely anecdotal. One study of American students, published in the Personality and Social Psychology Review, found that empathy levels in this cohort declined 48% between 1979 and 2009. Honestly, that’s not even hard to gut-check.

The consequences of reduced empathy

What would the effect of reduced empathy be en masse? The World Economic Forum’s 2019 Global Risks Report found that the decline in human empathy created global risks in the “age of anger.”

Anger, you say? Hmmm, that sounds like the violence at the Capitol and a surge in violent in-flight outbursts, with 3,100 reports of unruly behavior filed with the FAA between January and July. OK, OK, you might say that most of those in-flight incidents resulted from disagreements about mask policies, and you’d be correct.

In fact, 76% of those 3,100 reports involved a failure to comply with federal mask mandates. Going further, though, I would say a lack of empathy is also at play: we don’t know whether each of those flight attendants agrees with the mandates, but they are all just trying to do their jobs.

With a little more empathy toward those folks just doing their jobs and trying to get by, would this behavior continue? We’ll never know. Overall, a decline in empathy seemingly leads to a culture of increasingly disconnected, furiously polarized and isolated people.

To bot or not to bot

For brands delving into the AI space, I would encourage them to become fully educated about the potential ethical ramifications of the technology. “Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications—not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Michael Sandel, professor of political philosophy at Harvard.

Then, I would encourage qualifying which tasks could be replaced by a bot with limited impact and which present a higher risk. Those risks should be assessed not just by the hit to profit margins but by brand perception and, further, cultural implications.

Using AI to suggest book recommendations in an online environment is low risk, as the expectation was never to have human interaction in the first place. Replacing a human with voice AI in a drive-thru carries higher risk, not only from empathy concerns but potentially from privacy and bias concerns as well. What is the long-term shift in brand perception at 40% AI utilization, at 30%, and so on? Where can the technology improve customer satisfaction scores rather than hinder them?

In the race to be more efficient, and thereby more robotic, I’m worried that our social fabric will continue to erode. I call out McDonald’s simply as an example. That company’s job is to keep increasing profits and delivering value for investors.

They are doing just that. I’m simply presenting the other shoe dropping: the potential negative implications of all these wins in efficiency. As Coach Taylor said in Friday Night Lights, “Clear Eyes, Full Hearts, Can’t Lose.”