Avoiding 3 Brand Risks That Come With Personalization at Scale

And how machines can help you do this

AI and machines’ ability to learn introduce a host of new opportunities for brands and content producers, most notably hyper-personalization at unprecedented scale. From big, unstructured data analysis to advanced segmentation, and from anthropomorphized virtual agents to all manner of biometrically enabled recognition, these capabilities enable altogether new modes of personalization, but they also come with new risks.

Companies have always had to avert technology-related fumbles and contend with PR threats or disgruntled customers, but the rise of automation introduces new categories of issues most companies have yet to confront. The good news is that whether deploying machine learning in narrow, experimental pilots or developing enterprise-wide AI programs, there are best practices companies can apply to mitigate the following three risks.

Automating your brand persona

Companies are beginning to leverage AI across myriad brand-to-consumer interactions. Many are using chatbots to extend the brand as a stylist or concierge, easing the pressure to buy by giving these bots personality, the capacity to advise and the ability to engage far beyond the scope of sales or customer support. While this introduces new ways to scale insights and personalization, it also means companies suddenly find themselves depending on AI to communicate and convey the character of their brand.

Automation may drive scale, but brands risk sacrificing quality for quantity. Off-brand content is one issue, particularly given machines’ struggle to grasp personality, humor, values, morality, empathy or simple nuance. Ill-timed content is another: even when tone and quality are right, machine learning models struggle to account for temporal, news-sensitive, cultural or local considerations. And sensitive content, such as relying on code-based avatars to communicate brand policies, expired memberships, breaches or bad news, doesn’t always land.

What does AI leadership look like for your brand? Many leading companies are developing cross-functional AI-innovation groups dedicated to optimizing how AI is used across the organization.

Meanwhile, marketers and communications teams should double down on defining brand purpose and persona. What qualities does the brand stand for? How will AI support and serve those ends? Design AI-driven workflows with a clear sense of what should remain a human interaction. Dedicate resources to monitoring AI proactively instead of just reacting.
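
To make this concrete, here is a minimal sketch of one way to encode "what should remain human" as an explicit routing rule rather than an implicit model behavior; the intent labels, confidence threshold and destination names are hypothetical placeholders, not a prescribed framework.

```python
# A minimal sketch: the human/AI boundary lives in an explicit, reviewable
# rule instead of being left to the model's judgment. All labels and
# thresholds here are hypothetical.

# Interactions the brand has decided must never be fully automated.
HUMAN_ONLY_INTENTS = {
    "complaint",       # brand reputation on the line
    "account_breach",  # sensitive bad news
    "self_harm",       # safety-critical
}

CONFIDENCE_FLOOR = 0.75  # below this, the bot should not answer on its own


def route_message(intent: str, confidence: float) -> str:
    """Decide whether the bot or a person handles this interaction."""
    if intent in HUMAN_ONLY_INTENTS:
        return "human_agent"         # by design, never automated
    if confidence < CONFIDENCE_FLOOR:
        return "human_review_queue"  # low confidence: triage, don't guess
    return "bot"                     # safe to automate


print(route_message("product_question", 0.62))  # human_review_queue
print(route_message("complaint", 0.99))         # human_agent
```

The value of this pattern is that the boundary is written down where marketers, lawyers and engineers can all review it, rather than buried in model behavior.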

Achieving transparency

AI suffers from an introspection problem: understanding why and how a model reached an outcome remains an opaque endeavor. Yet knowing which factors, layers and nodes carried the greatest weight in a decision is exactly the context brands need.
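
One partial answer is a family of introspection techniques such as permutation importance, which estimates which input features carried the greatest weight by shuffling each feature in turn and measuring how much predictive accuracy degrades. A minimal sketch using scikit-learn on synthetic data, for illustration only:

```python
# A minimal sketch of permutation importance; the model and dataset are
# synthetic stand-ins, not a recommended production setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy degrades; the
# features whose shuffling hurts most carried the greatest weight.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```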

In an age of eroding trust, the need for transparency is also evolving alongside political and cultural adaptations to automation. For example, GDPR outlines new requirements for user consent as well as for how personally identifiable data can be used in automated profiling. Further, brands must realize that the need for transparency in AI models isn’t just an internal issue; it has customer ramifications that get to the heart of trust. Some 81 percent of consumers expect companies to tell them when and how they are using AI. Meanwhile, brands’ targeting tools are getting sharper, and their capacity to compile, and draw inferences from, extremely intimate data on individuals amplifies the potential for creepiness.

There are many steps companies can take to demonstrate trustworthiness. As a general rule, err on the side of transparency and inform users when and where they are interacting with AI. Develop all AI workflows with triage and escalation plans. Institutionalize knowledge-sharing, employee education and collaboration around AI, and dedicate resources to monitor advancements in AI transparency emerging across vendors, academia and open-source communities.

A new class of liabilities

AI brings a seismic shift to enterprise legal questions. To date, legal systems have regulated human behavior and focused on intent. AI isn’t human, but its application is entirely about understanding and reproducing human cognition. Artificial systems are incapable of forming criminal intent, and they self-learn; both facts complicate all matters of fault and foreseeability.

We are just scratching the surface of the emerging liabilities brands must consider when applying AI to media, products and processes traditionally delivered with a human touch. What happens when AI makes a decision or recommendation that turns out to be dangerous, disenfranchising or just wrong? Who is liable, and how do we define harm? What happens when users communicate potentially sensitive information to a branded bot, such as threats, crimes or suicidal thoughts? What liabilities should employees understand? As brands rely on a value chain of data to train AI, they expose themselves to inadvertent risks such as bad or biased data and the manipulation or falsification of content. Who is accountable, and when?

The most immediate action companies can take to mitigate such nebulous liabilities is to lean into AI governance practices with the rigor of a highly regulated industry. Log events associated with AI training, monitor roll-outs and iterate slowly, conduct internal audits, and document testing, metrics and plans for improvement.
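
What might that rigor look like in practice? Below is a minimal sketch of structured audit logging for individual AI decisions; the model version string and record fields are hypothetical, not a standard schema, but the pattern of recording timestamps, model versions, hashed inputs and outputs is what later audits rely on.

```python
# A minimal sketch of structured audit logging for AI decisions; field
# names and the model version are illustrative assumptions.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")


def log_ai_decision(model_version: str, user_input: str,
                    output: str, confidence: float) -> None:
    """Record one model decision with enough context to audit it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the trail is auditable without storing
        # personally identifiable data verbatim.
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    audit_log.info(json.dumps(record))


log_ai_decision("recommender-v1.3", "user query text",
                "suggested_product_42", 0.91)
```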

This past year also saw leading tech companies invest in AI ethics and principles. Companies like SAP are building AI ethics advisory boards to advance decision-making and responsible use, while others like Apple and PayPal are joining industry consortia like the Partnership on AI to pool expertise and share best practices.

When competing for relationships in an age of information onslaught, deploying AI to scale personalization is inseparable from introducing new risks. As arbiters of customer experience, marketers play an active role in balancing the rewards AI promises with the safeguards and mitigation needed for success.