How to Manage 6 Generative AI Weaknesses That Impact Brand Experiences

A lot can go wrong if the proper infrastructure measures aren’t taken


No CEO wants to lose $100B of market value because of an AI mishap. Is now the right time for brands to jump in and fully invest in ChatGPT and other generative AI experiences?

To make that decision, brands should consider the six weaknesses of generative AI and the extent to which each impacts their goals.

The 6 Weaknesses

Misinformation

ChatGPT can produce misinformation, depending on the data sources used and the topic in question. A common cause is a lack of data freshness: a stroller recall, for example, will not be factored in if the LLM was trained on data sources that end in 2021. For brands where consumer safety depends on current information, this can be a deal-breaker.
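As a hedge against stale training data, some teams retrieve current facts at query time and inject them into the prompt rather than relying on the model's memory. Below is a minimal Python sketch of that pattern; the recall feed, matching logic and prompt wording are all hypothetical stand-ins for a real data pipeline:

```python
from datetime import date

# Hypothetical fresh-data feed: in practice this might be a regulator's
# recalls API, a product database or a search index. All names here are
# illustrative assumptions, not a real service.
RECALL_NOTICES = [
    {
        "product": "Acme Stroller X100",
        "issued": date(2023, 4, 12),
        "summary": "Hinge can pinch fingers; 2021-2022 units recalled.",
    },
]

def fresh_context_for(query: str) -> str:
    """Pull recall notices relevant to the query so the model is not
    limited to its (possibly stale) training data."""
    terms = query.lower().split()
    hits = [
        n for n in RECALL_NOTICES
        if any(t in n["product"].lower() for t in terms)
    ]
    return "\n".join(
        f"- {n['issued']}: {n['product']}: {n['summary']}" for n in hits
    )

def build_prompt(query: str) -> str:
    context = fresh_context_for(query) or "- none found"
    return (
        "When the facts below conflict with anything you otherwise "
        "believe, prefer the facts below.\n"
        f"Facts retrieved {date.today()}:\n{context}\n\n"
        f"Customer question: {query}"
    )

print(build_prompt("Is the Acme Stroller X100 safe to buy used?"))
```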

Hallucinations

In AI, a “hallucination” refers to information the LLM presents as true but that is in reality fabricated or nonsensical, a product of the bot’s lack of real-world understanding. One example is targeted advertising, where an AI algorithm may incorrectly infer a user’s interests from their online behavior or search history.

For instance, an AI algorithm may associate a person’s online searches for camping gear with an interest in hunting, even if the person has never searched for hunting-related content. As a result, the algorithm may display ads for hunting equipment that are not just irrelevant but potentially offensive.

Questionable ethics and legal liability

Generative AI has the potential to revolutionize how content (text, images, video, computer code, legal contracts and architectural drawings) is created, but it also carries risks, including plagiarism and copyright infringement, both of which matter greatly for intellectual property (IP) rights. Plagiarism is the most often-cited ethical concern with generative AI tools, but another is misleading vulnerable customers.

Information may be influenced by biases present in the training data, which can lead a customer to purchase items that don’t align with their beliefs or philosophies. Think misleading responses to product questions about sustainability or animal rights and testing. Whether such responses could be purposely misleading, with the blame shifted onto the AI, remains a gray area.

Expensive training costs

Training AI models is time-consuming and costly. A single training run for GPT-3 is estimated to cost around $1.4 million, and for some larger LLMs the training cost ranges from $2 million to $12 million. Even in basic scenarios, ongoing training requires infrastructure that supports ingesting and feeding the right context into LLMs so they can generate high-quality answers while remaining cost-effective at scale.

This can be a challenge, especially for smaller brands. Think about how often your product SKUs are updated, a new iPhone launches or Nike releases a new line. Keeping up isn’t as easy as you may think.
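One way around constant retraining, sketched below, is to keep volatile catalog facts in an external index that is cheap to update and is pulled into the prompt at query time. The in-memory dict, SKUs and descriptions here are illustrative, not a real product feed:

```python
# Rather than retraining the model whenever the catalog changes, product
# facts can live in an external index that is cheap to update. The dict
# below is a stand-in for a real search or vector index.
CATALOG_INDEX: dict[str, str] = {}

def upsert_sku(sku: str, description: str) -> None:
    """Updating one entry costs near nothing, versus a full training run."""
    CATALOG_INDEX[sku] = description

def context_for(query: str) -> list[str]:
    """Return catalog entries worth feeding into the prompt for this query."""
    terms = query.lower().split()
    return [
        desc for desc in CATALOG_INDEX.values()
        if any(t in desc.lower() for t in terms)
    ]

upsert_sku("IPH-15", "iPhone 15, 128GB, launched September 2023")
upsert_sku("NKE-PEG", "Nike Pegasus running shoe, new spring colorway")

print(context_for("latest iphone"))
# -> ['iPhone 15, 128GB, launched September 2023']
```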

Lack of personalization

LLMs are not designed to get to know customers better over time. In a store, a sales clerk can “read” a customer’s intent from the choices they make and the items they gravitate toward, including nonverbal signals such as facial expression or body orientation during the selection process.

Generative AI tools may be stumped by complex problem-solving situations, such as a distressed customer. Beyond that, they cannot factor in the history of a customer’s interactions to make contextually relevant recommendations. LLMs have vast linguistic knowledge but lack behavioral data, which can translate into opportunity costs or off-putting interactions. Unless a shopper explicitly shares their preferences, the interaction is bound to lack personalization, which can hurt the brand experience.
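Brands can partly compensate by tracking interaction history themselves and feeding it into the prompt. The sketch below assumes a hypothetical ShopperSession object standing in for a real CRM or customer data platform; the product names and preferences are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ShopperSession:
    """Illustrative per-customer state that an LLM does not accumulate
    on its own; a real system might persist this in a CRM or CDP."""
    viewed: list[str] = field(default_factory=list)
    stated_preferences: list[str] = field(default_factory=list)

    def as_prompt_context(self) -> str:
        lines = []
        if self.viewed:
            lines.append("Recently viewed: " + ", ".join(self.viewed[-5:]))
        if self.stated_preferences:
            lines.append("Stated preferences: " + ", ".join(self.stated_preferences))
        return "\n".join(lines) or "No history for this shopper."

session = ShopperSession()
session.viewed += ["trail running shoes", "lightweight rain jacket"]
session.stated_preferences.append("vegan materials only")

prompt = (
    "You are a retail assistant. Tailor your suggestion to this history:\n"
    f"{session.as_prompt_context()}\n\n"
    "Shopper asks: What should I pack for a weekend hike?"
)
print(prompt)
```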

Privacy and security vulnerabilities

Customers may share sensitive information, such as personal or financial details, in an ecommerce interaction. Where is that information stored? Where might it pop up next?

A lot can go wrong here if the proper data protection measures aren’t taken. A company needs the right security infrastructure in place (e.g., a cloud platform with a common data source, controlled ingestion and a secure framework).
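One such measure, sketched below, is redacting sensitive details before a customer message is logged or forwarded to a third-party model. The regex patterns are deliberately simplistic placeholders; a production system would typically use a dedicated PII-detection service:

```python
import re

# Illustrative patterns only, not production-grade PII detection.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text is
    logged, stored or forwarded to a third-party model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

msg = "My card 4111 1111 1111 1111 was declined; email me at jo@example.com"
print(redact(msg))
# -> My card [CREDIT_CARD REDACTED] was declined; email me at [EMAIL REDACTED]
```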

A cautious balance of pros and cons

So, how should brands weigh these downsides in the context of making decisions to leverage what generative AI has to offer their customers?

Consider the sensitivities in your brand’s space. If your brand contends with strict standards, regulations or other areas where sudden changes can impact customer safety (e.g., medication or safety equipment), take the lack of data freshness seriously when deciding on generative AI tools. An antique jewelry company may find freshness of information less critical than an aircraft or car parts manufacturer, though both could theoretically use an LLM’s linguistic abilities for customer service without relying on its internal memory.

Identify your ideal use cases for generative AI interactions. Whether generative AI will be used largely for customer support, commerce, website or workplace search, or in some other way is a key factor in whether it makes sense for a brand to jump in early or wait.

If the use case can tolerate the potential for hallucinations and fabrications, perhaps with a layer of human-guided reinforcement learning applied after initial training to teach the model and steer it into compliance with your business rules and objectives, then it is worth considering.
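Alongside reinforcement learning, a simpler guardrail is to check each draft answer against business rules before it reaches the customer. The rules, disclaimer and fallback wording in this sketch are invented for illustration:

```python
# A minimal post-generation guardrail: draft model answers are checked
# against simple business rules before reaching the customer. The rule
# contents and fallback wording are illustrative assumptions.
FORBIDDEN_CLAIMS = ("guaranteed cure", "100% safe", "medical advice")
SAFETY_DISCLAIMER = "Consult the product manual for full safety details."

def enforce_rules(draft_answer: str) -> str:
    lowered = draft_answer.lower()
    if any(claim in lowered for claim in FORBIDDEN_CLAIMS):
        # Refuse to ship a non-compliant answer; escalate to a human.
        return "Let me connect you with a specialist for that question."
    if "safety" in lowered and SAFETY_DISCLAIMER not in draft_answer:
        draft_answer += f"\n\n{SAFETY_DISCLAIMER}"
    return draft_answer

print(enforce_rules("This helmet is 100% safe for toddlers."))
print(enforce_rules("Our helmet meets current safety standards."))
```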

Assess the risk of your IP or other sensitive information leaking into the public domain. Sensitive information (e.g., IP or personally identifiable customer data) could always surface online via a security breach, but using company content to “train” chatbots opens another avenue. LLMs are susceptible to data extraction attacks, and there is also uncertainty about responses that may deviate from the company’s official position on various matters, despite the model’s access to extensive repositories through training data.

Brand decision-makers may not feel comfortable experimenting with AI until these unknowns have been addressed. And the technology is evolving daily, most recently with OpenAI adding new privacy options that keep user data from being used to train its public models.

It’s no wonder that there has been so much buzz around generative AI; it has created a true paradigm shift, providing the business world with incredible potential. As more brands and technology companies dabble in this realm, it’s entirely possible that we will find new ways to address the brand experience challenges that are currently inherent in generative AI. This will open up the arena to more brands in industries where the challenges are presently prohibitive.

Ultimately, a focus on relevance, accuracy and security will empower brands to take the leap.