What's AI Actually Good for Right Now?

Beyond text and images, here are the ways generative AI is changing the business of marketing

It’s been almost a year since the hype surrounding generative AI was unleashed onto unsuspecting creatives across industries. Since then, the questions we’ve needed to ask about the technology have naturally evolved: Rather than debate whether to use it or what we stand to gain, we must now ask, where is it heading? Where does it add value? How much is too much, or not enough? 

Gen AI’s rise in popularity comes at an interesting time, when economic forces find many organizations cash-strapped. With some companies undertaking a trial by fire while others remain on the sidelines, the overall theory-to-practice timeline this year has been staggered and stunted. But that doesn’t mean the advertising and marketing worlds haven’t advanced.

We asked marketing leaders in our network to reflect on what has actually worked, what’s ready for primetime, and where competitive edge has been gained from their implementation efforts.

Much attention has been paid to the creative enhancements offered by gen AI, with thought leadership focusing mostly on its theoretical implications. What we’ve solicited from our contributors aims to answer: Beyond creative, how is the technology transforming business on a practical level? Every organization will, and should, take a different path, but these responses present a sample of the use cases, solution-oriented processes and governance approaches at play in the gen AI trenches. 

Planning and validation

In our engagement with clients, we see predictive AI models being complemented with generative AI, with agencies prioritizing use cases that have the biggest impact on efficiency, precision and automation of time-consuming tasks. Besides applying it to labor-intensive stages of winning and servicing clients, such as creating pitches and campaign assets, translation and personalization, we see agencies starting to apply gen AI to planning and validation stages.

Part of long-term planning is understanding trends and predicting what consumers’ expectations will look like in the future and whether brands can meet them. Interpreting publicly available data from retailers’ websites, trending inquiries in Google searches and social media mentions provides valuable information about consumers’ interests and sentiment toward brands and competitors, serving as crucial input for retailers’ long-term product portfolio strategies.

Gen AI complements the validation process by comparing audience engagement with previous, similar campaign assets and recommending those likely to resonate better. It’s essential to establish guidelines to safeguard user data, ensuring that the inputs and outputs of gen AI are stored on the organization’s private cloud and not used to train publicly available solutions like ChatGPT or DALL-E. —Stephen Noble, business development director of ad tech, Star
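The validation step Noble describes can be sketched in miniature: find past assets similar to a draft, then surface the ones with the strongest historical engagement. This is an illustrative stand-in only; the asset data is invented, and Jaccard token overlap substitutes for the embedding-based similarity a production system would use.

```python
# Illustrative sketch: recommend past campaign assets similar to a draft,
# ranked by historical engagement. Jaccard token overlap stands in for
# the embedding similarity a real system would compute.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two asset descriptions (0..1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def recommend(draft: str, past_assets: list[dict], k: int = 2,
              min_sim: float = 0.2) -> list[dict]:
    """Return up to k past assets similar to the draft, best engagement first."""
    similar = [a for a in past_assets if jaccard(draft, a["copy"]) >= min_sim]
    return sorted(similar, key=lambda a: a["engagement"], reverse=True)[:k]

past = [
    {"copy": "summer sale on running shoes", "engagement": 0.042},
    {"copy": "new winter coat collection", "engagement": 0.051},
    {"copy": "running shoes for trail and road", "engagement": 0.038},
]
picks = recommend("big sale on trail running shoes", past)
```

Note that the highest-engagement asset overall (the coat campaign) is filtered out because it is not similar to the draft, which is the point of validating against comparable work rather than top performers in general.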

Social listening 

Traditional first-party data collection is plagued with low response rates and fake responses from bots and survey farms. Gen AI is helping marketers develop questions more likely to engage people and ensure responses are authentic. Many have created “wrappers” or synthetic personas to infuse UX into the technology. And that’s just the front end: Marketers are also using gen AI to parse responses in record time—especially the open-ended questions, since LLMs can find common themes in consumers’ qualitative data and highlight those findings. 
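The theme-finding step above can be approximated in a few lines. This sketch uses simple word counts as a stand-in for the LLM step; a real pipeline would prompt a model to cluster and label themes, and the sample responses here are invented.

```python
from collections import Counter

# Minimal stand-in for the LLM step: surface recurring themes in
# open-ended survey responses by counting repeated content words.

STOPWORDS = {"the", "a", "i", "to", "of", "and", "my", "is", "it", "for"}

def common_themes(responses: list[str], top_n: int = 3) -> list[str]:
    """Return the top_n most frequent non-stopword terms across responses."""
    words = Counter(
        w for r in responses for w in r.lower().split()
        if w not in STOPWORDS and len(w) > 3
    )
    return [w for w, _ in words.most_common(top_n)]

responses = [
    "I worry about rent going up again",
    "Rent increases make saving impossible",
    "Saving for a home feels out of reach, rent takes everything",
]
themes = common_themes(responses)
```

Even this crude version shows why LLMs shine here: the qualitative signal ("rent," "saving") is spread across free-text answers no closed-ended question would have captured.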

Most data sets LLMs are built on—publicly available internet chatter, for example—comprise the views of a relatively small number of people who discuss brands on social media. It’s especially under-representative of people in marginalized groups. Gen AI can find people in specific demographic groups for first-party data collection: For example, the National Foundation for Credit Counseling reached 2,000 low- and middle-income renters to learn about their experiences with housing insecurity and eviction. Only about a third of respondents felt they fully understood their rights and opportunities, but the analysis of open-ended questions showed a strong undercurrent of hope and a commitment to achieving home ownership, particularly among communities of color. —Neil Dixit, CEO, and Adam Bai, chief strategy officer, Glimpse 

Human impact 

The measurement of outcomes in AI often goes beyond technical metrics, particularly when we bring democratization, empathy and compassion into the conversation. In this context, it’s essential to assess human impact, such as how well AI applications are received and used by diverse sets of users. Tools like user satisfaction surveys and open channels for feedback play a critical role in understanding this dimension. 

One potential solution is to build this framework on core ethical principles such as fairness, inclusivity, transparency and accountability. These form the pillars of regular audits where these principles are used to evaluate the impact of AI systems. As part of our own audit, we assess whether our systems, especially those used in design or content generation, are promoting diverse perspectives and not inadvertently perpetuating stereotypes. We look at the output and judge whether it reflects the tapestry of experiences and ideas we aim to represent. —Joy Fennell, founder and CEO, The Future in Black 


Explainable AI is the laudable aim of creating models that humans can interpret easily. Without explainability, oversight of models is challenging and ethical governance is almost impossible.

Consider a situation where understanding the rationale behind a decision may change the decision itself. Taking an extreme example, if you fed a model the details of every law in the U.K. alongside every case and the associated ruling in the past, it could make a more consistent and efficient judge in court rulings—but without a clear understanding of the rationale, its rulings are insufficient. Ultimately, explainability is a core tenet of accountability: If you can’t explain why a decision is right, how can you be held accountable for that decision? 

It’s important to consider how much performance degradation you are willing to accept to make the model explainable; you only know a model works by checking results against pre-agreed criteria, which requires sufficient volumes of assets to sense-check the output and can be time-consuming to the point that it negates efficiency savings. We can learn from areas where AI has been successfully implemented over time for automation, like the autonomous driving industry, which uses clearly categorized levels of automation—broadly, Levels 0-5, from “no automation” to “steering wheel optional.” If we think about gen AI in marketing with a similar maturity curve, we can start to shift tasks without undue risk. —James Addlestone, chief strategy officer, Journey Further
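The maturity-curve idea above could be operationalized as a simple gate: each task gets an automation level, and only tasks above a threshold publish without human review. The task names, levels and threshold below are all invented for illustration.

```python
# Illustrative sketch of a maturity-curve gate, loosely modeled on the
# driving industry's Levels 0-5 (0 = fully manual, 5 = fully autonomous).
# Levels and thresholds here are hypothetical, not an industry standard.

TASK_LEVELS = {
    "subject_line_variants": 4,   # generate freely, light spot checks
    "ad_copy_drafts": 3,          # human edits before publishing
    "legal_claims_copy": 1,       # AI suggests, human authors
}

def needs_human_review(task: str, publish_threshold: int = 4) -> bool:
    """Require review for any task below the publish threshold."""
    return TASK_LEVELS.get(task, 0) < publish_threshold

flags = {t: needs_human_review(t) for t in TASK_LEVELS}
```

The design choice is that unknown tasks default to level 0, so anything not explicitly classified falls back to full human review, which matches the risk-averse shift Addlestone describes.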

Automated testing 

Game-changers like ChatGPT can reduce content production requirements for new campaigns or variations by suggesting engagement strategies, refining drafts or even authoring entire campaigns from outlines. However, it is one thing to hand the keys over to an AI to optimize ads that render on a third-party website; it is quite another to relinquish content control for surfaces inside your own product or in messages to existing customers. This will remain true until gen AI is provably able to respect brand safety, maintain brand voice and cease to hallucinate (an excellent property for creative inspiration but a liability if left unsupervised). 

Efforts to address this challenge are underway, including WPP’s collaboration on a hybrid engine that harnesses Nvidia’s Omniverse technology to render 3D models of real products within settings and backgrounds created automatically with gen AI. We also recently launched a QA tool that aims to streamline new content production for marketers, using prompt engineering on top of OpenAI’s GPT-4 model to check for incorrect grammar, inappropriate tone, offensive language or accidental content in both human and gen AI content. For a global audience, it also flags cultural insensitivity and unintended religious connotations. —Bill Magnuson, CEO and co-founder, Braze 


After decades of investment and innovation, programmatic has helped solve engineering challenges and provide solutions for targeting, transparency and attribution. What it hasn’t done—that generative AI has the power to fix—is deliver on its initial promise to tell personalized stories at scale.

There is a novelty aspect to generative AI, and some of the effects are genuinely new: Aside from the visible styling elements, hundreds of invisible variations are generated and tested to decide which one to show; the user sees only the winning design. For example, Cadbury’s ad campaign in India cloned and synthesized Shah Rukh Khan’s face and voice to include local shop names and towns, delivering personalized versions of the ad based on user location.

The application of gen AI, invisible to the consumer, can be implemented in the programmatic ecosystem in several ways, the simplest of which is engineering creative for the runtime using the size of the slot (given screen size and shape), background color, environment and other contextual signals available on the OpenRTB pipe. This is a very repeatable exercise and can be done at scale in 1-3 seconds, but the question is whether variants have the same impact as the original. —Abhay Singhal, co-founder, InMobi Group; CEO, InMobi Marketing Cloud
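The runtime creative engineering Singhal describes can be sketched as matching a pre-generated variant to the slot's signals. Everything here is simplified and hypothetical: the variant set is invented, and the signals stand in for fields a bid request would actually carry; a production system would generate the creative on the fly rather than select from a fixed set.

```python
# Sketch of runtime creative selection: pick the pre-generated variant
# that best fits an ad slot described by simplified, OpenRTB-style
# signals (slot width/height, page background). Data is illustrative.

VARIANTS = [
    {"id": "square_light", "w": 300, "h": 250, "bg": "light"},
    {"id": "banner_dark", "w": 728, "h": 90, "bg": "dark"},
    {"id": "tall_light", "w": 160, "h": 600, "bg": "light"},
]

def pick_variant(slot_w: int, slot_h: int, page_bg: str) -> dict:
    """Prefer an exact size match; break ties on background match."""
    def score(v: dict) -> tuple:
        size_match = (v["w"] == slot_w and v["h"] == slot_h)
        bg_match = (v["bg"] == page_bg)
        return (size_match, bg_match)
    return max(VARIANTS, key=score)

chosen = pick_variant(728, 90, "dark")
```

Because the scoring is a tuple, size fit always outranks styling fit, which mirrors the priority order in the passage: the slot's dimensions are the hard constraint, context signals refine within it.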


AI chatbots in customer support are only as good as the number of inquiries they can automatically resolve. It’s as simple as that. The North Star measurement should be automated resolutions (AR), which we define as a conversation between a customer and a company that is relevant, accurate, safe and does not involve a human. 
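The AR definition above translates directly into a metric. This is a minimal sketch under assumed log fields (`human_involved`, `resolved`, `passed_quality`); the field names and sample data are hypothetical, with the quality flag standing in for the relevance/accuracy/safety checks in the definition.

```python
# Sketch of the automated-resolution (AR) rate: the share of
# conversations resolved with no human involvement that also passed
# relevance/accuracy/safety checks. Log schema is hypothetical.

def ar_rate(conversations: list[dict]) -> float:
    """Fraction of conversations counting as automated resolutions."""
    automated = [
        c for c in conversations
        if not c["human_involved"] and c["resolved"] and c["passed_quality"]
    ]
    return len(automated) / len(conversations) if conversations else 0.0

logs = [
    {"human_involved": False, "resolved": True, "passed_quality": True},
    {"human_involved": True, "resolved": True, "passed_quality": True},
    {"human_involved": False, "resolved": False, "passed_quality": True},
    {"human_involved": False, "resolved": True, "passed_quality": True},
]
rate = ar_rate(logs)  # 2 of 4 conversations resolved automatically
```

Note that the denominator is all conversations, not just bot-handled ones, so escalations to humans pull the rate down, which is what makes AR a North Star rather than a vanity metric.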

Oftentimes, customer support leaders vetting different AI solutions can get lost in a list of features and capabilities, but these details mean nothing if they aren’t helping you automate more resolutions. Our advice is for them to ask, “How does this product help me increase AR?” —Mike Murchison, CEO and co-founder, Ada


AI-powered text-to-speech systems have long been used to convert written content into natural-sounding audio. They’re being used to create voice clones for podcast hosts and “guest” features; AI-based algorithms can enhance audio quality by reducing noise, eliminating background disturbances and improving overall sound production; and platforms can leverage gen AI to deliver recommendations to listeners based on preferences, listening history and behavior. We now leverage tools to produce accompanying transcriptions for every episode, promoting the accessibility of our clients’ podcasts and minimizing human error while boosting SEO performance.

When it comes to value, the three main areas are time-saving, personalization and enhanced creativity. During editing and post-production, gen AI ad insertion saves podcasters time and effort, and personalized recommendations enhance these advertising efforts as well as the listening experience. And gen AI has enabled podcasters to experiment with unique voices, soundscapes and storytelling approaches not previously feasible. —Fatima Zaidi, founder and CEO, Quill 


Recent data from TuneCore indicates more than half of artists are aware of the influence of gen AI, with approximately 27% integrating it into their creative process. Innovations like Google’s TextFX and ChatGPT are just a few of the many tools providing artists with an enriched way to manipulate language, discover new sounds and create supporting content to market their music. While this could lead to an influx of new songs—further diluting the chances of a single track achieving global prominence—it’s worth noting that it will likely pave the way for more artists to achieve regional and local hits. 

I’m particularly enthusiastic about the prospect of licensing artists’ voices for AI-generated music; reported partnerships between tech powerhouses like Google and labels like Universal Music Group highlight this emerging trend. These potential collaborations could open lucrative licensing channels and, more importantly, hint at a future where fans are deeply involved in the music-making process. —Clayton Durant, director of emerging media and platform strategy, MikeWorldWide 


Much of people’s apprehension when it comes to gen AI stems from two factors: Firstly, the fact that AI has been positioned as something dissociated, machine-focused and artificial; and secondly, how perceptions around the technology are being shaped by conflicting voices and opinions within the industry. To bridge this gap and foster acceptance, it is essential to humanize and centralize the communication of gen AI through a strategic, considered and collaborative framework.

An example of where AI has been successfully positioned can be found in Amazon’s Alexa. Deliberately branded with femininity that creates a sense of proximity, Alexa exemplifies the potential for incorporating human-like qualities. While it could be argued that the product has fallen short in capitalizing on its potential, people have embraced Alexa into their homes, not just as a mere AI assistant but as a companion. The value Alexa adds to people’s lives underscores their openness to this particular AI implementation.

Crucially, to ensure the positive integration of gen AI, positioning plays an equally significant role as regulation. Where regulations will govern the ethical and responsible use of gen AI, positioning focuses on shaping public perception and acceptance. Both aspects are crucial for gen AI’s successful integration as a trustworthy technology into society. —Ashleigh Steinhobel, strategy director, FutureBrand