Anthony Bourdain Deepfake Controversy Points to Pitfalls of AI-Generated Content

As synthetic media grows more popular as a creative tool, creators must navigate new ethical ground


A new documentary on the life of Anthony Bourdain is facing backlash after its director revealed he used machine learning techniques to clone the late celebrity chef’s voice and narrate select quotes for which no actual audio recordings exist.

Morgan Neville, whose film Roadrunner: A Film About Anthony Bourdain hit theaters last week, admitted in interviews with GQ and The New Yorker that the production team had trained an artificial intelligence model on hours of recordings of Bourdain’s voice in order to bring at least three lines of his writing to life. The director claimed to have secured the blessing of Bourdain’s family and close friends to do so.
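For readers curious what this kind of voice cloning looks like in practice, here is a minimal sketch using the open-source Coqui TTS library, whose XTTS model can mimic a speaker from short reference recordings. The file paths and sample line are illustrative assumptions; nothing here reflects the actual tooling the Roadrunner production used.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library.
# Illustrative only: this is not the documentary's actual pipeline, and the
# file paths and narration line below are invented for the example.
from TTS.api import TTS

# XTTS v2 is a multilingual model that clones a voice from short
# reference recordings rather than requiring hours of custom training.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2")

# Condition synthesis on reference audio of the target speaker, then
# render an arbitrary line of text in that voice.
tts.tts_to_file(
    text="A hypothetical line of narration.",
    speaker_wav="reference_speaker.wav",  # clip of the target voice
    language="en",
    file_path="synthetic_narration.wav",
)
```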

The revelation nevertheless set off a flurry of social media backlash: viewers felt duped by the interspersing of fabricated audio without disclosure, fans felt the imitation was antithetical to the travel star’s famously authentic persona, and others simply bristled at what the use of such techniques might portend for the future of trust in media.

Controversies like these are likely to become increasingly common as the machine learning tech behind deepfakes and voice cloning becomes more advanced and more accessible, both to content creators and to bad actors looking to use it for nefarious purposes. While documentary filmmaking of course comes with its own set of journalistic ethical considerations, brands and agencies are also already starting to experiment with this sort of synthetic media as a tool for everything from remote video production to de-aging and bringing historical figures to life.

These recent advancements in generative AI have the potential to unlock new creative possibilities in the long run, but they also risk creeping people out or burning trust with consumers if used in ways that feel deceitful or exploitative. Creators will therefore have to navigate new ethical terrain as they go, establishing rules for what counts as acceptable and responsible use of the technology.

“The generative capabilities of AI—both deepfakes and synthetic media—pose ethical issues for marketers,” said Gartner Research analyst Nicole Greene. “Disinformation, or the lack of information, can cause damage to brands. It will be important for marketers to build trust with customers and provide transparency around efforts, either with complementary technology or through direct communication with customers—or both.”

A deal with the devil

Controversies around the use of celebrity likenesses after death stretch at least as far back as a 1997 commercial in which Dirt Devil digitally edited footage of a dance routine by Hollywood icon Fred Astaire, who died in 1987, to make it appear as if he were hawking the vacuum company’s latest product. The move drew widespread outrage at the time, and Astaire’s daughter said she was “saddened that after his (Fred’s) wonderful career he was sold to the devil,” according to a 1997 Variety article.

The uproar was an early example of how digital video editing technology and CGI were already beginning to allow footage to be altered in ways consumers could find deceptive. But the technology has accelerated in the past few years, and especially in recent months, as AI has made video and audio imitations of almost anyone more realistic and easier to produce than ever before.

In the past year or so alone, Hulu, State Farm and Spotify have all used machine learning to artificially recreate celebrities and their voices in some form. Each of those efforts was either clearly disclosed in a creative way or obviously fake enough not to need an explicit disclaimer: State Farm and ESPN’s fake vintage news footage of sportscaster Kenny Mayne and Hulu’s creative workaround to Covid-19-related production challenges both made winking references to their own fakeness, while Spotify’s recreation of The Weeknd was not quite realistic enough to fool anyone into thinking it was the R&B singer himself.

It helps that synthetic media is still at a stage where it is seen as a novelty, and brands and agencies naturally like to show off their use of cutting-edge tech. But as these tools become more commonplace and ingrained in existing creative software (one new startup, for instance, lets advertisers create a deepfake sales pitch simply by inputting a text script), disclosure might become a trickier issue.

Gartner’s Greene said brands should still make a clear effort to disclose when they are using artificial media of some sort; consumers can actually be receptive to such practices when creators are transparent about their use and people don’t feel as if they are being duped in some way. “As we’ve seen, when brands have partnered with engineered social media influencers or when films have used generative AI to bring back characters, as long as consumers are aware of what is happening they are receptive,” Greene said.
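As one hypothetical illustration of the “complementary technology” Greene mentions, a brand could ship a machine-readable disclosure inside the synthetic file itself. The sketch below uses Python’s mutagen library to write a user-defined ID3 text frame into an MP3; the tag name and wording are invented for the example, not any industry standard for labeling synthetic media.

```python
# A hypothetical sketch of machine-readable disclosure: embedding an
# "AI-generated" label in an MP3's ID3 metadata via the mutagen library.
from mutagen.id3 import ID3, TXXX

# Load the file's existing ID3 tag (assumes the MP3 already has one).
audio = ID3("synthetic_narration.mp3")

# TXXX is ID3's user-defined text frame; the desc and text below are
# invented labels, not a standardized synthetic-media disclosure scheme.
audio.add(TXXX(
    encoding=3,  # UTF-8
    desc="SYNTHETIC_MEDIA_DISCLOSURE",
    text="Voice generated with AI; used with permission of the estate.",
))
audio.save()
```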

Bad actors

Content creators using synthetic media will also have to contend with the widespread bad reputation deepfakes and voice cloning have earned as tools for misinformation and cybercrime.

Deepfakes are already powering a dark web industry of non-consensual pornography, according to reports. Meanwhile, the technology is reaching a point where its use in phishing scams, biometric identity theft and impersonation for extortion is likely to become much more common, according to David Britton, credit agency Experian’s vp of industry solutions, global ID and fraud.

“These attacks are not prevalent in the market today—there’s not a huge number of widespread attacks around this,” Britton said. “But the technology is finally at a place—particularly driven by the advancements on AI and machine learning—that they can start to develop really sophisticated, deeply technical techniques and technologies.”

While the fact that this technology can be used for criminal purposes doesn’t necessarily detract from its creative potential, it does mean content creators must tread carefully in how they use it and be wary of the reputation it may acquire as such cyberattacks become more common. For his part, however, Britton still believes it’s possible to use deepfakes and voice cloning responsibly.

“[There needs to be] full disclosure around the fact that they’re leveraging some synthetic capability around voice or video—I think that’s gonna go a long way,” Britton said. “And there are creative ways to do that. I think in the articulation out to the consumer market, that will help establish precedent. And then by doing so, if the market does that, those that no longer do it, I think are going to be held to higher standards and higher scrutiny and be called out for it.”