As artificial intelligence’s ability to produce realistic imagery and video grows, the television and movie industries are starting to wrestle with questions about what this new technology means for the future of video production.
At the Consumer Electronics Show in Las Vegas this year, Hollywood insiders and AI experts discussed how emerging tech like voice cloning, more realistic-looking language dubbing, de-aging and content generation could change how movies, television shows and commercials are made. The conversation came as Silicon Valley investors have come to see generative AI as a bright spot in an otherwise economically challenged tech landscape, likely to reach a turning point in terms of its commercial feasibility in the coming year.
“The reason this is so important for Hollywood is because AI can create content in any medium right now—images, text, film, 3D—and generative AI will increasingly be used as a foundational building block for all content creation in the future,” said Nina Schick, author of the book Deepfakes: The Coming Infocalypse.
The pace of AI innovation in the past several months has far outstripped the time needed for policymakers and trade groups to set rules and norms around how artists and content creators should be compensated. For instance, the rise of image generators has raised questions around copyrights and credit for artists whose work served as the training material for the algorithms.
Concerns about compensation
Duncan Crabtree-Ireland, national executive director and chief negotiator of the SAG-AFTRA actors guild, said the union is working to create guidelines around how its members should be paid when their AI-generated likeness is used in video content. The panelists pointed to James Earl Jones, who recently licensed his voice to a Ukrainian AI startup that will recreate it in future work with royalties paid, as an example of how AI can benefit actors.
“What SAG-AFTRA is eager to do is help channel the benefits of AI into a future that’s really beneficial for our members and for the public in general,” Crabtree-Ireland said. “You need the consent of the people who are involved and you need to compensate those people appropriately.”
Joanna Popper, chief metaverse officer at talent agency Creative Artists Agency, echoed some of those concerns. The firm recently invested in Deep Voodoo, a deepfake production company from South Park creators Trey Parker and Matt Stone, which she said helps actors better do their jobs.
“We believe that each talent has the right to decide when and where their name, image and likeness is used and to get compensated for that,” Popper said.
How generative AI is already being used
Film director Scott Mann discussed how Flawless, the startup he co-founded that helps sync the lips of actors to dubbed languages using AI, could help expand the reach of TV shows and movies internationally, a key concern for an industry that’s come to rely on an increasingly global market.
“Hollywood for a long time has been broken, it’s kind of slowly dying, in a sense,” Mann said. “And a lot of that is to do with distribution and actually being able to share content globally and make global films.”
For all the attention paid to flashier image and text generators that can create original content from whole cloth, most of generative AI’s current uses in video production are more subtle. Visual effects firm Marz has used AI for de-aging and cosmetic work on actors in titles including Stranger Things, The Umbrella Academy and Spider-Man: No Way Home with a tool the firm created called Vanity AI.
“The benefits are obvious. Our visual effects artists can do the work 300 times faster than the current state of the art,” said Marz chief operating officer Matt Panousis.