Documentary Producers Are Adopting Shared AI Guidelines. Will the News Networks Be Next?

By Ethan Alter 

Once exclusively a feature of science fiction storytelling, generative AI has rapidly become a bug complicating how real-world stories are told. Journalists, documentary filmmakers and others in the non-fiction space are increasingly forced to navigate AI intrusions, whether it’s fake images of President Joe Biden or former President Donald Trump ahead of their looming election-year rematch, misinformation-filled websites or supposedly authentic archival materials that are actually anything but.

So far, a unified approach for how truth-tellers across the larger media industry can separate fact from AI-generated fiction remains elusive. But at least one group is planting a flag for a shared set of guidelines. Earlier this month, the Archival Producers Alliance—which represents over 300 documentary professionals—released a first draft of proposed best practices for how producers and researchers can employ AI in their work. The initial draft includes such recommendations as greater transparency in both the production and release phases, as well as extensive legal reviews if and when AI tools might be used to alter a subject’s likeness. The APA hopes to publish a finalized set of guidelines later this year.

And the organization’s founders tell TVNewser that they would love to see the major news networks follow suit with a set of shared best practices. “We do believe that TV news could use something similar,” APA leaders Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli note in a joint e-mail interview.


“Our guidelines are the result of conversations with experts from many parts of the documentary world as well as AI ethicists who brought unique thoughts and insights,” the trio add. “We think it would be invaluable for TV news folks to gather in a similar way to assess how to best re-affirm the values of their field in light of this new technology. The process of creating the guidelines was invaluable for honing our understanding of the power, potential and risks of this new technology and we imagine it would be for the networks as well.”

‘We do believe that TV news could use something similar’ 

Right now, TV news outlets largely have their own approaches to managing the ever-encroaching presence of AI. TVNewser reached out to multiple networks to ask about their current policies on using generative AI tools in news reporting. The overall sentiment from those that responded suggests that generative AI has a potential role to play in non-reporting duties like tagging articles and writing headlines.

“We are not using AI for any reporting, content creation or writing of articles,” remarks Porter Berry, president and editor-in-chief of Fox News Digital. “That said, we have been experimenting with AI and are very excited about the possibilities and potential when it comes to helping our journalists enhance their research and assisting on back-end tasks like article tagging.”

“We’ve encouraged our journalists to familiarize themselves and experiment with the tools,” says Christina Hartman, vice president and head of Scripps News, who cites metadata generation, transcription, translation and story idea generation as some of the network’s permitted uses of AI—albeit always with oversight from flesh-and-blood journalists.

“Scripps journalists are always responsible for the facts they gather and report, so the use of generative AI tools is strictly prohibited for scriptwriting and article composition,” Hartman emphasizes. (Scripps has published its current policies about AI at the Scripps Media Trust Center.)

Asked whether they would consider collaborating with other news networks on devising official AI best-practice guidelines in a similar fashion to the APA, Berry strikes a noncommittal note, saying: “We are always discussing best practices.”

Hartman, meanwhile, says that Scripps is already a participant in the Online News Association’s AI Innovator Collaborative, a monthly small-group gathering dedicated to putting AI tools through their paces. “We would welcome further collaboration with other news networks,” she adds. “It’s important to move both thoughtfully and nimbly, and we’re happy to share our learnings as well as to learn from others.”

‘We risk losing our audience’ 

Speaking of learning, the documentary community recently received another crash course in why industry-wide AI guidelines increasingly seem like a necessity. Just as the APA shared the first draft of its proposed best practices, Netflix found itself accused of using AI-manipulated photos in its buzzy true-crime documentary, What Jennifer Did.

The ensuing outcry quickly went viral on social media, with critics taking the streamer to task for not disclosing the use of AI in the credits. Speaking with The Toronto Star on April 19, the documentary’s executive producer, Jeremy Grimaldi, insisted that the photos were authentic, although photo editing software was used to anonymize the background.

Asked about What Jennifer Did, the APA leaders decline to comment, not having seen the film. But they do consider the controversy it generated a case study in why a shared set of guidelines is needed as AI tools become more and more common. “It shows that people want some form of transparency when generative AI is used,” the trio say. “And it indicates that we risk losing our audience—and soon—as trust erodes. As soon as people start questioning authenticity, it’s a slippery slope for them not to believe much of anything at all.”
