Getty Images Is Using Artificial Intelligence to Help Newsrooms Choose Better Photos

The photo agency worked with 20 newsrooms and an AI firm to create Panels

Getty Images is embracing artificial intelligence, starting with a way to help publishers pick photos.

Today, the photo agency debuted a tool that uses AI to analyze a story's text and suggest photos to accompany it. The tool, called Panels, uses natural language processing—a term for how computers can learn to “read” human words, phrases and sentences—to match a story with images based on keywords, captions and other criteria. Publishers then get custom filters and a self-improving algorithm, letting them rearrange keywords or select images through a more human-driven process.

Here’s how it works: When someone enters the URL of a story or copies and pastes in its text, Panels analyzes the words, weighing candidates by frequency and relevance before suggesting the people, places and things that appear in the story.
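Getty hasn’t published Panels’ internals, but the pipeline described here (named-entity extraction followed by frequency- and relevance-based ranking) can be sketched in a few lines of Python. The example below assumes spaCy for entity recognition; the entity types and the scoring heuristic are illustrative, not Getty’s actual approach.

```python
# A minimal sketch of the kind of analysis Panels is described as doing.
# This is NOT Getty's implementation; spaCy and the scoring heuristic
# here are assumptions for illustration.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")

def suggest_keywords(article_text: str, top_n: int = 10) -> list[str]:
    """Extract people, places and things, ranked by frequency."""
    doc = nlp(article_text)
    # Entity types that roughly map to "people, places and things."
    wanted = {"PERSON", "GPE", "LOC", "ORG", "EVENT"}
    counts = Counter(ent.text for ent in doc.ents if ent.label_ in wanted)
    # Weight purely by frequency; a production system would also fold in
    # relevance signals such as position in the story or recency.
    return [term for term, _ in counts.most_common(top_n)]
```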

According to Andrew Hamilton, senior vice president of data and insights at Getty Images, the idea is not to replace photo editors but to increase efficiency in an increasingly fast-moving newsroom while also providing more ideas for longer feature stories. (Panels will be free for premium subscribers to Getty Images.)

“The interesting thing for me is—look, publishers need ad revenue,” he said. “And ad revenue is driven by the eyeballs, right? And so from our perspective, the best way to drive those eyeballs is through compelling imagery, telling really good stories and getting user engagement. And we’re leaving the editor to tell that story. We’re by no means picking the best image or saying, ‘this is the image you have to use.’”

To develop Panels, Getty Images and its partner Vizual.AI, a cloud-based image optimization platform, spent about six months working with a group of 20 of the largest newsrooms. The companies created algorithms that weigh words by signals such as frequency, recency and whether they name known people or places, then matched those terms against caption data from Getty’s repository of 100 million photos.
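As a toy illustration of that matching step, one could score each candidate photo by the overlap between the story’s weighted keywords and the photo’s caption. The Photo structure and the weighting scheme below are assumptions made for the sketch, not Getty’s schema or algorithm.

```python
# Illustrative caption-matching sketch: rank photos by how strongly their
# captions overlap with the story's weighted keywords. The data model and
# weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Photo:
    photo_id: str
    caption: str

def score_photo(photo: Photo, keyword_weights: dict[str, float]) -> float:
    """Sum the weights of story keywords found in the photo's caption."""
    caption = photo.caption.lower()
    return sum(
        weight
        for term, weight in keyword_weights.items()
        if term.lower() in caption
    )

def rank_photos(
    photos: list[Photo],
    keyword_weights: dict[str, float],
    top_n: int = 20,
) -> list[Photo]:
    """Return the highest-scoring candidates for an editor to review."""
    ranked = sorted(
        photos, key=lambda p: score_photo(p, keyword_weights), reverse=True
    )
    return ranked[:top_n]
```

Consistent with Hamilton’s framing, a ranker like this only orders the candidates; the editor still makes the final pick.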

“We’re not trying to be the editor picking that visual story you’re trying to tell here,” Hamilton said. “We’re leaving that control up to them, but it just gives them this array of choice.”

On Tuesday morning in a conference room at the Getty Images headquarters in lower Manhattan, Hamilton showed how he envisions newsrooms using Panels based on the news of the day. It was the first day of the trial of Paul Manafort, President Donald Trump’s former campaign chairman, who has been charged with bank and tax fraud as part of the ongoing investigation into Russian interference in the 2016 U.S. presidential election. Hamilton typed in the names of Manafort and Rick Gates, a former political consultant and lobbyist who has already pleaded guilty to conspiracy against the U.S. as part of the same investigation.

Inserting the text brought up many photos of Manafort as well as some with Gates. He then dropped the keyword “Gates” to a second row to separate the two, then looked for photos of Trump and Manafort together—without typing in the names. Making sure the photos fit the context, however, is where the editors come in.

In another example, Hamilton dragged in a link to a story about former Dallas Cowboys wide receiver Terrell Owens calling Jerry Jones, the team’s owner, a “bully” for comments criticizing players’ choice to kneel during the National Anthem to raise awareness of police shootings of unarmed black men. Keywords that popped up included both men’s names along with topics like “National Anthem” and “NFL,” each with its own set of photos. Photo editors then choose which suggestion is more meaningful to the story: a press conference shot or a photo of Owens on the sideline with his arm around Jones.

“It gives you this starting point, this starting point of the creative process for how you’d want to tell the visual story that goes with that text,” he said. “It’s giving them this flexibility, it’s recommending these keywords, these different combinations that they can play with pretty easily. It’s like a playground, a place for them to go and find the best visual story that they want to tell rather than reading an article, thinking about ‘okay, what story do I want to tell?'”

While this is the first Getty Images product to use AI, it also raises the question of bias that often comes with analyzing text and images by computer. Just last week, a report from the ACLU found that Amazon’s facial recognition software misidentified members of Congress—especially people of color—and wrongly matched their congressional photos with mugshots of people who had been arrested for a crime. (Amazon responded with a blog post saying the ACLU “misinterpreted” the results.)

Hamilton said Getty Images isn’t using image recognition in this way yet. However, he said the company has been working with video surveillance companies and social media platforms—which he wouldn’t name—to help other companies’ software better understand how to identify images accurately. He said the company is mindful of how much accuracy matters, including for brands and licensed imagery, adding that a number of companies have sought Getty’s help in solving their own issues around bias.

“Platforms struggle with this, in general, in their thinking about ethnicity and bias in data,” he said.