Abstract jargon like “deep learning” and “neural network” often fails to capture the mind-boggling complexity of the self-learning systems they describe.
That awe-inspiring quality was part of what Turkish new media artist Refik Anadol sought to convey—in a medium of appropriately grand scale—with his new exhibition. The installation, put on by experiential art organization Artechouse, is an empty two-story room in Lower Manhattan bathed from floor to ceiling in high-resolution, laser-projected video.
Through reality-bending graphics that ripple across the walls, Machine Hallucinations traces Anadol’s own process of training a machine learning system, from data collection to image recognition to a point where the neural network can create its own art, of sorts.
“We use this algorithm to narrate the story,” Anadol told Adweek. “My personal challenge was, ‘How can we learn what machines learn?’ So this was a way of putting a camera in the mind of a machine and finding the memory points and connecting them to create a dream.”
Founded by art advocates Tati Pastukhova and Sandro Kereselidze, Artechouse opened its first exhibit in Washington, D.C., in 2017, followed by another in Miami last year. The group chose Anadol for its latest project based on his previous work with data and AI, which he first began to explore about five years ago.
Machine Hallucinations opened in early September in a 6,000-square-foot space below the Google-owned Chelsea Market. It was set to close in early November, but organizers may keep it running through Dec. 1 due to its popularity.
Anadol first used an algorithm to scrape more than 100 million publicly available photos of New York buildings from social media, search engines and various archives. Faces and other signs of people were scrubbed for privacy and aesthetic reasons.
“I’m super inspired by cities as a whole; they are like living institutions, living entities—I feel like a city literally has veins, has a heart,” Anadol said. “That’s why I’m always focusing on cities more than personal or global data.”
The images dissolve into numbers representing the color of each pixel, then flow through what is essentially a maze of equations with initially random parameters, loosely modeled on the brain's neurons in a structure called a neural network. The system estimates the odds that a given image is a real photo of a New York skyscraper, nudging those parameters after each guess until it becomes a working image-recognition algorithm.
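To make that process concrete, here is a toy sketch in Python with NumPy. It is illustrative only, not Anadol's actual pipeline: the "photos" are tiny synthetic 4x4 grayscale images, and the network is a single neuron whose randomly initialized weights get nudged after each pass until it can tell the two classes apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "photos": 4x4 grayscale images flattened to 16 pixel values.
# One class is brighter on average than the other (a stand-in for
# "real skyscraper photo" vs. "not a skyscraper").
n = 200
bright = rng.normal(0.7, 0.1, size=(n, 16))
dark = rng.normal(0.3, 0.1, size=(n, 16))
X = np.vstack([bright, dark])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = "real", 0 = "not"

# A single "neuron": a weighted sum of the pixel numbers, squashed
# into a probability. The weights start out as random numbers.
w = rng.normal(0, 0.1, size=16)
b = 0.0

def predict(X, w, b):
    # Sigmoid turns the weighted sum into odds that the image is "real".
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Tweak the weights a little after every pass, shrinking the error
# each time -- this is the "training" the exhibit visualizes.
lr = 0.5
for _ in range(300):
    p = predict(X, w, b)
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((predict(X, w, b) > 0.5) == y)
```

After a few hundred passes, the once-random weights reliably separate the two classes; a real network like the one behind the exhibit does the same thing with millions of parameters instead of seventeen.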
At the same time, a second neural network tries to fool the first by feeding it fake images of New York skyscrapers mixed in with the real ones. The attempts begin as random blobs of pixels but morph into increasingly building-like shapes as the second network learns from the first network's feedback.
This whole setup is called a generative adversarial network (GAN), and it's responsible for most AI-generated art, as well as the fabricated images and videos known as deepfakes. The particular model Anadol used was StyleGAN, an open-source model released by chipmaker Nvidia.
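The adversarial back-and-forth can be sketched in a few lines of Python. This is a deliberately stripped-down stand-in, not StyleGAN: the "images" are single numbers (real ones cluster around 4.0), the generator is a one-line formula, and the discriminator is a single neuron, so every name and constant here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: single numbers clustered around 4.0 (stand-ins for photos).
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: turns random noise z into a fake sample, g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: estimates the odds a sample is real, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.02, 64
for _ in range(4000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(size=batch)
    xf = a * z + b
    pr, pf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    grad_w = np.mean(-(1 - pr) * xr + pf * xf)  # gradient of the log-loss
    grad_c = np.mean(-(1 - pr) + pf)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=batch)
    xf = a * z + b
    pf = sigmoid(w * xf + c)
    grad_a = np.mean(-(1 - pf) * w * z)
    grad_b = np.mean(-(1 - pf) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# The trained generator's output: its "imaginary skylines" in miniature.
fakes = a * rng.normal(size=1000) + b
```

The generator's samples start near 0 and drift toward the real cluster at 4.0 purely from the discriminator's feedback, which is the same dynamic that turns random blobs of pixels into building-like shapes in a full-scale GAN.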
Once the second network could successfully deceive the first, Anadol used it to generate imaginary skylines for New York.
Like much of the output of GANs—including a painting that sold at Christie’s last fall for nearly half a million dollars—the synthetic skyscrapers have a surreal quality. Anadol said he hoped to accentuate that otherworldly look in his exhibit.
“I’m always feeling that machine dreams should be very fluid—like in flux, in motion, like water,” Anadol said. “They feel like something that doesn’t fit inside a physical world. It’s very watery and dreamy.”
Anadol isn’t the only one tapping into the creative side of neural networks. Agencies and brands have also begun to experiment with AI generation, including AI-designed chairs, deepfake ad campaigns and AI-created sports logos.