A number of breakthroughs in the past few years have given artificial intelligence newfound creative abilities, with implications ranging from an internet underbelly of deepfakes to a burgeoning AI-generated art scene. But because of the complexity of the neural networks involved and the sometimes-arduous coding and training process, such capabilities can also be hard for non-coders to master.
A web platform called Playform is one of a handful of tools attempting to make AI’s artistic side more accessible to creatives. Like similar programs such as RunwayML, Nvidia’s AI Playground and Deep Dream Generator, Playform adds a slick interface to AI-powered visual editing functions, like stylizing images and transforming simple doodles into photorealistic pictures.
Most recently, the platform added a feature that allows users to turn sketches into art in the style of various famous artists, based on a method outlined in a research paper from Playform founder Ahmed Elgammal, a Rutgers University computer science professor and director of the school’s Art & Artificial Intelligence Lab.
But perhaps most notably, Playform claims to be the only creative machine learning suite to offer AI that can generate wholly original art from a training set of as few as 30 images. Other platforms mostly use pre-trained models or allow for more limited types of training on prefab data, according to Elgammal.
“We are the only platform that allows you to train your own AI model from scratch using your own images,” Elgammal said. “Artists I talk to don’t like [the pre-trained models used by other platforms], they want their own project with their own unique data.”
The platform has already attracted the attention of a couple of brands. T-Mobile is partnering with it on a project to use AI to complete Beethoven’s unfinished tenth symphony for the composer’s 250th birthday this year, and HBO’s Silicon Valley featured a piece of art created with the platform in an episode. Work created with the tool is also beginning to proliferate in museums and galleries around the world.
The technology at play in the tool, and the backbone of most visual creative AI, is a system called a generative adversarial network (GAN). A GAN consists of two neural networks locked in a game of sorts with one another: one, the “discriminator,” learns to distinguish real images from generated ones, while the second, the “generator,” attempts to produce images realistic enough to fool the discriminator.
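To make the adversarial setup concrete, here is a minimal, hypothetical sketch in plain NumPy; it is not Playform's code, just the GAN idea shrunk to one dimension. The "real" data are samples from a normal distribution, the generator is a simple linear map of noise, and the discriminator is a logistic classifier. Each step, the discriminator is nudged to tell real from fake, and the generator is nudged to fool it.

```python
import numpy as np

# Illustrative 1-D GAN sketch (assumed toy setup, not Playform's actual model).
# Real data: samples from N(4, 0.5). Generator: g(z) = a*z + b, z ~ N(0, 1).
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c).

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05         # learning rate for both players

for step in range(2000):
    real = rng.normal(4.0, 0.5, 32)   # batch of real samples
    z = rng.normal(0.0, 1.0, 32)      # latent noise
    fake = a * z + b                  # generator output

    # Discriminator step: ascend E[log d(real)] + E[log(1 - d(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    gw = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    gc = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr * gw
    c += lr * gc

    # Generator step: ascend E[log d(fake)] (non-saturating GAN loss)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    ga = np.mean((1 - d_fake) * w * z)   # chain rule through g(z) = a*z + b
    gb = np.mean((1 - d_fake) * w)
    a += lr * ga
    b += lr * gb

# After training, the generator's output mean (b) has drifted toward the
# real data's mean, because that is the only way to keep fooling d.
```

Real image GANs replace these scalar parameters with deep convolutional networks and compute the same gradients via backpropagation, but the two-player loop is the same.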
The concept of a GAN was first laid out by a then-Google intern in 2014, but GANs have yet to see much commercial use, despite a recent surge in patent applications from major brands that suggests they are beginning to experiment with the technology. Ad agencies have similarly dipped their toes into the field with various GAN-centered campaigns, but some execs say the tech won’t gain major traction until it’s integrated into easy-to-use software.
Another hang-up is the somewhat garbled quality of many of the images AI creates, which can make them less useful for any purpose beyond novelty campaigns and experiments. But Elgammal, who has a unique view on the future of the technology from his post at one of the few academic programs devoted solely to artistic AI, said the image quality is improving rapidly.
“Soon, we will be able to have a GAN that can generate prolific images in a variety of genres with very high resolution and quality,” Elgammal said. “We are close to that.”