Deepfakes Are Much More Prevalent in Porn Than in Politics, Study Finds

Around 96% of actual deepfakes are pornographic, according to a report

While much of the mainstream discussion of so-called “deepfake” footage has focused on its potential to spread misinformation about public figures, it turns out that the far more prevalent use of the technology so far has been fabricated pornography targeting women in the entertainment industry.

That’s according to a report released this week by cybersecurity firm Deeptrace, which found that 96% of the nearly 15,000 deepfake videos it identified online were pornographic. The finding comes as California signed two bills into law this week banning political deepfakes and nonconsensual pornographic deepfakes, respectively, joining a growing number of lawmakers beginning to crack down on the nascent threat of artificial-intelligence-powered fakery.

The researchers also found that the overall number of deepfakes online has doubled in the past seven months, as the tools to make them have become more accessible to the average person. The top four deepfake porn sites alone have garnered more than 134 million video views.

The report outlines a budding cottage industry of free and paid tools and services that enable deepfake creation: open-source neural net scripts hosted on GitHub, web and desktop apps that add a graphical interface to the code and even businesses and individuals who provide customers with bespoke fakes for a price.

Such underground enterprises charge between $3 and $30 per deepfake video—depending on the quality of the job—and $10 per 50 words of voice-cloning tasks. One service portal asked clients for 250 images of the intended target and took about two days to generate a video. All in all, Deeptrace was able to identify 20 forums and websites dedicated to the creation of deepfakes with a total membership of more than 95,000 web users.

The report also attributes the explosion in deepfakes to the simultaneous spread of the neural network model used to make them within the AI research community. Deepfakes are typically created with a deep learning setup called a generative adversarial network (GAN), in which two neural nets are pitted against each other: a generator fabricates images while a discriminator, an image classifier, tries to distinguish them from real ones. Training continues until the discriminator can no longer tell the generator's fakes from genuine images.
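To make the adversarial idea above concrete, here is a deliberately minimal, illustrative sketch (not any actual deepfake tool): a generator, reduced to a simple affine map, learns to mimic a one-dimensional Gaussian "real" distribution, while a logistic-regression discriminator tries to tell its samples apart from the real ones. All names and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 4, std 1.
    return rng.normal(4.0, 1.0, n)

# Generator: maps noise z to g_w * z + g_b (starts far from the real data).
g_w, g_b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(d_w * x + d_b).
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(2000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    fake = g_w * z + g_b
    real = real_batch(n)

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # i.e. get better at telling real samples from fakes.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_dw = np.mean((1 - p_real) * real) - np.mean(p_fake * fake)
    grad_db = np.mean(1 - p_real) - np.mean(p_fake)
    d_w += lr * grad_dw
    d_b += lr * grad_db

    # Generator step: ascend log d(fake) (the non-saturating GAN loss),
    # i.e. produce samples the discriminator scores as real.
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (1 - p_fake) * d_w
    g_w += lr * np.mean(grad_fake * z)
    g_b += lr * np.mean(grad_fake)

# After training, generated samples should have drifted from mean 0
# toward the real distribution's mean of 4.
samples = g_w * rng.normal(0.0, 1.0, 10000) + g_b
print(samples.mean())
```

Real deepfake pipelines use deep convolutional networks and face-specific architectures rather than affine maps, but the core dynamic is the same: each network's improvement forces the other to improve in turn.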

The number of research papers mentioning GANs has ballooned in the past two years, from 469 in 2017 to 1,207 this year (projected through the end of 2019), the report states. Whereas experts had previously expected deepfake targeting to be limited to public figures because of the immense amount of photo and video footage of the subject needed for training, advances have emerged that make it possible to train a deepfake on as little as a single image.

Still, these more sophisticated methods aren't yet widespread, and because ample training footage is still required, 99% of the victims of deepfakes on porn sites were found to be women in the entertainment industry (the remaining 1% were in news media). Scarlett Johansson, for one, has spoken out about AI-generated porn videos using her likeness, telling the Washington Post it's "useless" to fight back.

Deepfake videos found on YouTube were more diverse, with 81% of subjects in the entertainment industry, 12% politicians, 5% news media and 2% business owners. The report also cites instances in Gabon and Malaysia, where political deepfakes had a significant impact, as well as a viral video twisting the likenesses of Jordan Peele and Barack Obama.

Regulators and lawmakers have begun to grapple with how to deal with deepfakes. The House Intelligence Committee held a hearing on the problem in June, and three states—California, Texas and Virginia—now have laws in place banning deepfakes of political or pornographic natures, or both.

“The speed of the developments surrounding deepfakes means this landscape is constantly shifting, with rapidly materializing threats resulting in increased scale and impact,” Deeptrace concludes in its report. “It is essential that we are prepared to face these new challenges.”
