Why You Should Be (Very) Worried About the Tom Cruise Deepfakes

If you didn’t create an image, you have no way to authenticate it


After creating a convincing viral series of Tom Cruise deepfakes on TikTok, VFX specialist Chris Ume told The Verge, “You can’t do it by just pressing a button. That’s important, that’s a message I want to tell people.” He went on to say that each clip took weeks of work using the open-source DeepFaceLab algorithm as well as established video editing tools. The key takeaway, and the title of the article, was “Tom Cruise deepfake creator says public shouldn’t be worried about ‘one-click fakes.’”

I “strenuously object.” You should be extremely worried about deepfakes, the technology that empowers their creation and the exponential speed of innovation.

Ume says he’s not worried

The Verge article concludes:

“Ume, though, says he isn’t too worried about the future. We’ve developed such technology before, and society’s conception of truth has more or less survived. ‘It’s like Photoshop 20 years ago; people didn’t know what photo editing was, and now they know about these fakes,’ he says. As deepfakes become more and more of a staple in TV and movies, people’s expectations will change, as they did for imagery in the age of Photoshop. One thing’s for certain, says Ume, and it’s that the genie can’t be put back in the bottle. ‘Deepfakes are here to stay,’ he says. ‘Everyone believes in it.'”

He’s wrong. Deepfakes tech should have your undivided attention

Ume says, “It’s like Photoshop 20 years ago; people didn’t know what photo editing was, and now they know about these fakes.” Know about them? In 2021, it is safe to assume that every image you see has been manipulated or enhanced or altered in some way (Photoshopped, Instagram filtered, cropped, colorized, compressed, deepfaked, face-swapped, etc.). If you didn’t personally create an image, you have no way to authenticate it.

The future of deepfakes is clear

Imagine a world where our inherent mistrust of still images has to be applied to video and audio as well. A world where you cannot trust that anything you see or hear is real. A world where “fake” sights, sounds and motion are indistinguishable from what you have learned to accept as accurate or “real” two-dimensional representations of your three-dimensional reality. This world of alternate reality is the ultimate destination for deepfakes technology. When should we worry about it? Right now. Immediately. This second. At your earliest convenience. Why?

The accelerating pace of change is almost unmanageable

At this moment, the rate of technological change is the slowest you will ever experience for the rest of your life. And while it may not “feel” like it, the rate of change is accelerating exponentially.

There is plenty of historical data to back up this statement. My favorite writing on the subject is Ray Kurzweil’s 2001 essay titled “The Law of Accelerating Returns,” which begins:

“An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to the singularity—technological change so rapid and profound it represents a rupture in the fabric of human history.”
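Kurzweil’s arithmetic is easy to sanity-check with a toy model. The sketch below is my simplification, not Kurzweil’s actual formula: it assumes the rate of progress simply doubles every decade and holds that doubling interval fixed. Even this conservative version packs roughly 10,000 “year-2000 equivalent” years of progress into a single century; Kurzweil’s larger ~20,000-year figure comes from letting the doubling interval itself shrink over time.

```python
# Toy model of accelerating returns (an illustrative simplification,
# not Kurzweil's exact math): the rate of technological progress
# doubles every decade, measured relative to today's rate.

def progress_years(decades: int = 10, years_per_decade: int = 10) -> int:
    """Total progress over the century, in 'today-rate' equivalent years."""
    total = 0
    for k in range(decades):
        rate = 2 ** k                  # relative rate during decade k
        total += rate * years_per_decade
    return total

print(progress_years())  # 10230 equivalent years with a fixed doubling time
```

Even with the doubling interval held constant, a century of exponential progress is two orders of magnitude more than the linear intuition of “100 years equals 100 years of progress,” which is the point of the passage above.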

Kurzweil’s words are prophetic. We are not alone. In this century, we coexist with nascent AI systems. These relatively young computational partners are evolving quickly. They outperform humans at very specific tasks, including some that humans cannot do at all. Importantly, they are capable of learning independently and, while narrowly purposed, can do certain kinds of cognitive, nonrepetitive work.

I strongly disagree with Ume’s thesis as reported in The Verge, because every reason Ume gave to support his belief that “you can’t do it by just pressing a button” can be reduced to a task that can be taught to a machine. There is nothing about the process of creating deepfakes that won’t be fully automated very soon. How soon? It could be months; it could be a few years. I’ll make a prediction: more than three years, less than five, with a six-month grace period on either side, because it could just as easily be tomorrow or next week.

Worry now and later!

We need to start worrying about how we are going to deal with deepfakes now. We will need to make a fair number of adjustments to live in a world where we cannot trust our eyes or our ears. We will need to deal with the technology’s impact on our politics. People already believe what they want to believe, and this is going to make proving a point much, much harder. How about proof in a court of law? Can we figure out a way to properly protect security camera footage? Get a handle on revenge porn? Work out the subtle differences between identity and identification? There’s a lot of work to do now, because with regard to deepfakes, tomorrow is going to feel exactly like today—until it doesn’t. And contrary to Ume’s statement, a very new world is just one click away.