Former world chess champion Garry Kasparov knows firsthand what it feels like to compete against artificial intelligence.
In 1996 and 1997, Kasparov played two matches against IBM’s Deep Blue supercomputer, winning the first before famously losing the second, and has since written a book about the future of AI and what it means for humanity. (The book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, was released in 2017.)
During South by Southwest 2019 in Austin, Texas, Kasparov was in town for a discussion series hosted by Sony about AI and creativity. Afterward, he sat down with Adweek to talk about those topics, along with the future of data privacy and what it means for artists, governments and everyone in between.
“I called it in 2015 that the Facebook business model—after learning what I knew about Putin and the troll factories and the fake news industry—I said the Facebook business model was like a beehive for a Russian bear,” he said. “Because it was tempting to just eat. And I think we should learn more about it. We’re dealing with a real existential threat, while dictators and quasi-state organizations are going after us and using technology created in the free world to undermine the very foundation of the free world.”
This interview has been edited for brevity and clarity.
Adweek: Do you think AI can ever be truly artistic?
Kasparov: It’s a slippery ground of semantics. Because the answer to your question is of course yes, because artistic means what? Will you find enough people that could enjoy it? Absolutely. I think a majority will not, but it’s like a movie production. What makes it really artistic? Spending a lot of money? I don’t think so. Casablanca was shot in a couple of rooms and it’s one of the best movies ever made.
And you have a few movies that spend hundreds of millions of dollars and they just didn’t last. So you always have people who might enjoy it, and if you have 1 million people who enjoy it and 1 billion people who didn’t and who think it’s garbage, does it make it artistic? I think yes. The moment we talk about something that doesn’t have a very clear scale relation or that’s widely acceptable, the answer is yes. So AI could come up with a movie and maybe someday this movie will win an Oscar because people think ‘Oh, it’s sensational. It’s fine. Let’s do it for the sake of diversity.’ AI can operate in the area of human creativity because it’s so subjective. I think it will not be Shakespeare, but it will always have its audience.
With computers so reliant on creating based on data, can an AI ever be creative from scratch on its own?
I don’t think so, because machines should know the odds. You know, it brings us back to another philosophical issue that was raised by Joseph Weizenbaum, one of the founding fathers of AI, in his 1976 book Computer Power and Human Reason. When he talked about the subtle difference between choosing and deciding, he argued that deciding was computational, because you always go deep, deep, deep, down to the bottom and find “I was told,” while choosing is “I want it.” And I think creativity, human creativity, is “I want it.” There’s no reason for that; it’s just “I did it.” I always like quoting Picasso, who said machines are useless because they only give you answers.
Creativity starts with a question; questions are the beginning. So if you want to invent or reinvent yourself as an artist, you have to ask questions. And machines can ask questions, but I’m not sure they know which questions are relevant. So, again, people say it’s ‘you say, they say,’ but it seems human creativity is where knowing the odds is counterproductive. Creativity doesn’t mean that you’ll be successful. You can be creative and fail, and if you knew the odds, you’d never do that. Creativity means you don’t know the outcome. It’s something new, something disruptive. It may fail. And a machine will never do that because the odds will never be in its favor.
You’ve talked in the past about how AI will help us to be human. What does that look like to have computers help us know our own humanity?
It’s not about humanity. Most of the jobs we do today are repetitive jobs. But people think ‘Oh, it’s a repetitive job, so it must be physical.’ No, there are so many intellectual jobs, white collar jobs, that are repetitive. … So it’ll force us to do something that requires pure human creativity. I think it’ll be a great push, because we’ll start taking on risky things. In the last few decades, we always looked for some lame choices, avoiding risks, being complacent—abandoning things like space exploration or deep sea exploration because it’s too risky. I think AI will push us back to do things that will open up the unknown—and that’s good.
It seems like art often comes from existential crisis, so if machines don’t have emotion…
One example I sometimes bring in to support my argument: let’s say you have AI, and it’s running your finances and is connected to your e-wallet. So it has all of the data—your salary, your bonus, your mortgage, your expenses and everything—and you’re at the store, and you’re looking for a gift. And the AI, let’s say in the voice of Siri, says “Ah ah ah ah, it’s off limits and is not recommended.” Does the machine know the odds? Absolutely. Is it good advice? Yes, because it thinks about your well-being and knows exactly how you should spend your money.
Now, let’s make a slight change. You’re in the same store and looking for the same expensive gift, and your son or daughter is next to you and it’s a birthday gift. Does it change anything about the AI’s perspective? No. For you, it changes everything. By the way, it might not be a happy family; it could be a broken family and it could be weekend visitations or whatever. Knowing the odds will not give you the right answer. It’s good that you know the odds; it helps you to make a more conscious decision because you know the consequences. But sometimes an emotional response is more important because there’s no value on your relationship with these kids. … How can you explain it to AI that this unique situation beats all the odds that the machine knows perfectly?
How does this relationship with data change with the ongoing issue about data privacy? What do you think about that these days?
I also wear a hat as the chairman of the Human Rights Foundation, and, for me, it’s very painful to see Google, Microsoft, Facebook treat people differently depending on geography. People who are lucky to be born in the free world—in Europe, the United States, Canada, Japan, Australia—they are protected by the law of the land, and there are still problems. I’m not happy about anything that’s happening or that’s not happening. But I always tell people there’s a difference between Google data collection and KGB data collection. In the former’s case, you end up with tons of unwanted advertising. In the latter, it’s a matter of life and death.
The fact is these corporations are so stubborn in resisting the pressure of releasing data to the states, so they do it behind closed doors. That’s another story. But at least Apple was adamant about not opening the iPhones of terrorists. The same corporations will give up everything about their customers in Russia, China or Turkey, endangering millions and millions of people. So I think one of the things that must be imposed on them is that they must treat all of their customers with the same respect, no matter where they live. And also, I think we have to recognize the fact that if data is being produced, it will be collected. So when people complain about it, I ask them whether they bought an Alexa or downloaded facial recognition on their iPhone. Data is there. And we should simply recognize that we are trading our privacy for convenience.
Privacy is the currency we pay. It looks like we get it for free, but, no, we pay for it with our privacy. But we should look at the benefits that we receive in exchange, and we as citizens of the free world should also demand that our governments be far more punitive in dealing with corporations that are very loose with this data. I think Facebook was wrongly let off the hook after what happened with the elections and Cambridge Analytica. And the problem is that even after that happened and they confessed their wrongdoings, they haven’t fixed it.
So then do you think regulations need to happen at an international level?
You cannot do international regulations because Russia and China don’t give a damn about the regulations, but these companies are paying taxes in America or in Europe. I think they should be pushed to follow the same regulations and not change their behavior depending on the law of the land, whether they operate in China or Russia. Clearly, these nondemocratic countries are trying to bend the rules in their favor, and they want to use these multinational giants to spy on their citizens.
There’s still a lot of misunderstanding with people about AI, but, as someone who’s well versed in this, what are the big questions you have right now?
The short-term concern is not AI, per se. It’s not about killer robots or The Matrix. The short-term concern is bad people using AI to hurt us. Humans still have a monopoly on evil, and that’s where we should be concentrating. We have so many threats knocking on our door right now, yet people are talking about “Oh, general AI in 25 or 30 years or 40 years, quantum computing, Terminator, Skynet, ahhhhh.” Let our kids worry about that. But right now we have challenges that are coming from Russia, China, Iran and other countries that have access to this technology but who don’t respect the same rules or value human life the way we do. And we just have to recognize that in a globalized world, there are globalized threats, and how do we deal with them?
We don’t have the sense of necessity. We know these threats exist, but it’s a global challenge, and I don’t think the free world is yet up to this challenge. We don’t have even proper debates on how to respond. It’s the 21st century. There’s no iron curtain. It doesn’t exist anymore. And we have problems within our society that are being enhanced by the use of our own technology by bad actors.
One last question: What has playing chess—both against humans and also against computers—taught you about creativity?
What I learned from my own experience—mostly useful, sometimes painful—is that we’re dealing with a closed system. And every game is a closed system. Even Texas Hold ’em is a closed system. The system is based on rules created by humans, where we have a framework, or what I could also refer to as narrow intelligence. We should also recognize that machines will do better. They will outperform us. There’s no tragedy in that. I think the best way for us to learn is to look at data generated by machines within this framework and understand how we can use these experiences to apply them elsewhere.
I think the human role, and what I believe is the future of creativity, is to connect these different elements of narrow intelligence into one big picture. Contrary to what you have heard from some other experts, I don’t think machines will be able to transfer knowledge from a closed system of narrow intelligence to general intelligence in an open-ended system. That will be the human role. It sounds easy, but it will be great art. And those who are able to find exactly what a machine needs to address a specific issue will be the best intellectuals, something similar to a Formula One pilot.