Forget what you’ve heard about automated paradises or dystopian robot economies. Avi Goldfarb argues that the biggest game-changer of the current wave of artificial intelligence technology will be its rote ability to predict outcomes.
Goldfarb, a marketing professor at the University of Toronto’s Rotman School of Management, is the co-author of a new book called Prediction Machines, which breaks down the far-reaching effects of AI’s predictive capabilities in basic economic terms. Tech luminaries and big-name economists have praised the book for its pragmatic approach to the technology and its breezy readability.
But just because Goldfarb is skeptical of some of the more fantastical visions of AI’s future doesn’t mean he doubts its transformative potential. On the contrary, Goldfarb and his co-authors argue that AI’s reduction of market uncertainty will yield efficiency benefits that will ripple through the entire economy.
Goldfarb spoke with Adweek about the myths around AI, what the technology means for the ad industry and the relevance of the 2004 film I, Robot.
The following interview has been edited for length and clarity.
Adweek: What is the biggest misconception about AI?
Avi Goldfarb: In some sense, if you read—maybe not Adweek, but the rest of the press—you'd think that this is true intelligence. So the optimistic view is something like C-3PO, where the machine can really do everything that a human could do except for one thing that humans don't really do, which is the machine listens to humans … Or the pessimistic view is that machines are going to take over the world, and that's scary true intelligence like The Terminator or The Matrix or something like that.
That’s not the current technology; that has nothing to do with the reason we are talking about AI in 2018. The reason we’re talking about AI today is because of its prediction technology. A particular aspect of computer science called machine learning has gotten much, much better, and that’s prediction technology—it’s the process of filling in missing information. What prediction technology does is it helps reduce uncertainty. That’s really important to business and society, but it’s not true intelligence in the way we fear in our imaginations. It is transformative, but we’re not moving toward The Terminator.
In your book, you talk about how that prediction technology will have a domino effect over the rest of the economy. How does that work?
In economics, you learn on day one that the demand curve slopes downward—what does that mean? When the price of something falls we do more of it. So if coffee's cheap we buy more coffee; when prediction's cheap we do more prediction. The second thing you learn is that when coffee's cheap, you buy less tea—it's a substitute. And so when machine prediction is cheap we're going to do more machine prediction, and the aspects of human jobs that are prediction are going to be increasingly done by machine—those are substitutes.
And then the third and most hopeful aspect of this is that just as when coffee gets cheap, we buy more cream and sugar—those are complements—the big question is what are the cream and sugar for prediction? What's becoming more valuable? Broadly speaking, there are three categories. There's action, which is, there's no point in making a prediction if you can't do something with it. There's … data, which essentially feeds the prediction machine, so data's going to become more valuable. And the third, the key role for humans going forward, is what we call judgment. And that's essentially knowing which predictions to make and knowing what to do when we have them.
And to understand what I mean by judgment, [there’s a scene in the movie I, Robot] where Will Smith and this little girl are driving, and they go over a bridge and end up sinking into a river. A robot comes along and rescues Will Smith and not the girl, and that’s why he hates robots. The robot predicted that he’d have a 45 percent chance of survival and the girl would only have an 11 percent chance. But a human being would have known that 11 percent was more than enough. So that’s judgment. When you use a prediction machine, you actually need to decide what you care about, what you value. That judgment is inherently human.
You’ve written before about how important it is to be cognizant of people’s sense of privacy in digital advertising. What is the best way for the industry to implement AI without creeping people out?
There is an optimal level of privacy protection—both from a society point of view but especially when you think about customer strategy. If you are too permissive with data, you're not going to have any customers because they're not going to trust you. But if you're too restrictive, then you can't build an AI. For every company, in every context, there's a sweet spot. That sweet spot depends on how private the information you're collecting is, and what benefits the consumers are getting from you using their data.
So if you're in health or financial services, then that data's relatively private compared to, like, cola … The other aspect is how much better you are making your customers' or patients' lives—health data's really private, but if you're going to save somebody's life, then they're going to give you a lot of data. Whereas information about whether I buy Coke or Pepsi may not be that private, you may not really care, but at the end of the day, my life isn't getting that much better because you're using that data. So both aspects of the tradeoff hold there too.