Chasing Deep Blue

When I started in this business 40 years ago, at A.C. Nielsen, we counted everything on computer punch cards and Friden calculators. Audiences for national and local television, radio and magazines, and the movement of food and drugs, were all checked and then double-checked with a slide rule. Work was done in a large, open bullpen of 40 people sitting at side-by-side desks. I still have friends from those days, and we fondly remember speculating about how we would ever be able to measure advertising’s effect on sales and when a computer would beat the world’s greatest chess player. Well, Deep Blue polished off Garry Kasparov in 1997, but we still haven’t figured out the advertising-to-sales relationship. Relationships can be difficult.

At a recent conference held by the Association of National Advertisers on the subject of marketing return on investment, much time was devoted to how frustrated CEOs have become with the marketing function’s lack of accountability. Gordon Wade of the EMM Group, who heads the 20-company ANA team working on the issue, said that more than 85 percent of CEOs and COOs do not come from a marketing background, so they believe that the discipline lacks processes and measurement. In other words, they see the discipline as undisciplined.

Given corporate malfeasance, Sarbanes-Oxley and the incessant pressure for better performance on Wall Street, it is understandable that marketing directors are under increasing pressure to prove that advertising works by yielding a fair return on investment. Since their average tenure is two years, they need to prove it quickly. For my money, given our industry’s dependence on questionable numbers to begin with, I find it troubling that Deep Blue succeeded eight years ago, yet we still have not.

Here’s my problem. As an industry, we spend $50 billion on television and depend on Nielsen numbers to prove that we made smart choices. As you now know, I started my career at Nielsen and have nothing against the company, but everybody knows that TV ratings are fragile. No matter how carefully Nielsen draws its sample and monitors the outcome, the accuracy of ratings can be not just questionable but downright nonexistent. A rating for a program in Louisville, Ky., against men 18-24 in late fringe can carry an error level that actually exceeds the rating itself. Let me say that again: The statistical error on a rating can actually exceed the rating itself.
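To make that arithmetic concrete, here is a rough back-of-envelope sketch. The rating and the sample size below are hypothetical numbers chosen only for illustration, not actual Nielsen figures, and the calculation uses the textbook standard error of a simple random-sample proportion rather than Nielsen’s own methodology.

```python
import math

# Hypothetical figures, chosen only to illustrate the point:
rating = 0.005   # a 0.5 rating: an estimated 0.5% of the demo is watching
n = 100          # assumed in-tab sample for that demo, daypart and market

# Textbook standard error of a simple random-sample proportion: sqrt(p(1-p)/n)
se = math.sqrt(rating * (1 - rating) / n)

print(f"Rating:         {rating * 100:.2f} points")  # 0.50 points
print(f"Standard error: {se * 100:.2f} points")      # about 0.71 points
```

With a rating that small and a demo sample that thin, one standard error is already larger than the rating itself, which is exactly the Louisville situation described above.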

Now, there are statistical rules and there are business rules, and the two are quite different. No one wants to pay Nielsen more to bolster and polish its sample enough to shrink those error levels, so we continue to agree that this is the best we can do. Yes, there are dissenters, oversight committees and congressional hearings. Then there is the everyday business necessity of buying and selling time on television programs and measuring how intelligently we are doing that. So Nielsen is very usable to “prove” that we are making smart decisions.

Here in Adweek, Steve McClellan recently described the Nielsen and Arbitron ROI joint venture (with more than just encouragement from Procter & Gamble) known as Project Apollo. Despite the high price tag, the delays and scaling back of the project and the heavy burden it will place on respondents, the same guys who make a living skating on the thin ice of ratings statistics are willing to put their reputations on the line to advance the science of ROI.

No one has yet codified the chain of marketing events that could serve to measure ROI. Each link in that chain, like brand sales, share of market, copy awareness and persuasion, media efficiencies, brand history, competitive pressures and so on, has its own set of metrics, which makes the ROI problem that much harder to solve.

Another recent survey of ANA members showed that half of those polled thought measurement was the hardest part of marketing and that they were dissatisfied with the current tools. In fact, about the same number felt that simply defining ROI was the first barrier: if you don’t know what you’re measuring, how can you measure it? And while nearly two-thirds of respondents thought that measuring marketing’s impact on sales was important, only 2 percent rated themselves at the highest level of understanding and process needed to measure ROI effectively.

I think that there is a research attitude and a business attitude toward solving problems. Metaphorically, researchers can seem to move at a glacial rate from no to maybe, while business practitioners can move at the speed of light, looking for approximations and coincidences to call fact. I have high hopes for the ANA’s 20-company task force and their work on this subject. Ultimately, though, we are going to have to accept the scientific method and create an inspired, testable hypothesis, test it, use it if it works and refine it as time passes. We really need to put a stake in the ground now, because with every passing year I can hear Deep Blue laughing a little louder.