Crisis Management

The current economic crisis has rocked the media industry. But there is another looming crisis that, if not addressed now, could have a detrimental effect on the media business long after the current recession fades. It’s the crisis in measurement. You can’t sell what you can’t measure, and unfortunately our measurement systems are not keeping up with technology and consumer behavior.
This isn’t just about television — the problem extends across all media platforms. And it’s not about a lack of data. We are virtually drowning in data. A couple of years ago, Nielsen delivered a single TV rating data stream. Today, Nielsen routinely delivers over two dozen, and countless more streams are available for any client willing to pull the data. Moreover, as set-top box second-by-second data evolves, it will produce a staggering amount of new data. The same is true of the growing volume of Internet and mobile metrics. It’s not the amount of data that is the problem; it’s the quality and utility.
Currently a number of data suppliers are offering new metrics for all platforms. Most of the significant activity is centered on the TV set-top box. On paper, STB data looks great. It’s based on hundreds of thousands, or even millions, of homes, so it’s vastly larger than Nielsen’s national sample. It’s precise, down to the exact second; it’s passive, continuous measurement; and it’s anonymous, so we can say goodbye to shrinking response rates, which have fallen as low as 17 percent in some markets measured by Nielsen. That’s the good news.
The bad news is that most STB data is riddled with challenges. These range from capabilities we took for granted, like the ability to know when the TV is on or off, to requirements like reporting time-shifted tuning and household demographics. Even the ostensible second-by-second precision does not really exist, due to technical issues with the box. There are no easy answers to these issues.
For starters, the box is always on. How the STB data provider determines the sequence of clicks that means the TV is on or off is a well-guarded secret. Since none of the vendors will divulge their special recipe, the STB data from any one source (such as Charter L.A., Dish or TiVo, to name a few) looks different depending on who processes it. If the industry depends on each research provider to decide the “edit rules,” we will never achieve a standard metric.
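As an illustration only: one common class of edit rule (not any vendor’s actual recipe, which remains proprietary) caps credited tuning after a fixed period with no remote-control activity, on the theory that the viewer has walked away while the box stays on. The sketch below assumes hypothetical click records of timestamp and channel; the four-hour cap is an arbitrary choice, and real vendors’ rules differ in exactly the ways the article describes.

```python
from datetime import datetime, timedelta

# Hypothetical edit rule: since the box is always "on," credit a tuning
# session only until the next click, or until CAP elapses with no
# further activity — after that, assume the TV is off.
# Real STB processors use proprietary rules that differ from this sketch.
CAP = timedelta(hours=4)

def cap_sessions(events):
    """events: list of (timestamp, channel) tuples, sorted by time.
    Returns credited viewing spans as (start, end, channel) tuples."""
    spans = []
    for i, (start, channel) in enumerate(events):
        if i + 1 < len(events):
            # Session ends at the next click, or at the cap, whichever
            # comes first.
            end = min(events[i + 1][0], start + CAP)
        else:
            # Last click of the day: credit at most CAP of viewing.
            end = start + CAP
        spans.append((start, end, channel))
    return spans

clicks = [
    (datetime(2009, 5, 1, 19, 0), "ESPN"),
    (datetime(2009, 5, 1, 20, 30), "CNN"),  # last click of the night
]
for start, end, channel in cap_sessions(clicks):
    print(channel, start, "->", end)
```

Because each processor picks its own cap and its own definition of "activity," the same raw click stream yields different audience estimates from different vendors — which is precisely the standardization problem.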
The problems of standardization go beyond TV. There is currently no industry-wide standard for the “tag” that must be placed on all Web pages and video. Content publishers must place multiple proprietary tags on every page or they won’t be included in a particular research vendor’s report. Another issue is that, unlike Nielsen TV ratings, where the same data can be used for both planning and currency, there are different digital metrics for planning, traffic and currency. Wouldn’t it be great for the industry to have an agreed-upon metric that could serve all of these roles, and a single universal tag that offers all providers access to publishers’ Web sites?

And finally, the “holy grail” of research: the ability to measure cross-platform behavior. That’s still a distant dream, but one we must pursue if we are ever to exploit the three screens. It’s not that efforts aren’t being made; it’s that there is no industry consensus on what we want from research providers. Part of the problem is that the industry hasn’t clearly articulated the “research ask.” I am sympathetic to the plight of vendors who are talking to everyone and listening to their requirements, but finding it hard to translate all this client feedback into a meaningful, actionable plan. The situation is complicated by a recession that is taking a toll on fledgling research companies. It would be unfortunate if, after emerging from the recession, we lost the opportunity to develop new metrics because the companies developing them didn’t survive.