IAB Rules for Measuring Effectiveness of Digital Advertising | Adweek

IAB to Effectiveness Researchers: You’re Doing It Wrong

Industry group outlines new best practices

It’s no secret that digital ad measurement routinely results in a messy, convoluted data glut. The Interactive Advertising Bureau wants to change that.

This morning the IAB released its guide to best practices for conducting online ad effectiveness research. In plain English, it’s a set of rules for the industry to follow when measuring whether an online ad campaign actually worked.

The problem isn’t a simple one. Effectiveness research, which typically consists of analyzing the results of audience surveys, is plagued by “serious methodological limitations and irresponsible study management,” says Marissa Gluck, founder of Radar Research and author of the guide. Media buyers within agencies aren’t properly trained; they request effectiveness studies from publishers without regard to their value or appropriateness, says Sherrill Mane, svp of industry services for the IAB.

Survey sizes are too small, publishers are given last-minute data requests, and agency people aren’t properly trained to recognize sloppy data. All of this adds up to no one really knowing how effective online ads actually are.

“Everyone years ago decided it was OK if it were merely good enough,” says Mane. As digital media matures, so should the industry’s attitude. Or so the thinking goes.

The consequences are dire, the IAB warns. While Mane won’t draw a direct line between bad research and the slow flow of ad dollars into digital, she will say that advertisers’ questions about the quality of digital ad research have led to a “lack of confidence” in online advertising’s ability to build brands.

Of course, there is good research out there. Custom, proprietary effectiveness studies conducted by third-party vendors often yield more reliable, accurate results, Mane says. But that’s not the most common practice, in part because it’s more expensive for advertisers.

So what can be done? Standardization, for one. Mane notes that the best practices guide is merely an “interim” band-aid for the problem while more scientific methods are developed. Until those emerge, IAB urges vendors (the middlemen), publishers (those serving the ads and providing surveys), and advertisers (the ad buyers and evaluators of the surveys) to raise the bar on the quality of data.

In its guide, available for download at IAB.net, the industry advocate suggests standards such as a minimum survey size (200 respondents), a minimum number of impressions served before measuring an ad’s effectiveness (15 million), and surveys of 20 to 25 questions that take less than 10 minutes to complete. Do it for the data.