Efforts to root out ad fraud have clearly failed, as agencies and media partners continue to face legal trouble. Brands are now on high alert, and the threat is especially acute for public companies, which have a shareholder responsibility to resolve these issues. That’s on top of previous legal actions and recent alarm bells sounded by the FBI and Congress. The failure is rooted in an overreliance on multi-touch attribution (MTA) systems.
In fact, many efforts to address fraud directly, such as ad-pricing transparency, viewability controls and publisher whitelists, have been partly undermined by an attribution system that induces fraudulent behavior.
Previous fraud revelations have resulted in promises to improve attribution, but media partners know that attribution is fundamentally a correlation game, not a causality game. And they have become skilled at rigging it.
MTA studies gained prominence as a way to allocate media budgets to maximize conversions in direct-response campaigns. They assign credit by reusing models from other social sciences that have similar issues attributing credit for outcomes based on observational data (e.g., health or education). Those who created the models keenly understand they are not causal and will often include caveats and/or cautions. However, that’s not the case with ad tech, where correlation problems are largely ignored.
Media partners know they must look good in advertisers’ MTA studies, and they know how MTA studies work: by revealing which media partners’ ad exposures and clicks are most correlated with conversions. The shortcut is to predict, better than other platforms, who will convert regardless of ads, and then saturate those most likely to convert with impressions and clicks.
Saturating likely converters is easiest on inexpensive, non-viewable and fraudulent inventory. Once a media partner’s goal is credit for conversions that would happen anyway, rather than driving incremental conversions, ads don’t need to be viewable or displayed in an appropriate context. They just need to show up in a log as served to or clicked by the converted consumer.
However, vigilant marketers can root out fraud by changing the game their partners play: relying less on attribution, leaning more on incrementality and flipping the lopsided infrastructure investment.
The ad-tech industry has massively invested in the correlation game and its attribution players—Google bought Adometry, Oath bought Convertro, Nielsen bought Visual IQ, Neustar bought MarketShare—but the investment in incrementality infrastructure has been relatively small. Platforms have introduced some incrementality studies, but the broader shift hasn’t gone far enough. Why does incrementality work?
Incrementality relies upon a control group not exposed to a campaign to demonstrate the true causal effect of media on outcomes. Media partners can only game incrementality by faking the outcome itself, and the true outcomes that drive a business, such as actual purchases or lifts in brand attitudes, are far more difficult to manipulate.
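The arithmetic behind an incrementality study is simple: compare the conversion rate of users who could see the campaign against a randomized holdout that couldn’t, and only the difference counts as ad-driven. A minimal sketch with hypothetical numbers (the counts below are invented for illustration):

```python
# Hypothetical results from a randomized incrementality test.
# The treatment group was eligible to see the campaign; the
# control (holdout) group was withheld from it entirely.
treatment_users = 100_000
treatment_conversions = 2_300   # 2.3% converted
control_users = 100_000
control_conversions = 2_000     # 2.0% converted anyway, with no ads

treatment_rate = treatment_conversions / treatment_users
control_rate = control_conversions / control_users

# Incremental conversions: only those that would NOT have happened
# without the campaign. Everything else is credit for conversions
# that were going to occur regardless of ads.
incremental_rate = treatment_rate - control_rate
incremental_conversions = incremental_rate * treatment_users
relative_lift = incremental_rate / control_rate

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Relative lift over baseline: {relative_lift:.1%}")
```

Note how different this is from MTA: an attribution model would happily credit the campaign with all 2,300 conversions, while the holdout shows only 300 were incremental.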
Brands can drive greater accountability by demanding incrementality across their media plans and by rejecting infeasibility claims from media partners and measurement experts.
The central objection brands hear is that multichannel incrementality measurement isn’t achievable, and media partners can only measure incremental lift from a single platform (e.g., a DSP or social network incrementality study).
In reality, there are two paths forward.
The first path is for marketers to force media partners to share control groups. There must be a non-exposed group to measure incrementality, and incrementality studies administered by social networks or DSPs use randomized control groups within their platform. In other industries reliant upon randomized testing, like pharma, companies can share placebo groups to quickly test drugs. Big pharma didn’t voluntarily share controls; it was demanded by patient advocacy groups and regulators. Similarly, ad-tech partners could share control groups (there’s no technological barrier), but marketers will need to specifically call on their media partners to share them for each campaign. With enough demands from marketers to share control groups, media partners will begin to coordinate their incrementality tests.
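Mechanically, sharing control groups is a data-coordination problem, not a technological one. One way to see it: if each platform discloses which users it held out, the users held out by every platform form a clean cross-plan control. A toy sketch with hypothetical user IDs (all values invented for illustration):

```python
# Hypothetical per-platform randomized holdouts: user IDs each
# platform withheld from the advertiser's campaign.
holdout_social_network = {101, 102, 103, 104, 105}
holdout_video_platform = {103, 104, 105, 106, 107}
holdout_dsp = {102, 103, 105, 108}

# Users withheld from EVERY platform saw none of the campaign,
# so they can serve as a control group for the whole media plan,
# not just a single platform's slice of it.
cross_plan_control = (
    holdout_social_network & holdout_video_platform & holdout_dsp
)
print(sorted(cross_plan_control))
```

In practice the intersection shrinks as platforms are added, which is why coordinated (rather than independently drawn) holdouts are what marketers should be demanding.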
A second path for marketers to achieve incrementality across their media plan is to identify people not exposed to a campaign and build what statisticians call “synthetic control groups.” The technique is commonly used for measuring brand, sales or visit lift and functions the same way as a typical control group: synthetic controls are in-target and were not exposed to a campaign (though they could have been) for reasons unrelated to the outcome being measured.
This approach wouldn’t require media partners to share and coordinate control groups, because the control group is modeled from the traits of those exposed and not exposed to a campaign. Marketers wanting to create synthetic controls need familiarity with the sophisticated statistical methods known as causal machine learning, developed by social-science researchers who have moved beyond attribution. Marketing leaders also need to know their measurement teams can use causal inference with synthetic controls to measure incremental lift across media plans.
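The core idea of a matched synthetic control can be illustrated without any causal-ML library: pair each exposed user with the unexposed user whose traits are most similar, then compare conversion rates between the exposed group and its matched twins. A deliberately tiny sketch, with a single hypothetical trait (a purchase-propensity score) standing in for the richer feature sets a real causal-ML model would use; all numbers are invented:

```python
# Toy data: (propensity_score, converted) pairs. In practice the
# score would come from a model over many user traits; these
# values are hypothetical.
exposed = [(0.9, 1), (0.8, 1), (0.7, 0), (0.4, 1), (0.3, 0)]
unexposed = [(0.95, 1), (0.75, 0), (0.72, 1), (0.45, 0), (0.28, 0), (0.1, 0)]

# Build the synthetic control: for each exposed user, find the
# unexposed user with the closest traits. Matching on traits is
# what makes the comparison apples-to-apples.
synthetic_control = [
    min(unexposed, key=lambda u: abs(u[0] - score))
    for score, _ in exposed
]

exposed_rate = sum(c for _, c in exposed) / len(exposed)
control_rate = sum(c for _, c in synthetic_control) / len(synthetic_control)
lift = exposed_rate - control_rate

print(f"Exposed conversion rate: {exposed_rate:.0%}")
print(f"Matched control rate:    {control_rate:.0%}")
print(f"Incremental lift:        {lift:+.0%}")
```

Real implementations replace the nearest-neighbor step with proper causal-inference estimators and handle confounding far more carefully, but the logic is the same: the counterfactual comes from similar people who weren’t exposed, not from a correlation between exposure logs and conversions.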
There are still real challenges to measuring incremental lift across media plans, but they’re no greater than the challenges facing MTA, which struggles with cookie dependency and multicollinearity of its own. It’s time for CMOs to set the agenda for more incrementality, which will lead to lower ad fraud.