It’s been several years since brands and agencies were made aware of the size of digital advertising’s fraud problem. Despite increased scrutiny from ad buyers and industry-wide efforts to filter out nonhuman traffic, fraud persists. While many are aware of the problem, very few have a deep knowledge of fraud, much less of what to do about it beyond avoiding it at all costs.
One major reason fraud is still misunderstood is that fraudsters are constantly finding new ways to sidestep detection mechanisms, creating an arms race between them and the fraud-prevention tools deployed on advertisers’ behalf. While the underlying techniques haven’t necessarily changed, the way fraud is executed is constantly evolving: it can take different forms and look quite different from case to case, requiring new ways of thinking.
Here’s how brands need to think about fraud so that they can not only keep up with the changing landscape but actually get one step ahead of the fraudsters.
Part of the issue with fighting fraud is perception. As difficult as it is to visualize something that happens in a digital environment, many members of the advertising industry have a fixed image in their minds of what a fraud operation looks like. Many probably picture a server rack in a data center, or a bank of phones laid out in an empty office space, constantly pinging websites to generate nonhuman traffic and the resulting ad impressions.
While that’s how some fraud schemes still operate, it’s not how they all work. For all of the money a fraud ring can bring in, fraudsters can be incredibly sloppy. Operations like MethBot have created this perception of fraud schemes being conducted in sterile office spaces by smart individuals, possibly with affiliations to organized crime. But sometimes fraud is as simple as one person trying to drive more traffic to a series of websites to inflate ad revenue, where the sites are simply clones of the same site hosted at different URLs.
What’s most important for advertisers isn’t what fraud looks like in the real world but what it looks like in the digital realm, where cookie activity is the biggest giveaway of suspect activity.
Recently, we’ve seen suspect activity take on remarkably different shapes. In one case, a dramatic increase in suspect devices was the result of nearly 1 million cookies all visiting the same set of sites. Some of the sites in the set were high-quality, well-trafficked news sites that you’d expect real people to visit. Others were poor-quality and not exactly brand safe. In all likelihood, the traffic going to the quality sites was an attempt to mask the suspect behavior, but it ultimately failed because there’s no way that 1 million users would have such similar browsing habits every single day. This “cover your tracks” approach is a sign of a sophisticated operation.
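The tell described above can be checked programmatically. The sketch below is a hypothetical illustration (not any vendor's actual detection logic, and deliberately unoptimized): it groups cookies whose daily browsing histories overlap almost completely, on the reasoning that a large cohort of "users" with near-identical habits is far more likely to be a bot pool than a coincidence. The function names, the 0.9 similarity threshold, and the cohort-size cutoff are all illustrative assumptions.

```python
def jaccard(a, b):
    """Overlap between two sets of visited domains (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def flag_suspect_cohorts(histories, threshold=0.9, min_cohort=100):
    """Group cookies whose daily browsing histories are nearly identical.

    `histories` maps a cookie ID to the set of domains it visited that day.
    Real users rarely share near-identical histories at scale, so a large
    cohort above `threshold` similarity is a strong fraud signal -- even
    when some of the shared sites are legitimate, high-quality publishers.
    Pairwise comparison is O(n^2); a real system would use sketching
    (e.g. MinHash) to scale, but this shows the idea.
    """
    cohorts = []
    seen = set()
    ids = list(histories)
    for cid in ids:
        if cid in seen:
            continue
        cohort = [cid]
        for other in ids:
            if other != cid and other not in seen:
                if jaccard(histories[cid], histories[other]) >= threshold:
                    cohort.append(other)
        if len(cohort) >= min_cohort:
            seen.update(cohort)
            cohorts.append(cohort)
    return cohorts
```

Note that the presence of reputable news sites in a cohort's shared history does not clear it; as in the case above, that mix is exactly what a "cover your tracks" operation produces.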
It’s also important to know that fraudulent traffic can come from devices regularly used by real people; the consumers simply don’t know that their device is visiting sites against their will. One trend we’re seeing is browsers infected by some kind of virus or malware that silently drives a cookie to sites where fraudsters can profit. These schemes are often smaller in scale than the larger operations, and the goal is likely to drive as much traffic as possible to the set of sites, bringing in as much revenue as possible before getting caught.
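One hypothetical signal for separating malware-driven visits from the device owner's real browsing is timing: humans browse in irregular bursts, while malware pinging sites on a timer produces near-constant gaps between visits. The sketch below is an illustrative assumption of how such a check could work (the function name and the 0.1 regularity cutoff are invented for this example), not a production detector.

```python
from statistics import mean, pstdev

def looks_machine_timed(timestamps, max_cv=0.1, min_visits=10):
    """Flag a sequence of visit timestamps (in seconds, ascending) whose
    spacing is suspiciously regular.

    The coefficient of variation (stdev / mean of the gaps between visits)
    is near 0 for timer-driven traffic and much larger for human browsing,
    which clusters into bursts separated by long, irregular pauses.
    """
    if len(timestamps) < min_visits:
        return False  # not enough data to judge either way
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg <= max_cv
```

In practice a signal like this would be combined with others (destination sites, referrer chains, time of day) before any traffic from the device is discounted, since the same device also carries the owner's legitimate activity.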
Tackling the problem
Both of these forms of fraud raise questions. How do you handle a legitimate site with high-quality content when it appears to be the target of nonhuman traffic? How do you separate a real consumer’s activity from nonhuman traffic coming from the same device?