Ad fraud is often described as a cat-and-mouse game, with networks’ best efforts at detection a few short steps behind bad actors’ best efforts at defrauding customers and advertisers without getting caught. (And those mice are eating a lot of cheese. Last year, the ad verification company Adloox predicted ad fraud could cost advertisers $16.4 billion in 2017—or about 20.5 percent of total global digital ad spend.)
As social networks and advertising platforms increasingly deploy artificial intelligence to model customer behavior and improve campaign performance, they’re also using it to stop ad fraud. With machine learning and increasingly sophisticated algorithms in the field, where does the cat-and-mouse game stand?
Consider Facebook, a leader in both advertising and in sophisticated research and deployment of AI. Last fall, the company expanded the team fighting ad fraud to some 1,000 workers, including engineers, managers, human reviewers, and local language and culture experts, pairing them with multiple forms of algorithmic detection to catch common types of fraud.
Facebook’s ad fraud team focuses primarily on the end-user experience, which pushes the unit to prioritize fraud that directly affects users: scams, nuisance ads, web-page spoofing and clickbait.
“We have over six million advertisers,” said Rob Leathern, Facebook’s product management director. “Most of them are well meaning, but there are a small number of fraudsters who put in a lot of time to get around or game our system.”
But critics say Facebook could do more.
Even though Facebook and Google are using sophisticated AI to improve their fraud detection processes, Joe Barone, managing partner of digital ad operations at GroupM, wants them to “expand their nascent relationships with third-party measurement providers to increase the level of transparency and maximize the assurance and trust required to support continued investment.”
In the next few months, Facebook will also overhaul its advertising measurement system to wean marketers off vanity metrics in favor of metrics tied to business objectives. The social network announced it’s removing nearly two dozen metrics in July, including actions, time spent and button clicks. And while these updates don’t target fraud directly, the move could increase transparency on the platform.
Publishers say Facebook isn’t fixing the problem fast enough. Jason Kint, CEO of Digital Content Next, said Facebook, Twitter and Google should collectively combat the problem—otherwise, smaller players in the ad-tech world would be competing on a different level than those who already have a near monopoly. “The general conventional wisdom is that as a walled garden they’ve gotten away with not being held to the same standards as the rest of the industry,” Kint said.
Facebook, naturally, disagrees, saying AI has had some success at fighting fraud. One example: fake play buttons superimposed over still photos, misleading users into thinking they’re clicking on a video when the click actually takes them to a landing page full of ads. Facebook’s neural networks learn to identify these fraudulent elements, which constantly change in size, shape and appearance, and keep them off the platform.
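Facebook hasn’t published the details of these models, but the general shape of the approach, a classifier that combines visual and behavioral signals into a fraud score, can be sketched in miniature. Everything below is illustrative: the feature names, weights and threshold are assumptions for the sake of the example, not Facebook’s actual signals, and a production system would learn such weights from labeled ads rather than hard-code them.

```python
import math

# Hand-picked weights for illustration only; a real system would learn
# these from millions of labeled ad creatives via a neural network.
WEIGHTS = {
    "play_button_overlay": 2.4,      # play-button shape detected on a still image
    "no_video_metadata": 1.8,        # creative has no actual video attached
    "landing_page_ad_density": 1.2,  # fraction of the landing page covered by ads
}
BIAS = -3.0

def fraud_score(features: dict) -> float:
    """Logistic score in (0, 1); higher means more likely fraudulent."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def is_fraudulent(features: dict, threshold: float = 0.5) -> bool:
    return fraud_score(features) >= threshold

# A still photo with a fake play button and no real video scores high...
scam = {"play_button_overlay": 1.0, "no_video_metadata": 1.0,
        "landing_page_ad_density": 0.9}
# ...while an ordinary video ad scores low.
legit = {"play_button_overlay": 0.0, "no_video_metadata": 0.0,
         "landing_page_ad_density": 0.1}
```

The point of the sketch is the arms-race dynamic Leathern describes: as fraudsters vary the size, shape and placement of the fake buttons, the features and weights have to be relearned, which is why a trained model beats fixed rules.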
Using machine learning to attack ad fraud makes sense, especially since a majority of ad buys on Facebook are already machine-driven and automated.
Rebecca Lieb, co-founder of the research analyst firm Kaleido Insights, said it wouldn’t surprise her if Facebook and other platforms implement other emerging technologies such as blockchain, which she said could help verify the validity of click streams, identify advertisers and track supply chains.
“Ad fraud is always going to be an arms race,” she said. “Spam is always going to be an arms race. Any digital malfeasance will be an arms race—the bad guys will implement new technology, the good guys will implement new technology, and this will go on.”
Leathern stresses the importance of algorithms and humans working together, and not just paid reviewers but everyday Facebook users. “We’re expecting people to find bad ads on the platform. We want them to find them, we want them to report them. We want our techniques and algorithms to get better at finding bad ads,” he said.
According to David Sendroff, CEO of Forensiq, there is an “inherent disincentive from the Facebook side” to cracking down on some of the most common forms of ad fraud—click fraud, like fraud, fraudulent user profiles—that only rarely cross the path of the average user. His solution? Leave the fraudulent accounts intact, but quarantine them as much as possible from the rest of the system while using them to map fraud networks and train fraud algorithms.
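Sendroff didn’t spell out an implementation, but his quarantine idea is essentially a graph problem: leave flagged accounts active, suppress their reach so their clicks and likes never touch billing or metrics, and traverse their connections to map out the rest of the ring. The sketch below, with entirely hypothetical account names and a deliberately tiny graph, shows the mechanics under those assumptions.

```python
from collections import deque

class AccountGraph:
    """Toy social graph: account -> set of connected accounts."""

    def __init__(self):
        self.edges = {}
        self.quarantined = set()

    def connect(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def quarantine(self, account):
        # Leave the account intact, but flag it so its actions
        # (clicks, likes) are excluded from billing and metrics.
        self.quarantined.add(account)

    def visible_to_users(self, account):
        return account not in self.quarantined

    def map_fraud_ring(self, seed, max_hops=2):
        """Breadth-first search outward from a quarantined seed account,
        collecting candidate members of the same fraud network."""
        seen, frontier = {seed}, deque([(seed, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for neighbor in self.edges.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, hops + 1))
        return seen

g = AccountGraph()
g.connect("bot-1", "bot-2")
g.connect("bot-2", "bot-3")
g.connect("bot-3", "real-user")
g.quarantine("bot-1")
ring = g.map_fraud_ring("bot-1", max_hops=2)  # bot-1, bot-2, bot-3
```

Because "real-user" sits three hops out, the two-hop traversal leaves it off the candidate list, which is the trade-off in any such mapping: cast the net wider and you sweep in legitimate accounts, keep it tight and parts of the ring stay hidden.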
That could create a kind of Potemkin village of fraud, where only the machines are watching.