Fraud floods the internet—fake users generate fake traffic to fake websites, delivering fake clicks and fake impressions. And don’t forget fake reviews, which have persisted for more than a decade. Just last month, the Federal Trade Commission hit an Amazon retailer with a judgment of more than $12 million for buying four- and five-star reviews for its branded diet pills. It was the first time the FTC took a company to task for buying fake reviews.
“People rely on reviews when they’re shopping online,” said Andrew Smith, who spearheads the FTC’s Bureau of Consumer Protection, in a statement at the time.
And he’s right. As digital ad spending surpasses traditional ad spend for the first time, and with digital poised to swallow two-thirds of all media spending by 2023, there’s more incentive than ever for bad actors to game the system. The cat-and-mouse game between fraudsters and the companies trying to outsmart them will only heat up.
“I think if consumers knew the extent of the abilities of businesses to manipulate online reviews, then they’d think twice before buying in,” said Zachary Pardes, director of communications at Trustpilot. “They would also rightfully rethink the brands that they choose to work with.”
The brands that choose to buy reviews don’t only have consumers’ ire to worry about. In the March FTC case, the company was caught buying fake five-star reviews on Amazon intended, per the FTC’s complaint, to lift the company’s 4.2 Amazon rating to a 4.3. The proposed settlement included a $12.8 million judgment, which the FTC agreed to suspend in exchange for a roughly $50,000 payout.
Despite growing skepticism of the reviews they read online, people still rely on them. Last year, eMarketer found that most U.S. consumers reference online reviews before making a purchase: nearly a quarter said they always do, and more than 40 percent said they do so often. One such review site, Trustpilot, which recently secured $55 million in funding, built a platform where customers who have engaged with a particular business or brand can review those companies.
According to Pardes, a good chunk of businesses partner with Trustpilot because of the company’s partnership with Google and access to the elusive Google seller rating, the row of stars displayed beneath the URL of a Google AdWords ad. Though “Google’s notoriously a bit of a black box,” according to Pardes, companies that “do things the right way earn better clickthrough rates on those paid ads.”
By “the right way,” he means that businesses shouldn’t engage in the review fakery that plagues the rest of the web. For example, Pardes said that in the past, he and the Trustpilot team have seen businesses and brands “overflagging” negative reviews on their pages, hoping the Trustpilot compliance team would find something wrong with the reviews and pull them from the site.
Trustpilot has since introduced a “transparent flagging” system that shows how often a brand flags reviews on its page and what happens to those reviews once flagged.
“Consumers now have a portal to see exactly what your behavior is on the platform,” Pardes said. “So if they see that you’ve flagged 1,000 one-star reviews, and 999 of those flags were overturned … it’s a very clear signal that you’re full of crap.”
Trustpilot has rolled out other roadblocks for fraudsters, too, as have other review platforms like Yelp and TripAdvisor. For the most part, this involves plucking out the fake reviews via a combination of machine learning and manual review.
While these companies are tight-lipped about the specifics of their algorithms, the fraudsters themselves aren’t. Shady corners of the internet promise reviews that slip past these algorithmic roadblocks for roughly $5 to $10 apiece.
Hundreds of posts across forums like BlackHat World are dedicated to the shady practice of “online reputation management,” the unofficial name for gaming these systems. These posts reported that Trustpilot and others were detecting fake reviews by monitoring not just IP addresses, but also operating systems, browsers, and even screen sizes.
One fake reviewer reported that accessing a particular page via direct link, rather than organic search, could get him flagged, as could having an account that’s too “fresh”—created the same day as the review.
“Some fake reviewers are easy to spot,” said one poster in a thread dedicated to gaming Yelp’s review algorithm. “If they don’t have a profile picture, that’s a red flag. If someone is actually eating at 12 different restaurants a week and leaving reviews for all of them, they’re either going through a really rough time in their personal life or it’s a fake account.”
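The signals described in these forum posts—reused IP addresses, same-day account creation, direct-link arrivals, bare profiles, implausible review velocity—can be imagined as a simple rule-based scorer. The sketch below is purely illustrative: the field names, thresholds, and scoring are assumptions for demonstration, not any platform’s actual detection logic, which reportedly combines many more signals with machine learning and manual review.

```python
from dataclasses import dataclass

# Hypothetical review-submission record; every field name here is
# illustrative, not any review platform's real schema.
@dataclass
class ReviewSubmission:
    ip_address: str
    account_age_days: int        # 0 = account created the day of the review
    arrived_via_direct_link: bool
    has_profile_picture: bool
    reviews_in_past_week: int

def fraud_signal_score(sub: ReviewSubmission, seen_ips: set) -> int:
    """Count how many forum-described red flags a submission trips.

    A higher count means the review looks more suspicious. The
    threshold of 10 reviews/week is an arbitrary example value.
    """
    score = 0
    if sub.ip_address in seen_ips:       # same IP posting repeatedly
        score += 1
    if sub.account_age_days == 0:        # "fresh" account, created same day
        score += 1
    if sub.arrived_via_direct_link:      # skipped organic search entirely
        score += 1
    if not sub.has_profile_picture:      # bare-bones profile, per the Yelp post
        score += 1
    if sub.reviews_in_past_week > 10:    # implausible review velocity
        score += 1
    return score
```

In practice a hand-tuned checklist like this is exactly what the platforms have moved beyond—each signal is cheap for a determined fraudster to fake, which is why the real systems lean on dozens of signals and learned models instead.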
Yelp, for its part, wouldn’t reveal the specifics behind its review recommendation software, aside from the fact that its algorithm automatically checks “dozens” of signals grouped into measures like user activity and review quality.
“Our systems aren’t static; they’re constantly learning new information and improving our systems,” said Alex Haefner, Yelp’s group product manager for content and community. “Even a review that was posted two, three years ago can be flagged.”
Pardes compared the battle against fake reviews to an “endless game of whack-a-mole.”
“No matter how tough the FTC gets, no matter how many of these fly-by-night companies get fined—if there’s a dollar to be made, they’re gonna go for it,” he said. “I think that eventually, customers are going to punch back.”