2014 was the year advertisers collectively asked the question, “How much?”
As in, viewability aside, how much ad fraud is actually out there?
Some of the numbers bandied about, often by the fraud vendors themselves, were alarming to say the least. A report released in October 2013 by cybersecurity startup MdotLabs, before it was acquired by comScore, examined publisher reach across 10 pay-per-view networks and concluded that fraudulent display impressions account for as much as 30% of all online traffic.
The 30% number stuck and it’s been cited ceaselessly ever since.
But while no one disagrees that the problem is acute, not everyone’s convinced that the fraud problem constitutes more than a quarter of all Internet traffic.
As Rubicon Project chief scientist Neal Richter pointed out at AdExchanger’s Programmatic I/O conference in September, “When someone gives you a number like that, the scientist in me asks, ‘Where’d you get the numerator and where’d you get the denominator?’ There could be a great deal of selection bias in this particular exercise.”
In fact, new numbers published earlier this month in a joint study conducted by ad security firm White Ops in collaboration with the Association of National Advertisers (ANA) paint a picture that, while far from rosy, is less dire than previously believed.
The White Ops/ANA report, which tracked 36 major advertisers over a 60-day period, including AB InBev, Kellogg’s, Kimberly-Clark and Walmart, found that 11% of display ads were being viewed by bots. The numbers jumped noticeably when it came to video, hitting 23%, which makes sense given the higher CPMs video inventory commands.
Digital media valuation firm Integral Ad Science was pleasantly surprised by the White Ops findings. Integral has been vocal about what company CEO and president Scott Knoll has referred to as an “exaggerated” fraud problem, noting in a recent blog post on the IAB’s website that “it’s not as big as many of the pundits say. It definitely won’t ruin the industry.”
“We were encouraged to see that the new study by the ANA reports fraud levels in line with our published quarterly reports,” said Avi Goldwerger, Integral’s VP of marketing. “As with every battle, it’s important to know what you’re up against, and the inflated levels of fraud we’ve grown used to seeing, some as high as 40%, are simply wrong and harming the industry.”
DoubleVerify concurred with Integral – according to chief operating officer Matt McLaughlin, the “White Ops numbers align with the range of bot fraud that DV identifies on average” – but with a caveat.
Just as White Ops CEO and cofounder Michael Tiffany once observed, ad fraud is a bit like cholera – “the problem is not evenly distributed.”
“Bot fraud is super-dynamic and changes its identity rapidly [and] we typically see a range of overall fraud rates that vary by channel,” McLaughlin said. “On a particular campaign, depending on the media mix, we see rates as high as 50%, but actual experience will differ based on the quality and mix of digital acquisition.”
But David Sendroff, CEO and founder of ad fraud detection company Forensiq, is more than a little skeptical of how extrapolatable White Ops’ numbers actually are. Although he applauded White Ops and the ANA for their efforts, calling their joint report “probably the most comprehensive study” to date, he was less ready to agree that the percentages uncovered were representative of fraud across the ecosystem.
For one thing, White Ops and the ANA publicly announced their intentions, giving bad actors more than enough time to curb their less-than-kosher activities for the duration of the study. The resultant report did acknowledge a considerable dip in bot traffic while the study was running, from 41% to 4%. When the study was over, bot traffic crept back up to 38%.
“I don’t have a statistic to share myself and I’d rather not just make one up, but my feeling is that these numbers are quite understated,” Sendroff said. “White Ops even showed in their report that there was a significant dropoff when the announcement came out about their study. …This is a good example of how the results could have been skewed.”
Sendroff also felt that the premium nature of the study’s big-budget subjects might have lent a little bias to the results.
“Premium buyers likely have more refined targeting rules than non-premium buyers, so the study may also have been skewed a bit towards more premium publishers and the ad space they’re considering,” he said.
But Tiffany is pleased with how his work with the ANA shook out.
“We’ve been rigorously scientific and these numbers are extrapolatable as industry-wide figures,” he said. “It’s important to understand that we didn’t cherry pick brands that had a lot of fraud or no fraud. We took 36 major brand advertisers and studied the actual rate of fraud after they’d already employed every fraud vendor and smart buyer and analytics tool you’ve ever heard of.”
That said, Tiffany admitted that, all things being equal, it might be better not to tip off the fraudsters next time.
“Maybe we should be more covert about announcing our next study,” he quipped.