Recent months have seen an uptick in shady display ad practices – or at least in media coverage of them. These incidents often take the form of fraudulent impressions generated by bot traffic or by browser plug-ins that manipulate the ad space on a webpage. Other impressions are simply not viewable by design, produced by ad calls made below the fold or inside hidden iframes.
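To make the hidden-iframe tactic concrete, here is a minimal, hypothetical sketch of how a reviewer's tooling might scan a page's markup for ad frames that can never be seen by a real user. The class name, the CSS checks, and the one-pixel threshold are all illustrative assumptions, not any exchange's actual policy rules, and real detection would also have to account for scripted hiding and nested frames.

```python
from html.parser import HTMLParser

class HiddenIframeDetector(HTMLParser):
    """Flags <iframe> tags whose inline style or dimensions suggest
    the ad slot is invisible by design. Purely illustrative."""

    def __init__(self):
        super().__init__()
        self.suspicious = []  # (src, reason) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "")
        style = attrs.get("style", "").replace(" ", "").lower()
        if "display:none" in style or "visibility:hidden" in style:
            self.suspicious.append((src, "hidden via CSS"))
        elif attrs.get("width") in ("0", "1") or attrs.get("height") in ("0", "1"):
            self.suspicious.append((src, "zero/one-pixel frame"))

# Toy page: one legitimate slot, two non-viewable ones.
page = """
<html><body>
  <iframe src="https://ads.example.com/slot1" width="300" height="250"></iframe>
  <iframe src="https://ads.example.com/slot2" style="display: none"></iframe>
  <iframe src="https://ads.example.com/slot3" width="1" height="1"></iframe>
</body></html>
"""

detector = HiddenIframeDetector()
detector.feed(page)
for src, reason in detector.suspicious:
    print(src, "->", reason)
```

Run against the toy page above, the detector flags the CSS-hidden frame and the one-pixel frame while leaving the standard 300x250 slot alone.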
Whatever tactic produces it, this fraudulent inventory is often dumped on real-time bidding exchanges, leaving the operators of those marketplaces with the responsibility of cleaning it up.
Among the top inventory platforms, Google is widely regarded as the standard-bearer of policing ad fraud. A big part of Google’s strategy is a manual review process involving the efforts of hundreds of people.
In a blog post detailing its ad quality efforts, Google gives some insight into what these individuals do: They “review web pages, test our partners’ downloadable software, and prevent ads from showing on sites that violate our policies. Depending on the severity and persistence of the offense, they may stop ad serving on that page or site, or across the publisher’s entire account.”
Of course, human review is only one method employed by Google – and other ad exchange operators – to combat impression fraud. Automated tools also play a big role, allowing Google to monitor clicks and impressions for suspicious activity. These scanning tools will improve over time as Google implements machine learning that can detect bad practices.
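The blog post does not describe how such monitoring works internally, but the basic idea of scanning clicks and impressions for suspicious activity can be sketched as a simple statistical outlier check. Everything below is a toy assumption – the publisher IDs, the traffic numbers, and the z-score threshold are invented for illustration; production systems are far more sophisticated.

```python
from statistics import mean, stdev

# Hypothetical per-publisher tallies; pub-004 has an implausibly high
# click-through rate, the kind of pattern bot-driven clicks produce.
traffic = {
    "pub-001": {"impressions": 10_000, "clicks": 12},
    "pub-002": {"impressions": 8_000, "clicks": 9},
    "pub-003": {"impressions": 9_500, "clicks": 11},
    "pub-004": {"impressions": 7_200, "clicks": 610},
    "pub-005": {"impressions": 11_000, "clicks": 14},
}

def flag_outliers(stats, z_threshold=1.5):
    """Return publishers whose CTR sits far above the group mean.
    The threshold is an illustrative choice, not an industry value."""
    ctrs = {pub: t["clicks"] / t["impressions"] for pub, t in stats.items()}
    mu, sigma = mean(ctrs.values()), stdev(ctrs.values())
    return [pub for pub, ctr in ctrs.items()
            if sigma and (ctr - mu) / sigma > z_threshold]

print(flag_outliers(traffic))  # flags pub-004 only
```

A z-score over raw CTR is the crudest possible signal; the point is only that automated tools can surface accounts whose behavior deviates sharply from the norm, queuing them for the kind of human review described above.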
Google says its efforts are paying off. In 2012 it identified 17% fewer “bad actors” than in 2011 – and it did so during a period of stepped-up enforcement, suggesting either that the number of parties attempting to exploit real-time bidding had declined (unlikely), or that Google’s rigorous approach has driven RTB manipulators to seek out less vigilantly guarded auction environments.