“Data Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is part one of a two-part overview of the technology issues associated with ad viewability. It is by Jeremy Stanley, Chief Technology Officer, Collective & Co-Chair, IAB 3MS/Viewable Impressions Implementation & Implications Working Group.
For the last year, the digital advertising industry has been buzzing about viewability, and for good reason. Simply put, the goal of advertising is to change the minds and behaviors of consumers, and an ad that is not viewable can have no impact. No other measurement approach can claim such a simple and irrefutable link to advertising value creation.
Further, many ads delivered today are not viewable. The recently released Viewable Impression Advisory from the Media Rating Council (MRC) found viewability rates for live campaigns ranged from a high near 80% to a low just under 10%. This means that in the best case, 20% of advertising spend is wasted today on impressions that have no hope of influencing consumers – with many campaigns faring far worse. Clearly, whether you are pursuing lower funnel direct marketing objectives or upper funnel branding objectives, improving viewability presents an enormous opportunity.
The Making Measurement Make Sense (3MS) initiative is working to standardize the core metrics around viewability, which is no simple feat. The elephant in the room is that there are at least six radically different techniques being used to collect viewability data by over a dozen technology vendors, each with potentially serious limitations. We are putting the proverbial cart before the horse. A well-intentioned standard for how to compute viewability metrics is of limited use if fundamental limitations and differences in viewability measurement technology are not addressed.
The following are the six most common approaches used to determine viewability in-market today:
Page Geometry: The oldest and most commonly employed technique for measuring viewability is to compute it directly from geometric information gathered from the page, including the size and relative placement of the page, browser viewport and ad. While simple, this approach suffers from many technical limitations, particularly if the impression is delivered within nested cross-domain iframes. These iframes are very common, and are used by both publishers and ad servers to prevent ads and other external content from intentionally or inadvertently interfering with websites. As a side effect, they restrict access to the geometric data needed to compute viewability, leading to huge gaps in data collection.
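At its core, the page-geometry approach is a rectangle-intersection calculation. The sketch below is purely illustrative (the `Rect` shape, function names and the 50%-of-pixels threshold proposed by 3MS for display ads are assumptions, not any vendor’s actual implementation) and shows why the technique fails inside cross-domain iframes: it needs the ad’s position and the viewport bounds, exactly the data those iframes withhold.

```typescript
// Illustrative sketch: in-view fraction of an ad from page geometry.
interface Rect {
  left: number;
  top: number;
  right: number;
  bottom: number;
}

// Fraction of the ad's area that falls inside the viewport (0..1).
function inViewFraction(ad: Rect, viewport: Rect): number {
  const overlapW = Math.max(0, Math.min(ad.right, viewport.right) - Math.max(ad.left, viewport.left));
  const overlapH = Math.max(0, Math.min(ad.bottom, viewport.bottom) - Math.max(ad.top, viewport.top));
  const adArea = (ad.right - ad.left) * (ad.bottom - ad.top);
  return adArea > 0 ? (overlapW * overlapH) / adArea : 0;
}

// Example: a 300x250 ad whose top half is scrolled above the viewport.
const viewport: Rect = { left: 0, top: 0, right: 1024, bottom: 768 };
const ad: Rect = { left: 100, top: -125, right: 400, bottom: 125 };
const fraction = inViewFraction(ad, viewport); // 0.5
// The proposed display standard also requires the threshold be sustained
// for at least one continuous second; that timing logic is omitted here.
const meetsPixelThreshold = fraction >= 0.5;
```

In a real page the rectangles would come from the DOM (element offsets, scroll position and window dimensions); a cross-domain iframe blocks access to precisely those values, which is the gap described above.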
Panel: A familiar approach from television measurement, panels rely on a small group of people who have installed software on their devices to measure viewability directly. This carries all the well-known problems of treating a small sample as representative of the whole. Many applications of viewability (including vCPM pricing, attribution and frequency capping) require data for most if not all impressions, which this method cannot deliver.
Behavioral Proxy: Similar in spirit to panels, this technique infers viewability for impressions that cannot be measured otherwise by detecting user actions that occur only when an ad is viewable. For example, by tracking mouse movement one can infer that any advertisement the mouse passed over must have been in view at that moment. In addition to the panel drawbacks, this measure introduces biases related to the specific placement of the advertisement. For example, ads located in the upper left corner of a website may be inadvertently moused over far more frequently than ads placed in the upper right corner. And of course, a user may not perform any measurable action on a page, even when the ad is viewable the entire time they browse.
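The mouse-tracking inference can be sketched as a simple hit test: a pointer position inside the ad’s bounding box is positive evidence of viewability at that instant, while the absence of mouse activity proves nothing either way. All names below are hypothetical, and real vendors blend many such signals; the point is to show both the inference and its built-in bias.

```typescript
// Illustrative sketch of a behavioral proxy for viewability.
interface MouseSample { x: number; y: number; t: number } // page coords, ms

interface Rect { left: number; top: number; right: number; bottom: number }

// Timestamps at which the pointer was inside the ad's box -- each is direct
// evidence the ad was in view at that instant. An empty result is NOT
// evidence the ad was out of view; that asymmetry is the method's core bias.
function viewableMoments(samples: MouseSample[], ad: Rect): number[] {
  return samples
    .filter(s => s.x >= ad.left && s.x <= ad.right && s.y >= ad.top && s.y <= ad.bottom)
    .map(s => s.t);
}

const ad: Rect = { left: 0, top: 0, right: 300, bottom: 250 };
const trail: MouseSample[] = [
  { x: 500, y: 400, t: 0 },   // outside the ad: no evidence
  { x: 150, y: 100, t: 120 }, // inside: ad must have been in view at t=120
  { x: 150, y: 600, t: 240 }, // outside again
];
const moments = viewableMoments(trail, ad); // [120]
```

A user who reads the whole page without touching the mouse produces an empty `moments` array, so the ad’s viewability simply goes unmeasured, which is the limitation noted above.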
Browser Exploits: Browser exploit techniques bypass the limitations of iframes by taking advantage of security weaknesses in a given browser version. These loopholes are usually specific to a single browser, and will stop working as soon as the browser manufacturer patches the security hole (and they usually do). Furthermore, circumventing the iframe also means circumventing the publisher’s safeguards protecting its own usage data, potentially the user’s personal data, and the integrity of the webpage’s content. This is risky on a number of levels.
Publisher API: Publisher API solutions work by placing code within the publisher’s website that executes on every page view and exposes an API that other parties can access even from within cross-domain iframes. This is the basis for the IAB’s SafeFrame framework, which could replace iframes, offering the same security while allowing access to the geometric data needed to compute viewability accurately. However, every publisher website and many ad servers would need significant overhauls to accommodate this new framework, so we can anticipate it will be some time before this technology is widely adopted. Even then, there remains a risk of publisher fraud, wherein a publisher passes invalid information into the SafeFrame to inflate viewability measurements, and so a need for other approaches may continue.
Browser Monitoring: Modern web browsers can tell what content is currently in view, and can ‘optimize’ content that is not in view to conserve device resources. These optimizations can be reverse engineered and identified by code running alongside an advertisement, accurately measuring viewability in many circumstances. This approach has the potential to be the most reliable of all, as it piggybacks on the very accurate viewability data the browser is already collecting. The primary limitation is that it is very difficult to execute: each browser manufacturer and version takes a different approach to optimizing content rendering, and these approaches vary by circumstance, operating system and device. Keeping track of all of this is a significant technical hurdle and an ongoing quality assurance investment.
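One concrete example of such an optimization: many browsers throttle JavaScript timers in background tabs to roughly one tick per second, so a timer scheduled every 100ms that actually fires about 1,000ms apart suggests the tab, and any ad on it, is not in view. The sketch below is a simplified, hypothetical illustration of analyzing that signal (the function name and threshold are assumptions); note it only reveals whether the tab itself is visible, one of several signals a real implementation would combine.

```typescript
// Illustrative sketch: inferring background-tab throttling from timer gaps.
// gapsMs holds observed intervals between timer callbacks that were
// scheduled every `scheduledMs` milliseconds.
function looksBackgrounded(gapsMs: number[], scheduledMs = 100): boolean {
  if (gapsMs.length === 0) return false;
  const mean = gapsMs.reduce((a, b) => a + b, 0) / gapsMs.length;
  // A mean gap several times the scheduled interval implies the browser
  // throttled the timer, i.e. the tab is likely hidden. Threshold is
  // illustrative, not tuned.
  return mean > scheduledMs * 5;
}

// In a page, gaps would be gathered via setInterval plus Date.now() stamps.
const foregroundGaps = [101, 99, 103, 100, 102]; // fires on schedule
const backgroundGaps = [996, 1004, 1001, 999];   // throttled to ~1s
looksBackgrounded(foregroundGaps); // false
looksBackgrounded(backgroundGaps); // true
```

Every browser version throttles differently (and some pause timers entirely), which is exactly the quality-assurance burden described above.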
The bottom line is that there is tremendous variation in the quality of data (and ultimately insights) that can be derived from these different technologies. Some will report accurate estimates; others will be badly biased by missing data and misleading signals. Some will measure almost all impressions (80-95%); others will measure only a small fraction (1-5%).
The 3MS and MRC have taken tremendous strides in raising awareness of viewability across the industry and in proposing standards for how viewability should be computed. But the promise of viewability hinges more on how accurate the data is than on how it is presented. We have only just begun to develop and validate the technological means to measure viewability accurately and reliably. The MRC is encouraging buyers and sellers to seek accredited viewability solutions (audited by an independent third party), yet absent an industry standard, accreditation only ensures that a vendor is making truthful claims about its technology; it does not ensure the vendor is measuring viewability accurately with the best technology. Over time, new approaches may arise (such as browser-supplied viewability data) and the market will decide which technologies and vendors are best, placing the horse squarely back in front of the cart.
In the meantime, advertisers who can act faster with reliable data can gain significant competitive advantages by ensuring their advertising dollars are spent on impressions that have a chance of being seen.
In part two of this series I will address the practical next steps both brand and direct response advertisers should be taking now with regard to selecting a viewability vendor and incorporating viewability into their campaign strategies.