The Problem With Viewability: Declining Inventory Quality

"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Ezra Pierce, CEO at Avocet.

The ultimate value of viewability is in understanding causation, not targeting.

That’s not hair-splitting. It’s a distinction with consequences that we’re already experiencing. When viewability is targeted, it is implicitly equated with quality, a property viewability tells us nothing about. That misidentification creates new inefficiencies and distortions in the ad market.

Viewable targeting doesn’t and can’t capture the difference between an ad “connecting” and an ad “cluttering,” and because of that, it tends to produce the latter. Buyers get more of what they pay for (or target). By inflating the exchange value of viewable inventory, targeting creates an incentive for publishers to push more units above the fold.

Ironically, these attempts are self-defeating for both the publisher and the advertiser: Users become accustomed to quickly scrolling down the page, page loads slow down and page aesthetics suffer. The effect of viewable marketplaces and viewable targeting is likely to be a decrease in inventory quality because it’s easier and more profitable to increase viewability in ways that decrease quality. Viewability as quality produces its opposite: viewability as clutter. There are threads of connection between the rise of ad blockers and the rise of viewability as a targeting paradigm.

That’s a strong claim, but ultimately advertising is about engagement, and while viewability is necessary to engagement, it isn’t remotely sufficient. While it’s true that you can only engage with the ads you see, once an ad is in view it doesn’t matter if the viewability rate was 15% or 95%.

What happens next is a function of the gestalt created by the content and the advertisement. If it works, we call that quality. The ability of a placement or domain to do its work – connecting a user to a message – is represented well by viewable attribution and poorly by viewability rates.

That leaves us with the constructive question: How should viewability be deployed if not through targeting? This takes us back to the purpose of advertising – changing behavior. We can use viewability to come to a more nuanced and fundamental understanding of which media is really driving engagement. The obvious implementation is, of course, to include viewability in our conversion attribution models. No view, no conversion.
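To make that concrete, here is a minimal sketch of what “no view, no conversion” attribution could look like; the record fields and the simple last-touch rule are illustrative assumptions, not a description of any particular platform’s implementation.

```python
# A minimal sketch of "no view, no conversion" attribution. The field names
# and the last-touch rule are illustrative assumptions, not the schema or
# logic of any particular platform.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class Impression:
    user_id: str
    placement: str
    timestamp: datetime
    was_viewable: bool  # did the impression meet the viewability standard?


def last_viewed_touch(impressions: List[Impression],
                      conversion_time: datetime) -> Optional[Impression]:
    """Credit a conversion to the most recent viewable impression.

    Non-viewable impressions are excluded outright: no view, no conversion.
    """
    viewed = [i for i in impressions
              if i.was_viewable and i.timestamp <= conversion_time]
    return max(viewed, key=lambda i: i.timestamp, default=None)
```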

That’s a positive first step, but it doesn’t altogether avoid rewarding clutter. What’s needed is a measurement that straightforwardly answers the question: “How likely is this ad to engage the user?” That is not the likelihood that the ad is simply “seen” – raw sense stimuli – but the likelihood that it can reach the foreground of perception.

What’s needed doesn’t exist yet in any programmatic platform, so let’s call our new metric “share of view.” The core elements would be the viewability rate and the percentage of advertising surface area taken by an ad. It would draw a line of demarcation between a “good” view and a “cluttered” view, and correct the perverse incentives the buy side is currently creating for publishers. Interestingly, it opens up the possibility of financial models more in line with old-fashioned “share of voice.” Further, it provides an intellectually respectable next step for attribution modeling after viewable post-view: Attribute to the last viewed ad with the highest share of view.
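The column stops short of a formula, so the sketch below assumes one plausible construction: the viewability rate weighted by the ad’s share of the page’s total advertising surface area, with credit going to the viewed ad scoring highest and ties broken by recency. Both choices are assumptions for illustration.

```python
# A hedged sketch of "share of view." The formula (viewability rate weighted
# by the ad's share of the page's total ad surface area) and the tie-break
# are assumptions for illustration, not a specification.

from typing import List, Optional, Tuple


def share_of_view(viewability_rate: float,
                  ad_area_px: float,
                  total_ad_area_px: float) -> float:
    """Score in [0, 1]: how often the ad is in view, scaled by how much of
    the page's advertising real estate it occupies (less clutter, higher score)."""
    if total_ad_area_px <= 0:
        return 0.0
    return viewability_rate * (ad_area_px / total_ad_area_px)


def attribute(viewed: List[Tuple[float, float]]) -> Optional[Tuple[float, float]]:
    """Credit the last viewed ad with the highest share of view.

    Each item is (timestamp, share_of_view); ties on share of view are
    broken by recency, which is one reading of the rule above.
    """
    return max(viewed, key=lambda i: (i[1], i[0]), default=None)
```

Under that assumption, a lone above-the-fold unit at 100% viewability scores 1.0, while each of five equal-sized units at 90% viewability scores only 0.18, which is exactly the distinction a raw viewability rate cannot draw.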

Targeting viewability isn’t “bad” – it’s a workaround for the lack of a better solution. Publishers aren’t “bad” for increasing their above-the-fold inventory. Publishers, especially the smaller ones that rely on programmatic revenue, don’t have the option of waiting for smarter buyers. That puts the onus on the buy side.

Viewable attribution and share of view are things we can do now, easily, that will improve the ad tech ecosystem for everyone at the table – the user, the publisher and the advertiser.

Follow Avocet (@AvocetHQ) and AdExchanger (@adexchanger) on Twitter.

5 Comments

  1. Your insight about how viewability has been (mis)used to push more inventory above the fold, thus creating more clutter, is well taken. You know, it was 20 years ago that DoubleClick did their first study of click-through trends, and found the #1 factor causing a decrease in click-through was "more ads on the page."

    It was also 20 years ago when we produced the first measurement of the effectiveness of online advertising. (See Journal of Advertising Research, Briggs & Hollis, 1996.) I think your point about using viewability as part of the advertising effectiveness analysis is correct. The problem is, too many marketers simply don't measure effectiveness. Rather, they use simple metrics like viewability and clicks as their proxy. I am not sure share of view is the answer, because it is likely to be gamed too. But if you connect measurement to the ultimate goal of the marketer (be it branding or sales) and turn the analysis around in near real time, we might see the market adopt more evolved practices in terms of the number of ads per page, etc.

    • You have a good point; there is no end state as the thing we want to represent - effectiveness/engagement - happens in the mind; our metrics will always be heuristics. As for being gamed, I don't see it being easy (relative to viewability), but it's ad-tech so I'm sure people will try.

  2. Nathan

    Ezra, if you knew more about viewability, you'd know that being "above the fold" does not necessarily correlate with better viewability. It's all about where the content that's engaging the user is on a page.

    Also, good verification providers (such as Integral Ad Science) can already report metrics on ad clutter, so the buyer can see when highly viewable inventory isn't delivering better advertising cut-through (or share of voice).

    Overall, viewability is a proxy that helps to move other metrics and I agree with your point that it better determines conversion impact... No view, no conversion is spot on!

  3. Kristof

    @Nathan, you mention IAS as a good verification provider. Unfortunately, the data tells us otherwise. In discussions with providers that do or did measure ad clutter, such as MOAT and IAS, we could all agree that measuring ad clutter is not an easy task and has a high error rate. Ad clutter measurements from IAS were completely off for more than 40% of the impressions that we measured.

    • The detection side of this is really interesting. There will always be gaps in the data provided by vendors like MOAT and IAS, but - if you could trust that data - you could get Bayes working on the problem to fill in the sparse areas.

      Nothing to say about whether the data is trustworthy though.
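      For illustration only, a beta-binomial smoother is one way "getting Bayes working" on sparse vendor measurements might look; the prior rate and prior strength below are arbitrary assumptions.

      ```python
      # Rough sketch: beta-binomial smoothing pulls placements with few
      # measured impressions toward a global prior instead of trusting
      # sparse vendor data at face value. Prior values are arbitrary.

      def smoothed_rate(in_view: int, measured: int,
                        prior_rate: float = 0.5,
                        prior_strength: float = 50.0) -> float:
          """Posterior mean of a Beta prior updated with binomial data."""
          alpha = prior_rate * prior_strength + in_view
          beta = (1.0 - prior_rate) * prior_strength + (measured - in_view)
          return alpha / (alpha + beta)

      # 2 in-view out of 2 measured stays near the prior (~0.52);
      # 900 out of 1,000 moves confidently to ~0.88.
      ```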
