The Viewability Mess We’ve Created

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jay Friedman, COO at Goodway Group.

Measuring the viewability of ads is now a reality that’s not going away.

The intent behind this is good, and the reason we should measure impressions is valid. The current mechanisms and standards, though, are untested and unreliable.

We didn’t think this through and now we’re in a real mess.

The Original Problem

If all websites had the quality and reputation of a cnn.com or yahoo.com, we might never have needed to measure viewability.

Unfortunately for our industry, there are tons of sites that display multiple ad units on each side of the page, most of which are unlikely to ever be seen. That’s a legitimate cause for advertiser concern.

To fix the problem, the industry took a two-pronged approach. First, companies used technology to determine whether an ad was in view. The technology took a few years to evolve to the point of being more reliable than not, but are we really there yet? I don’t think most MRC-certified viewability vendors are.

The recent certifications by the ABC, the UK’s version of the Media Rating Council, explicitly show that only MOAT passed all of the tests it was given, yet three others were certified. How did this happen? This isn’t school, where a score of 90 is an A. Billions of dollars of ad spending are at stake.

Second, the IAB set a standard for defining what “in view” actually means: It chose 50% of the ad pixels in view for one second or more. From a marketing standpoint, this is great. “Fifty and one” is so easy to remember. But was this truly tested and is it the right standard?

Bad Information

The viewability standard doesn’t seem to have been tested much. I’ve seen results showing that an ad being viewable at 50% or more does indeed improve the likelihood of a prospect taking a meaningful action on an advertiser’s website later. Those results also suggest the length of time the ad is in view does not play a role in conversion. But frequency unsurprisingly plays a significant role in influencing the likelihood of a user converting.

I find it very hard to believe the right standard is exactly 50% and one second. What if an advertiser prefers 90% and 0.3 seconds? Now you need to get your viewability vendor to code a custom standard.

What a pain. It didn’t have to be this way.
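The underlying logic is simple enough that the rigidity is hard to justify. The IAB standard only defines thresholds, not an implementation, so the sketch below is purely illustrative: a hypothetical function that takes timestamped samples of how much of an ad is in view and checks whether any continuous run clears a configurable threshold. Real vendors measure in-browser; the function name and sample format here are assumptions, not any vendor’s API.

```python
def is_viewable(samples, min_fraction=0.5, min_seconds=1.0):
    """Return True if at least `min_fraction` of the ad's pixels were
    in view for a continuous run of at least `min_seconds`.

    `samples` is a time-ordered list of (timestamp_seconds,
    fraction_in_view) pairs. Defaults encode the IAB "fifty and one"
    standard; any custom standard is just different arguments.
    """
    run_start = None
    for t, frac in samples:
        if frac >= min_fraction:
            if run_start is None:
                run_start = t          # threshold crossed; start the clock
            if t - run_start >= min_seconds:
                return True            # continuous run long enough
        else:
            run_start = None           # ad dipped below threshold; reset
    return False

# Hypothetical measurement trace for one ad slot
samples = [(0.0, 0.2), (0.5, 0.6), (1.0, 0.7), (1.6, 0.8), (2.0, 0.1)]
print(is_viewable(samples))                                     # IAB 50%/1s: True
print(is_viewable(samples, min_fraction=0.9, min_seconds=0.3))  # 90%/0.3s: False
```

The point of the sketch: once the measurement exists, swapping in a custom standard is a two-argument change, not a custom engineering project.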

Unintended Consequences

I wrote a blog post years ago titled something like “Why aren’t publishers accepting my eighth party ad call?” I was flippantly commenting on the state of ad servers and how poorly they kept up with the needs of advertisers to measure and collect information, such as verification, viewability and audience data. It seems that every time ad servers catch up and integrate a feature, another newer “must-have” measurement arises and we’re back to tag wrapping, discrepancies and wasted money.

Some agencies try to impress their clients by reassuring them they’ve added a viewability and fraud-monitoring vendor and forced their “partners” to pay for it. Some demand 100% viewability at a time when that simply isn’t realistic. Looking beyond the fact that “forcing a partner” is an oxymoron, this situation causes even more friction and lost spending in the digital ecosystem with major discrepancies between certain viewability and fraud-monitoring vendors and ad servers.

We keep hearing how little friction there is in traditional media buying and that there’s too much in digital. Yet clients and agencies both keep approving the addition of more friction.

Where We’ll End Up

When you peel the duct tape off of something, you’re left with that sticky residue. This seems to be where we’re headed.

In the next few years, publishers will move to infinite scroll and ad rotation design. Ad units will always be in view and able to be counted well beyond the one-second mark, before being rotated so publishers can make more money. It will be Goodhart’s law – "When a measure becomes a target, it ceases to be a good measure" – at work yet again.

In this scenario we will indeed get to very high viewability levels if we’re still using the current standards of 50% and one second of viewability. Ninety percent viewability or higher isn’t unreasonable. Yet the viewability vendors that have introduced significant friction into the marketplace will leave their technological footprint, like the plastic in our landfills that will remain long after humans are gone.

In the coming years, do your own testing. Every product is different and will require different standards. Identify sites and environments in which you’re comfortable placing your ads and use viewability technology as needed. But when it’s no longer needed, remember to unwrap that tag and remove friction where it’s not required.

Follow Jay Friedman (@jaymfriedman) and AdExchanger (@adexchanger) on Twitter.

4 Comments

  1. Good points Jay. In particular, the point about publishers moving to infinite scroll and rotation design reinforces the fact that publishers need to focus on their Viewability Deficit (bit.ly/VBDeficit). More importantly, understanding that they can take proactive steps to increase their overall VB score. Page design is important, but understanding that technology, the easiest contributory factor that influences their VB Deficit, is an easy fix. Pubs should be thinking more about Viewability Optimization (bit.ly/MoreRevenue).

  2. Jay, Thank you for this article. Viewability and its rollout has certainly been more complex than anyone would have imagined.

    I do want to point out one inaccuracy in the piece, which relates to the ABC UK audits. Four vendors were audited. Each vendor was audited for the question "Did the product perform as expected in the following scenarios?" Only two vendors received marks of "no" in certain circumstances to that question. Both comScore and MOAT were free of "no" marks on the audit. comScore has all "yes" marks and one mark for "see note."

    In addition, viewability is not valuable unless you can also remove the underlying Non-Human Traffic that should never be counted as a view. In this comScore also has been accredited by the MRC. That is not universal for all viewability vendors.

    Some of the complexity is coming from the fact not all vendors are meeting the same high standards. It’s certainly a topic that warrants further discussion and action among our industry.

  3. Jay, great read. The industry has created a monster that has added additional complexity & cost to an already complex & costly marketplace and lacks any "real" solution. It's probably easy for me to say, given I sit in the middle (ad tech), but publishers have seemingly ceded way too much control to advertisers. To now have to bear the burden of viewability is completely absurd. No other channel is held to such ridiculously high standards. What's the viewability of a print ad? No one can answer that question, yet somehow, some way, brands were able to leverage print ads to build a brand and sell products. If we all face facts the reality is simple: Publishers are no longer about monetizing eyeballs. Those days are LONG gone. Any marketer/advertiser can log into an automated ad buying tool and tap into their EXACT target audience, on demand and at scale.

    Publishers are and have always been about monetizing context & trust. People trust CNET to provide reviews about technology products. Any electronics advertiser that buys into CNET is buying the trust CNET has created with their audience & of course the context. Targeting that same potential buyer in Facebook is not even close to the same thing. If you have that type of audience, why should viewability even enter the equation? If you, the advertiser, want to advertise to my premium, engaged and loyal audience, you are going to pay me for ACCESS not IMPRESSIONS.

    The solution is not ensuring that every ad is viewable. The solution is ensuring that as a publisher you are serving ads to humans not robots. That is of course what made print so powerful, you had an actual human being on the other end who not only gave you their money but information as well. This is the exact reason why Facebook is going on an all out offensive against the 3rd party cookie with their People Based Marketing approach. The main reason advertisers don't perform well in web display is that most of the traffic is not human traffic. And this issue is entirely solvable but in the age of Page Views & Impressions, publishers sacrificed knowledge of their audience for scale and the result is now very clear - no one knows anything about their site traffic. The solution is simple however, identity, just like the good ole' print days, it's time to start asking people to identify themselves if they want your content.

  4. Anne, thanks for the correction and I agree that NHT is a second, required, dimension of this discussion. This piece only focused on viewability because most often when we put more than one dimension into a discussion, the discussion gets side-tracked and nothing gets solved. I believe we can work on the NHT problem and the viewability problem simultaneously and the solutions will dovetail with each other once we've become reasonable about what's acceptable and what's not. To play chicken and egg is to not solve the problem.

