
The Facebook Video Metric Mess: Another Example Of Ad Tech’s Broken Telephone Problem


“On TV And Video” is a column exploring opportunities and challenges in programmatic TV and video.

Today’s column is written by Melinda Staros, senior manager of research and insights at Sharethrough.

Facebook made major headlines recently when it came clean about its “serious” miscalculation of a key video metric.

For two years, when calculating the average amount of time users spent watching video ads, Facebook only included people who watched a video for three seconds or longer, instead of averaging across all sessions.

That inflated the native video metrics and spurred a furious debate about the nature of the wrongdoing. This confusion in itself points to a much deeper problem underlying our industry’s approach to analytics, especially as we try to decipher the impact of new formats with unique challenges, such as native video that plays automatically in feeds.

There’s a dizzying number of metrics available but no concrete strategy for how to use them. Too many things get lost in translation between the work being done and the advertisers paying for it, every day.

It means we’re all stuck playing a lousy game of broken telephone.

What Happened? A Closer Look

Let’s say a 30-second video was watched 100 times. Ninety people scrolled past it and stopped watching after one or two seconds, amounting to 180 seconds of time watched. The 10 people who did watch longer than three seconds combined to watch 250 seconds. Together, that equals 430 seconds of total time watched.

How do we calculate average view time? It depends. An advertiser who’s purchased three-second views may want to know the average length of a view they paid for (advertisers commonly buy three-, five- or 10-second “views” and don’t pay if someone stops watching before that point). Another factor is how advertisers classify the large bucket of time that comes from videos briefly autoplaying as someone scrolls through a feed, time that doesn’t count as a paid view.

It breaks out like this:


Facebook’s miscalculation:

Total view time (430 seconds) / number of three-second views (10) = 43 seconds

Facebook’s solution:

Total view time (430 seconds) / total number of views (100) = 4.3 seconds

The equation an advertiser buying three-second views would need:

Total amount of time paid for (250 seconds) / total number of purchased views (10) = 25 seconds

All three answers can be considered technically correct (in Facebook’s miscalculation, the advertiser did get 43 seconds of watch time for each view it paid for), but they are not interchangeable without context.
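To make that concrete, here is a minimal sketch in Python of the three calculations, using the numbers from the example above. The per-session durations are assumed to be uniform for illustration; the example only gives the totals (180 seconds across 90 scroll-bys, 250 seconds across 10 paid views).

```python
# Minimal sketch: the same raw watch-time data yields three different
# "average view time" figures depending on the denominator.
# Per-session durations assumed uniform; only the totals come from the example.

scroll_bys = [2] * 90    # sessions that never reached the 3-second paid threshold
paid_views = [25] * 10   # sessions that counted as paid three-second views
all_sessions = scroll_bys + paid_views

total_time = sum(all_sessions)   # 430 seconds of watch time overall
paid_time = sum(paid_views)      # 250 seconds attributable to paid views

# Facebook's original, inflated metric: all watch time / paid views only
miscalculated_avg = total_time / len(paid_views)    # 43.0 seconds

# Facebook's corrected metric: all watch time / all sessions
corrected_avg = total_time / len(all_sessions)      # 4.3 seconds

# What a buyer of three-second views actually paid for
paid_view_avg = paid_time / len(paid_views)         # 25.0 seconds

print(miscalculated_avg, corrected_avg, paid_view_avg)   # 43.0 4.3 25.0
```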

You then have different internal pressures for how data should be used. A marketer will gravitate toward the highest number – 43 seconds – but leave out important details for simplicity. A client needs data that reflects the reality of what they paid for, which in this case would be 25 seconds. What you get are fuzzy numbers too often presented to the wrong audiences in the wrong way.

The Riddle Of Standardization

One way to avoid this problem is standardization. Standardized metrics keep the ad industry running smoothly. They simplify an increasingly complicated common language and help benchmark performance across the industry. They also help clients compare vendors, ad types, placements and content to measure their marketing investments. Until we get this right, new mediums, such as native video, will only make this issue more confounding.

Given the number of factors that need to be standardized and how custom each use case is, standardization is difficult and transparency can fall by the wayside.

When measurement is outsourced to third-party data vendors, the context for what makes sense is lost. Simple ads can return vastly inflated average read times that have little to do with content quality and reflect only how long a browser window was open. But nobody questions it.

The real solution lies not just in standardizing how metrics are calculated, but also in making those calculations transparent. Before the metrics themselves are standardized, the industry should first standardize the process used to create and share them.

The first step is to establish standard beacons for data collection and explicit naming conventions that keep the calculation method front and center.
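As a purely hypothetical illustration (the field names below are invented, not an existing standard), a reported metric could carry its calculation method in its name, so a buyer can see at a glance which numerator and denominator produced each average:

```python
# Hypothetical example only: metric names that keep the calculation method
# front and center, instead of a bare "average view time".
campaign_report = {
    "video_length_sec": 30,
    "sessions_total": 100,
    "views_paid_3s": 10,
    "watch_time_total_sec": 430,
    "watch_time_paid_views_sec": 250,
    # Derived metrics spell out exactly what was divided by what.
    "avg_watch_time_per_session_sec": 430 / 100,     # 4.3
    "avg_watch_time_per_paid_view_sec": 250 / 10,    # 25.0
}
```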

There also needs to be a set of common standards for how data is shared with advertisers. Advertisers need to fully understand how their success is being measured and what impacts it, with metrics customized for each campaign based on their needs.

Not all networks, ad types and content types are created equal – the context in which these metrics are made and measured changes everything. Without the requisite transparency to explain this to the industry, we risk perpetuating a situation where no one is on the same page.

Follow Sharethrough (@sharethrough) and AdExchanger (@adexchanger) on Twitter.
