
The Problem With Attribution


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Steve Latham, CEO at Encore Media Metrics.

In recent months we’ve heard some noise about the problems with using multitouch attribution to measure and optimize ad spend.

Some claim attribution is flawed due to the presence of non-viewable ads in user conversion paths. Others say attribution does not prove causality and should therefore be disregarded.

My view is that these naysayers are either painting with too broad a brush or missing the canvas altogether.

Put The Big Brush Away

The universe of attribution vendors, tools and approaches is large and diverse. No broad-brush description can capture what they all do.

If the critics are referring to static attribution models offered by ad servers and site analytics platforms, such as last-touch, first-touch, U-shaped, time-based and even weighting, I would agree that these are flawed because of the presence of non-viewable ads. Including every impression and click and arbitrarily allocating credit will do more harm than good. But if they’re referring to legitimate, algorithmic attribution solutions, they clearly don’t understand how things work.

First, not all attribution tools include every impression when modeling conversion paths. In some cases, non-viewable impressions can be excluded from the data set using outputs from the ad server or a third-party viewability vendor. In the majority of cases, where impression-level viewability is not available, there are proven approaches to excluding or discounting the vast majority of non-viewable ads. Non-viewable ads and viewable, low-quality ads almost always appear at very high frequency among converters, with 50, 100 or more impressions served to retargeted users. By excluding these frequency outliers from the data set, you eliminate a very high percentage of non-viewable ads, along with most viewable ads of suspect quality.
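The outlier exclusion described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual implementation: the cap value and the idea of counting impressions per placement within a converter's path are assumptions for the example.

```python
# Hypothetical sketch of frequency-outlier exclusion: impressions from a
# placement that hit one user far too often are treated as likely
# cookie-bombed / non-viewable and dropped before modeling.
from collections import Counter

FREQUENCY_CAP = 50  # assumed threshold; real tools would tune this per campaign


def prune_outliers(path, cap=FREQUENCY_CAP):
    """path: list of placement IDs one converter was exposed to, in order.

    Returns the path with any placement whose frequency exceeds the cap
    removed entirely, leaving normal-frequency touchpoints for modeling.
    """
    counts = Counter(path)
    return [p for p in path if counts[p] <= cap]


# Example: a retargeted user bombarded 120 times by one display placement.
path = ["display_a"] * 120 + ["search", "video_b", "search"]
print(prune_outliers(path))  # the 120-impression outlier is gone
```

Real solutions are more nuanced (discounting rather than hard exclusion, campaign-specific thresholds), but the principle is the same: very high frequency is a strong proxy for low-value, often non-viewable inventory.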

Second, unlike static models, machine-learning models are designed to reward ads that contribute to conversions and discount ads that appear in the path without influencing outcomes. Because cookie bombing is inefficient, producing lots of wasted impressions of questionable value, those impressions are typically devalued by good algorithmic attribution models.

By excluding frequency outliers and using machine-learning models to allocate fractional credit, attribution can separate much of the signal from the noise, even the noise you can’t see. And while algorithmic attribution does not necessarily prove causality, a causal inference can be achieved by adding a control group. While not perfect, it’s more than sufficient for helping advertisers optimize spend.
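The control-group check mentioned above reduces to a simple lift calculation: compare conversion rates between users exposed to the campaign and a held-out control group (e.g., users shown a PSA). The function below is an illustrative sketch; the numbers are invented for the example.

```python
# Hypothetical sketch of causal inference via a control group: incremental
# lift is the relative difference between exposed and control conversion rates.
def incremental_lift(exposed_conv, exposed_n, control_conv, control_n):
    """Return relative lift of the exposed group over the control group."""
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) / control_rate


# Example: 2.0% conversion among exposed users vs. 1.6% in the control group.
print(round(incremental_lift(200, 10_000, 160, 10_000), 3))  # 0.25, i.e. ~25% lift
```

A positive lift suggests the media is driving conversions rather than merely appearing in the paths of users who would have converted anyway, which is exactly the causality objection the critics raise.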


You Missed The Entire Canvas

Complaining that attribution models are not accurate enough is like chiding Monet for being less precise than Picasso, especially when many advertisers are still painting with their fingers.

It’s easy to split hairs and poke holes in attribution, viewability, brand safety, fraud prevention, device bridging, data unification and other essential ad tech solutions. But the absence of a bulletproof solution is not a valid reason to continue relying on last century’s metrics, such as click-through rates and converting clicks. As Voltaire, Confucius and Aristotle said in their own ways, “Perfect is the enemy of good.” Ironically, so is click-based attribution.

While no one claims to have all the answers with 100% accuracy, fractional attribution modeling can improve media performance over last-click and static models. And while not every advertiser can be the next Van Gogh, they can use the tools and data that exist today to get a solid “A” in art class.

The Picture We Should Be Painting

I’m a big fan of viewability tools and causality studies, and I’m an advocate for incorporating both into attribution models. I am not a fan of throwing stones based on inaccurate or theoretical arguments.

Every campaign should use tools to identify fraud, non-viewable ads and suspect placements. The outputs from these tools should be inputs to attribution models, and every advertiser should carve out a small budget for testing. While this is an idealistic picture, it may not be too far away. As the industry matures, capabilities are integrated and advertisers, including agencies and brands, learn to use the tools, we will get closer to marketing nirvana.

In the meantime, advertisers should continue to make gradual improvements in how they serve, measure and optimize media. Even if no single step is perfect, every step counts.

Ad tech companies should remember we’re all part of an interdependent ecosystem. We need to work together to help advertisers get more from their media budgets. And we all need to have realistic expectations. From a measurement perspective, the industry will always be in catch-up mode, trying to validate the shiny new objects being created by media companies.

All that said, we can do much more today than only one year ago. We’ll continue to make progress. Advertisers will be more successful. And that will be good for everyone.

Follow Steve Latham (@stevelatham) and AdExchanger (@adexchanger) on Twitter.
