
The Attribution Error


“Data-Driven Thinking” is a column written by members of the media community and containing fresh ideas on the digital revolution in media.

Jeremy Stanley is SVP Product and Data Sciences for Collective.

As an industry, we have largely concluded that existing measurement solutions (CTR, view-through and click-through conversions) have glaring flaws. And so we have turned to independent vendors (see Forrester on Interactive Attribution) that employ sophisticated algorithmic attribution solutions to value digital advertising impressions. These solutions cater to our desire to glean definitive, actionable answers about what works from the oceans of data exhaust produced by our digital campaigns.

Yet algorithmic attribution is founded on a fatally flawed assumption: that causation (a desired outcome happened because of an advertisement) can be determined without experimentation, the classic scientific method of test and control.

No medicine is FDA-approved, and no theory is accepted by the scientific community, absent rigorous experimental validation. Why should advertising be any different?

Consider that there are two driving forces behind a consumer conversion. The first is the consumer’s inherent propensity to convert. Product fit, availability, and pricing all predispose some consumers to be far more likely to purchase a given product than others.

The second is the incremental lift in conversion propensity driven by exposure to an advertisement. This is a function of the quality of the creative, the relevance of the placement and the timing of the delivery.

To determine how much value an advertising impression created, an attribution solution must tease out the consumer’s inherent propensity to convert from the incremental lift driven by the ad impression. Algorithmic attribution solutions tackle this by identifying which impressions are correlated to future conversion events. But the operative word here is correlated – which should not be confused with caused.

By and large, algorithmic attribution solutions credit campaigns for delivering ads to individuals who were likely to convert anyway, rather than creating value by driving incremental conversions higher!

To highlight this problem, let’s consider retargeting. Suppose that an advertiser delivered at least one advertisement to every user in their retargeting list (users who previously visited their home page). Then, suppose that 10% of these users went on to purchase the advertised product.

In this simple example, it is impossible to tell what impact the advertising had. Perhaps it caused all of the conversions (after all, every user who converted saw an ad). Or perhaps it caused none of them (those users did visit the home page, so maybe they would have converted anyway). Either conclusion could be correct.
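This unidentifiability is easy to demonstrate. The sketch below (with made-up numbers; the 10% rate is from the example above) simulates the two extreme explanations and shows that they produce statistically identical campaign logs, so no algorithm operating on the logs alone can tell them apart:

```python
import random

random.seed(0)
N = 10_000  # users on the retargeting list, all shown at least one ad

def simulate(baseline_rate, ad_lift):
    """Simulate conversions when every user sees an ad.

    baseline_rate: probability a user converts without any ad
    ad_lift: extra conversion probability caused by the ad
    """
    return sum(random.random() < baseline_rate + ad_lift for _ in range(N))

# Scenario A: ads cause every conversion (0% baseline, 10% lift)
conv_a = simulate(0.00, 0.10)
# Scenario B: ads cause nothing (10% baseline, 0% lift)
conv_b = simulate(0.10, 0.00)

# Both scenarios yield roughly 10% conversion among exposed users;
# the observed log data cannot distinguish them.
print(conv_a / N, conv_b / N)
```

Because everyone was exposed, the exposed-user conversion rate is the same number under both stories. Only a held-out, unexposed group can separate them.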


Real-world scenarios are messier still. Biases arise from cookie deletion, variation in Internet usage and complex audience targeting executed across competing channels and devices. Sweeping these concerns aside and hoping that an algorithm can just ‘figure it out’ is a recipe for disaster.

Instead, the answer is to conduct rigorous A/B experiments. For a given campaign, a set of random users is held out as a control group, and their behavior is used to validate that advertising in the test group is truly generating incremental conversion or brand lift.
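As a minimal sketch of that validation step, the function below compares conversion rates between the exposed (test) and held-out (control) groups using a standard two-proportion z-test. The counts are illustrative, not from any real campaign:

```python
import math

def incremental_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Absolute conversion lift of test over control, plus a
    two-proportion z-score indicating whether the difference
    could plausibly be chance."""
    p_t = test_conv / test_n
    p_c = ctrl_conv / ctrl_n
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = (p_t - p_c) / se
    return p_t - p_c, z

# Illustrative numbers: 90,000 exposed users, 10,000 held out.
lift, z = incremental_lift(9_450, 90_000, 1_000, 10_000)
# lift is 0.5 points of incremental conversion; here z is about 1.55,
# short of the conventional 1.96 threshold, so even a half-point lift
# on 100,000 users is not yet distinguishable from noise.
```

The point of the z-score is discipline: a campaign only gets credit for lift the control group says it actually created.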

Further, through careful analysis of audience data, one can identify the ‘influenceables’ – pockets of audiences who are highly receptive to an advertising message, and will generate outsized ROI for a digital advertising campaign.

My own observation, across numerous campaigns, is that consumers with a high inherent propensity to convert tend to be the least influenceable! Many of these consumers have already made up their minds to purchase the product. Showing them yet another digital advertisement is a waste of money.
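Segment-level holdout results make this pattern concrete. The numbers below are hypothetical, but they illustrate the shape of the finding: the highest-propensity segment shows the smallest incremental lift, while a moderate-propensity segment is where the ad actually moves behavior:

```python
def uplift(test_rate, ctrl_rate):
    """Incremental conversions per exposed user in a segment."""
    return test_rate - ctrl_rate

# Hypothetical per-segment results from a holdout experiment:
# (segment, test conversion rate, control conversion rate)
segments = [
    ("recent cart abandoners", 0.120, 0.115),  # high propensity, tiny lift
    ("category browsers",      0.030, 0.018),  # moderate propensity, real lift
    ("new prospects",          0.004, 0.003),  # low propensity, small lift
]

for name, t, c in segments:
    print(f"{name}: baseline {c:.1%}, uplift {uplift(t, c):+.1%}")
```

Last-view attribution would shower credit on the cart abandoners; an uplift view says the budget belongs with the browsers.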

Yet that is precisely what many advertisers reward today: serving ads to audiences who are likely to convert anyway in order to garner credit under last-view attribution schemes. Algorithmic attribution might make this marginally better (at least credit is distributed over multiple views), but at significant expense.

Advertisers would be far better served if attribution providers invested in experimentation instead. However, I anticipate that many attribution vendors will fight this trend. The only rigorous way to experiment is to embed a control group in the ad serving decision process that is checked in real time, to ensure specific users are never shown an advertisement. This approach is radically different from the prevailing attribution strategy of “collect a lot of data and throw algorithms at it.”
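One way to embed that control group in the serving path is deterministic hashing. This is a sketch of the idea, not any vendor's implementation: hashing the user and campaign IDs (rather than drawing a random number per request) guarantees the same user lands in the same bucket on every ad request, so control users are never served, which a clean experiment requires.

```python
import hashlib

HOLDOUT_PCT = 5  # percent of users permanently held out (illustrative)

def in_control_group(user_id: str, campaign_id: str) -> bool:
    """Deterministically assign a user to the campaign's control group."""
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < HOLDOUT_PCT

def should_serve(user_id: str, campaign_id: str) -> bool:
    # Check the control group inside the ad serving decision, in real time.
    return not in_control_group(user_id, campaign_id)
```

Keying the hash on the campaign ID as well means each campaign gets an independent holdout, so a user held out of one campaign can still be served by another.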

By leveraging experimentation coupled with audience insights, savvy marketers can extract far more value from their digital advertising dollars. Those who do so now will gain significant competitive advantages.

