
Google Hasn’t Killed Attribution Modeling – It Never Really Worked To Begin With


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Nico Neumann, assistant professor and fellow, Centre for Business Analytics at Melbourne Business School.

Following Google’s announcement that it will no longer share user IDs externally, some industry voices have raised concerns that this move will harm independent multitouch attribution (MTA).

Indeed, it is hard to deny that any attribution analysis, whose goal is to determine the efficiency of different ads, will lack a big piece of the puzzle without Google’s data.

However, how useful was attribution modeling even before this happened?

The customer journey was never completely trackable

Attribution modeling requires data on every user exposure to ads before a conversion occurs.

While such touch point records in the form of web cookies used to be readily available for many digital ads in the early days of online marketing, touch point data from traditional channels, such as TV, out-of-home, radio and print, has always been harder to access.

Beacon technology for mobile phones and TV set-top boxes creates new opportunities to track individuals, but it requires complex integrations to link many different identifiers into one meaningful list. While this is technically feasible, it adds cost and is rarely done well.

In any case, recovering complete user touch points across competing vendors, which tend to avoid supporting one another, is not the only challenge for advertisers. Google is not alone in its decision to exclude cookie IDs from downloadable files in the name of privacy: Other advertising powerhouses, such as Amazon, Facebook and Twitter, don’t share any of their identifiers either.

Hence, it is fair to say that clients that leverage several of the leading online platforms will always have some key touch point data missing in their attribution models.

Use experiments for short-term effects, not correlational models

Independent of any data issues, there are good reasons to stop using attribution models, even when they are based on algorithmic estimation techniques rather than simple, gameable rules such as last-touch attribution. The problem with any algorithmic MTA model is that it still relies on correlations and curve fitting and is therefore prone to incorrect estimates of ad effects.
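To see why last-touch is called a “gameable rule,” consider a minimal sketch (with hypothetical journey data, not any vendor’s actual implementation): the rule hands all conversion credit to whichever channel appears last, regardless of what opened the journey.

```python
from collections import defaultdict

def last_touch_attribution(journeys):
    """Credit each conversion entirely to the last ad channel seen.

    journeys: list of (touchpoints, converted) pairs, where touchpoints
    is the ordered list of channels one user was exposed to.
    """
    credit = defaultdict(float)
    for touchpoints, converted in journeys:
        if converted and touchpoints:
            credit[touchpoints[-1]] += 1.0
    return dict(credit)

# Hypothetical journeys: display and social open them, search closes them.
journeys = [
    (["display", "social", "search"], True),
    (["display", "search"], True),
    (["social"], False),
]
print(last_touch_attribution(journeys))  # search gets all the credit
```

A bottom-of-funnel channel that merely intercepts users already on their way to converting will look like the hero under this rule, which is exactly why it is easy to game.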

Research by Google, Facebook and Northwestern University has demonstrated that only well-designed randomized experiments provide accurate insights. Netflix’s Kelly Uphoff shared similar findings at Programmatic IO in 2016: Attribution models will not reveal true incremental lift because they do not support causal inference.

Programmatic targeting makes randomization in experiments difficult

To run a proper experiment, one needs two identical groups: one sees no ads, while the other is exposed to them. Comparing the two groups then reveals the impact of the ads. For a robust result, prospective customers must be allocated to the groups at random. In other words, people in each group should have an equal likelihood of converting before seeing any ads.
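The procedure above can be sketched in a few lines (all numbers below are hypothetical, for illustration only): randomize users into the two groups first, then compare conversion rates to read off the incremental lift.

```python
import random

def assign_groups(user_ids, seed=42):
    """Randomly split users into exposed and control groups so that both
    have the same expected conversion probability before any ads run."""
    rng = random.Random(seed)
    exposed, control = [], []
    for uid in user_ids:
        (exposed if rng.random() < 0.5 else control).append(uid)
    return exposed, control

def incremental_lift(conv_exposed, n_exposed, conv_control, n_control):
    """Difference in conversion rates between exposed and control groups."""
    return conv_exposed / n_exposed - conv_control / n_control

exposed, control = assign_groups(range(10_000))
# Hypothetical outcomes: 2.4% conversion with ads vs. 2.0% without.
print(incremental_lift(120, 5_000, 100, 5_000))  # ~0.004, i.e. 0.4 points of lift
```

The key property is that randomization happens before anyone is targeted; that is precisely what DSP targeting, discussed next, tends to break.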

This is the big challenge in programmatic advertising, as demand-side platforms’ (DSPs’) algorithmic targeting makes it difficult to allocate groups with similar conversion probabilities. Most DSPs are programmed to find the customers most likely to convert. Therefore, any randomization of people needs to occur before targeting happens in order to have two comparable groups for experiments. Unfortunately, only a few platforms allow this, and only in limited markets.

Long-term branding effects represent the biggest measurement challenge

Now we must consider that both MTA and experiments measure the short-term effects of advertising, typically over time horizons of a few weeks. The reason is that most web cookies do not persist long enough for longer touch point analyses, while the opportunity cost of running no ads for half of your prospective customers grows the longer an experiment stays live.

However, many ad campaigns have the goal of building long-term brand equity. To measure the long-term impact of different ads, we must fall back on complex econometric models, even though these have many possible shortcomings because they again rely on curve fitting. There are some mathematical tricks to correct for known issues, but this remains an area of active research.
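One standard building block of such econometric models is the adstock transformation, which carries a decaying share of past ad spend into later periods so the model can fit lingering effects. A minimal sketch (the decay rate here is an arbitrary illustration, not an estimated value):

```python
def adstock(spend, decay=0.5):
    """Geometric adstock: each period keeps a fraction `decay` of the
    previous period's accumulated ad pressure, modeling carryover effects."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

# A burst of spend keeps exerting (decaying) pressure in later weeks.
print(adstock([100, 0, 0, 50]))  # [100.0, 50.0, 25.0, 62.5]
```

In practice the decay rate is itself estimated from data, which is one of the curve-fitting steps where such models can go wrong.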

While advances in computing power and greater data availability continuously improve model performance, the safest way to gain insight into the long-term impact of ads is to combine research methods: econometric marketing-mix modeling and non-mathematical techniques. The latter can be as simple as using Google Trends as a proxy for brand awareness and brand recall over time.

Alternatively, one can pay for brand tracker surveys that can be tailored to specific questions and include any brand metric of interest. Just be aware that the sample for any survey-based measurement needs to be large enough to avoid random or biased results. Free sample-size estimators provide some guidance here.
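What those free estimators compute is usually the textbook formula for estimating a proportion, n = z² · p(1 − p) / e², with p = 0.5 as the worst case. A minimal sketch:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, using p = 0.5 as the
    worst case (maximum variance) and z = 1.96 for 95% confidence.
    """
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin_of_error ** 2)

# 95% confidence, +/- 3 percentage points of error:
print(sample_size(0.03))  # 1068
```

This is why brand trackers commonly quote samples of roughly 1,000 respondents per wave; note the formula assumes a simple random sample, so biased panels need larger (and better-recruited) samples.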

Follow Melbourne Business School (@MelbBSchool) and AdExchanger (@adexchanger) on Twitter.
