
It’s Key To Study Existing TV Attribution Practices


“On TV And Video” is a column exploring opportunities and challenges in advanced TV and video.

Today’s column is written by Jane Clarke, managing director and CEO at the Coalition for Innovative Media Measurement (CIMM).

Thanks to a glaring lack of standardization across the industry, TV attribution providers working with advertisers and agencies take widely varying approaches, relying on different modeling techniques and disparate data sources to reach different conclusions and recommendations for marketers.

As a result, any two TV attribution providers working on the same campaign are likely to deliver wildly different results that may drive dramatically different business decisions.

Right now, most TV attribution providers operate as black boxes to protect the “secret sauce” that might give them a leg up on competitors. I understand that desire, but with so much secrecy we don’t know which process is the most precise, which data sets represent which consumers or, ultimately, which outcomes can be trusted.

A number of factors contribute to this messy, apples-to-oranges TV attribution ecosystem. Many of the issues lie in the data being used, ranging from ad occurrence data to TV viewing exposure data and even to “outcomes” data. Others stem from the data-matching techniques and modeling approaches employed, which may differ in attribution windows, adstock assumptions, baselines, definitions of incremental sales and more.
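To make that concrete, here is a deliberately simplified sketch in Python, with made-up numbers and a toy model that does not represent any provider’s actual methodology. It shows how the same set of conversions, credited under different attribution windows and adstock decay rates, produces very different lift estimates against the same assumed baseline.

```python
# Toy illustration only: invented numbers, not any vendor's actual model.
# The same conversion data yields very different "lift" depending on two
# modeling choices -- the attribution window and the adstock decay rate.

# Hours elapsed between an ad airing and each tracked conversion
conversion_lags_hours = [2, 10, 30, 60, 100, 200]

def attributed_conversions(lags, window_hours, decay_per_hour):
    """Credit conversions inside the attribution window, discounted
    by a simple geometric adstock decay."""
    return sum(decay_per_hour ** lag for lag in lags if lag <= window_hours)

baseline = 1.5  # conversions assumed to happen anyway (hypothetical)

for window, decay in [(24, 0.99), (72, 0.99), (72, 0.95), (168, 0.90)]:
    credited = attributed_conversions(conversion_lags_hours, window, decay)
    lift = (credited - baseline) / baseline
    print(f"window={window}h, decay={decay}: credited={credited:.2f}, lift={lift:+.0%}")
```

Even in this toy setup, four reasonable-sounding parameter choices swing the estimated lift from strongly positive to negative on identical data, which is exactly the kind of divergence that needs to be understood.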

That’s why we need to study existing TV attribution approaches now: to bring transparency, learn what drives the differences in results and begin to develop best practices for attribution.

CIMM and the 4A’s Media Measurement Task Force are partnering with Janus Strategy and Insights and Sequent Partners to study how different TV data inputs affect attribution results. The goal is to begin defining best practices for better representing television in attribution models and to increase confidence in this important new area of measurement.

As part of the study, we are looking at the differences in attribution results for six national, linear television campaigns that aired in 2019, comparing ad occurrence data sources, TV exposure data sources and delivery across data providers. We’ll also glean insights into how much the campaign schedules provided by leading occurrence data sources differ; whether occurrences are currently over- or undercounted; how viewership varies across TV data sources, including set-top box data, smart TV data and combinations of both; and how those differences affect modeled lift estimates and the decisions marketers make.

CIMM members are particularly interested in bringing more transparency and establishing best practices, which can hopefully lead to greater confidence in, and reliance on, these new methods among marketers and media companies alike.

Analyzing ad occurrence data from Hive, iSpot, Kantar and Nielsen and television exposure data from 605, Alphonso, Ampersand, Comscore, iSpot, Nielsen, Samba, TVadSync, TVSquared and VideoAmp may not provide all of the answers that we are seeking. But it will be a good place to start to determine some best practices for attribution model television data inputs.


The industry will benefit from understanding what the current marketplace looks like. However, this is only the beginning. We need to focus subsequent research on outcomes data, data-matching techniques and optimal modeling approaches, along with combining TV and digital exposure data.

But let’s take our TV attribution input “baby steps” first. By looking at ad occurrences (schedules) and exposure data (ratings and delivery), we can begin to unpack what drives the differences in attribution results.

Follow CIMM (@CIMM_NEWS) and AdExchanger (@adexchanger) on Twitter.
