
Applying The Scientific Method To TV Attribution


“On TV & Video” is a column exploring opportunities and challenges in advanced TV and video.

Today’s column is written by Jason Fairchild, CEO and co-founder of tvScientific.

Most of us probably associate the scientific method with high school science class. But really, isn’t this process exactly what advertising should follow, too?

Arguably, the scientific method has fueled performance digital advertising from its infancy in 1998 to over $200B in revenue per year in 2022.

Meanwhile, TV has historically been stuck in the 1950s. The market size of TV advertising compared to digital says it all. Where digital advertising claims nine million advertisers, 85% of TV’s ~$72B industry is concentrated among just 500 national brands.

With over half of all TV now consumed via streaming services, measuring it in a granular way is possible for the first time. This allows advertisers to deterministically connect the dots between TV ads viewed and business outcomes like website visits and sales.

Given this newly available feedback loop, here’s how advertisers can actually apply the scientific method to TV advertising.

1. All experiments start with seeking an answer to a question. In advertising, the question should be “How do I drive my target KPIs?” Are you trying to drive website visitors? Sales? App installs? ROI? Be specific.

2. Develop hypotheses around your target customer and make predictions about what will resonate with this target audience. Consider the following:

    • Demographics
    • Program viewing habits
    • Geographical regions

3. Set up test campaigns, including:

    • Line items for each of your hypotheses.
    • A “follow the data” line item to experiment across as many apps, geographies, and dayparts as possible. This should be at least 15% of the test budget.
    • Define your attribution window. This should be a data-driven process. Different purchases have different consideration periods, and thus warrant different attribution windows. Determine and evaluate the exposure-to-conversion window for your specific category. If you hypothesize seven days, compare the actual data against that seven-day hypothesis and adjust as the data dictates (see the sketch after this list).
    • Set your budgets. Most digital platforms, like Google and Facebook, have no minimum spend commitments, but it is important to avoid the “false read” problem, which happens when you aren’t driving enough test data to draw reliable conclusions.
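As a rough illustration of the window determination described above, here is a minimal sketch in Python. The field names, sample timestamps, and the 95th-percentile cutoff are assumptions for illustration, not a prescribed method; the idea is simply to measure exposure-to-conversion lags and let the observed distribution suggest the window.

```python
# Sketch: estimate an attribution window from matched exposure/conversion
# timestamps. Field names and the 95th-percentile cutoff are illustrative
# assumptions, not a standard.
from datetime import datetime
from statistics import quantiles

# Hypothetical matched pairs: (ad exposure time, conversion time)
matched_events = [
    (datetime(2022, 5, 1, 20, 15), datetime(2022, 5, 4, 9, 30)),
    (datetime(2022, 5, 2, 21, 0), datetime(2022, 5, 2, 22, 45)),
    (datetime(2022, 5, 3, 19, 40), datetime(2022, 5, 9, 11, 5)),
]

# Exposure-to-conversion lag in days for each matched pair
lags_days = [
    (converted - exposed).total_seconds() / 86400
    for exposed, converted in matched_events
]

# Take the lag that captures ~95% of observed conversions as the candidate
# attribution window; re-evaluate as more conversion data arrives.
p95 = quantiles(lags_days, n=20)[-1]  # 95th percentile
print(f"Candidate attribution window: {p95:.1f} days")
```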

4. Launch your test campaigns and watch them in action.


5. Evaluate results:

    • Within 24-48 hours, you should see enough conversion events to start to understand which of your hypotheses are working and which are not.  
    • When your tests are “cohort complete” (meaning fully through the attribution window time horizon), evaluate results and determine what works and what doesn’t. 
    • Evaluate the attribution window. This requires that you see the time/date stamp of each ad impression and compare it to the outcome event. Evaluate the conversion data all the way through 45 days or more; it will typically show a sharp drop-off, and where that curve flattens should inform the most accurate attribution window.
    • Evaluate ROAS:
      • This is a simple calculation of sales over ad spend.  
      • Make sure you fully understand the media cost of your platform provider versus what they charge, as this will directly impact your ROAS. If your provider buys inventory for 20 and sells it to you for 40, this “arbitrage” approach is costing you money that could be spent on driving ROAS.
    • Validate results with incrementality testing. Done correctly, this will isolate the incremental value of any given marketing channel, including TV (see the sketch after this list).
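To make the last two checks concrete, here is a minimal sketch of the ROAS arithmetic and a simple exposed-versus-holdout lift calculation. All figures and the holdout design below are illustrative assumptions, not results from any campaign.

```python
# Sketch: ROAS and a basic incrementality (holdout) lift calculation.
# All figures below are made-up placeholders for illustration only.

# ROAS: attributed sales divided by ad spend
attributed_sales = 250_000.0   # revenue credited to the TV campaign ($)
ad_spend = 100_000.0           # what you actually paid ($)
roas = attributed_sales / ad_spend
print(f"ROAS: {roas:.2f}")     # 2.50 means $2.50 back per $1 spent

# Incrementality: compare the conversion rate of an exposed group against
# a randomly held-out, unexposed group.
exposed_users, exposed_conversions = 500_000, 6_000
holdout_users, holdout_conversions = 500_000, 4_500

exposed_rate = exposed_conversions / exposed_users
holdout_rate = holdout_conversions / holdout_users

lift = (exposed_rate - holdout_rate) / holdout_rate
incremental_conversions = (exposed_rate - holdout_rate) * exposed_users
print(f"Lift: {lift:.1%}, incremental conversions: {incremental_conversions:.0f}")
```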

6. Use the data you’ve gathered to iterate on your approach and optimize future campaigns.

    • Shift budgets from underperforming targeting dimensions to those that are performing as soon as the winning combination is clear.
    • Make sure you have 100% transparency into actual media costs. One best practice is to renegotiate CPMs down on the best-performing media in return for a spend commitment.

The scientific method requires absolute transparency

Nine million advertisers participate in search and social because they can measure ROI. The data is simple to understand: ad-to-click-to-sale. However, the models don’t take multi-touch attribution into account, for reasons that benefit the last-click platforms. And Facebook and Google limit the data they share with advertisers due to commercial motivations. But we can be sure that their own optimization algorithms are totally reliant on data transparency and measurement.

TV is fundamentally different and offers the industry a chance to get it right. Because we can’t click on TV, we can’t understand or trust TV attribution unless we can see the raw exposure-to-outcome data for ourselves. We have to see, measure, and verify the timestamp of the TV-exposure-to-outcome journey for all outcomes, including last-click channels. Only when we can do that can we apply the scientific method and scale TV on a data-driven, ROI-positive basis.
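As a rough sketch of what seeing the raw exposure-to-outcome data could look like, the snippet below joins impression timestamps to outcome events per household and keeps only outcomes that fall inside the attribution window. The identifiers, log shapes, and seven-day window are assumptions for illustration, not any platform’s actual schema.

```python
# Sketch: build verifiable exposure-to-outcome records by joining impression
# and outcome logs on a shared household identifier. Log shapes, IDs, and the
# seven-day window are hypothetical.
from datetime import datetime, timedelta

impressions = [  # (household_id, ad exposure timestamp)
    ("hh_001", datetime(2022, 5, 1, 20, 15)),
    ("hh_002", datetime(2022, 5, 2, 21, 0)),
]
outcomes = [     # (household_id, outcome timestamp, outcome type)
    ("hh_001", datetime(2022, 5, 3, 9, 30), "site_visit"),
    ("hh_003", datetime(2022, 5, 4, 10, 0), "purchase"),
]

ATTRIBUTION_WINDOW = timedelta(days=7)

# Credit an outcome to an exposure only when the exposure precedes it
# and the outcome lands inside the attribution window.
journeys = [
    (hh, exposed, occurred, kind)
    for hh, exposed in impressions
    for hh2, occurred, kind in outcomes
    if hh == hh2 and exposed <= occurred <= exposed + ATTRIBUTION_WINDOW
]
print(journeys)  # timestamped exposure-to-outcome records you can verify
```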

The good news is that applying the scientific method to TV buying is 100% possible on platforms and technologies that exist today. And the upside is massive. There are 122M U.S. TV living rooms waiting to be transformed into the next massive growth channel. The challenge is that it’s going to require new thinking around how TV attribution is different from last-click digital channels, and total data transparency within ad platforms.

Follow tvScientific on LinkedIn and AdExchanger (@adexchanger) on Twitter.

