
What Weather Prediction Tells Us About Programmatic


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jay Habegger, co-founder and CEO at ownerIQ.

When I wrote this column last month, a major snowstorm was bearing down on New England, where I live. The storm could have hit with either a wallop or a whimper. Which was more likely? That depended on which weather model you believed.

Sound familiar? In some ways, there are parallels between predicting the weather and programmatic advertising.

In the US, we have the North American Mesoscale (NAM) Forecast System and the Global Forecast System (GFS). The National Oceanic and Atmospheric Administration (NOAA) also makes the results of multiple weather models available free of charge. Canada produces weather forecasts using its own model, while a coalition of 34 European countries operates the ECMWF model. There are hundreds of smaller-scale and experimental weather models in use around the globe.

Each weather model has different strengths and weaknesses. Some are designed for short-range forecast accuracy, while others take the long view. Some models are best applied to specific forecasting problems or areas, while others focus on a broader picture.

All have access to the same raw weather data at massive scale, yet they produce different forecasting results with sometimes significant variability. Tremendous resources are deployed to interpret the data. NOAA, for example, claims a staff of 6,773 scientists and engineers. Clearly lots of shared data, Ph.D.s, computing resources and money are required to interpret and use raw weather data.

The takeaway is that, despite the current worship of raw data among marketers and investors, the ability to interpret data and transform it into useful, predictive information is the more important trick, and perhaps the one that adds the most value.

The corollary conclusion is that interpreting data and creating useful information isn’t easy. The fundamental media problem is sorting through billions of ad placement opportunities appearing in front of hundreds of millions of users and using a limited budget to find the combination of opportunities, users and ad frequency that produces the best result for the advertiser.

Only in extremely limited cases is a single signal, and a rigid segment built on that signal, sufficient to narrow the possible opportunity-user-frequency combinations to a manageable number and produce great outcomes. The technique works only in the presence of both a single, highly predictive signal and a very small user population.

In practice, that scenario describes just one common tactic: retargeting. It is the only situation where this relatively simplistic technique works reliably, which explains why site retargeting was the first widely adopted programmatic use case.
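To make that concrete, here is a rough sketch, in Python, of what that single-signal filter amounts to in the retargeting case. The field names and the visitor set are purely illustrative placeholders, not any particular platform’s data or API.

    # Illustrative sketch of a single-signal filter: classic site retargeting.
    # The field names and data here are hypothetical placeholders.

    site_visitors = {"u123", "u456"}  # small, highly predictive audience: prior site visitors

    def is_retargeting_candidate(opportunity, visitors):
        """Keep an ad opportunity only if the user already visited the advertiser's site."""
        return opportunity["user_id"] in visitors

    opportunities = [
        {"user_id": "u123", "url": "news.example.com"},
        {"user_id": "u999", "url": "blog.example.com"},
    ]

    candidates = [o for o in opportunities if is_retargeting_candidate(o, site_visitors)]
    print(candidates)  # only the prior site visitor survives the filter

The filter works precisely because the signal (a prior site visit) is highly predictive and the audience is small; outside that narrow case, a single rule like this discards far too much and predicts far too little.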


In all other cases, the simplistic technique fails. Data alone, of any type or scale, will not deliver the marketing outcome because the signal by itself is not predictive enough to overcome the noise inherent in the programmatic landscape.

Noise comes in many forms. Fraud and brand safety are the obvious ones, but there are many others: publishers that load pages with far too many ad units; publishers whose content is so eye-catching that it overwhelms any ads placed on the page; users who seem to have nothing better to do than surf the web and explore everything, making their observed behaviors of marginal importance; and users who share a family computer with a teenager and therefore present a confusing bag of behaviors.

Marketers who bought into the idea of raw data being the most important or only determining factor in media placement are now discovering this to their dismay. A DSP software license combined with data from third-party aggregators, or even just with large quantities of first-party data, cannot produce great results in the absence of models that interpret that data in the context of the specific marketing problems to be solved.

Just as in weather forecasting, marketers need programmatic forecasting models that score each of the billions of possible opportunity-user-frequency combinations against each business use case and predict what will happen if the advertiser’s media is placed in a given opportunity.
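As a loose illustration of that idea, the sketch below scores hypothetical opportunity-user-frequency combinations with a stand-in model and keeps only the best-scoring ones. The predict_outcome() function and all of its field names are invented placeholders for whatever campaign-specific model a marketer actually trains, not any vendor’s real API.

    # Illustrative sketch: score opportunity-user-frequency combinations with a
    # campaign-specific model and bid only on the best-scoring ones.
    # predict_outcome() and all field names are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Opportunity:
        user_id: str
        placement: str
        prior_frequency: int  # how often this user has already seen the ad

    def predict_outcome(opp):
        """Stand-in for a trained model: predicted probability that this
        placement drives the advertiser's desired outcome."""
        base = 0.02 if opp.placement == "contextual_match" else 0.005
        fatigue = max(0.0, 1.0 - 0.2 * opp.prior_frequency)  # diminishing returns with frequency
        return base * fatigue

    def select_bids(opportunities, max_bids):
        """Rank all combinations by predicted outcome and keep what the budget allows."""
        return sorted(opportunities, key=predict_outcome, reverse=True)[:max_bids]

    opps = [
        Opportunity("u1", "contextual_match", 0),
        Opportunity("u2", "run_of_network", 1),
        Opportunity("u1", "contextual_match", 4),
    ]
    print(select_bids(opps, max_bids=2))

The point is not the toy arithmetic but the structure: the raw data feeds a model, and it is the model’s scoring, tuned to a specific business use case, that decides which of the billions of combinations are worth buying.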

And, it turns out, the quality of these models and how they are built add more value to the marketer than reliance on raw data alone. Recall that in weather forecasting everybody has access to the same data, yet the predictive results are anything but the same.

Data isn’t unimportant, to be sure. False signals will mislead any model. Some signals are more predictive than others. Raw data inputs do matter.

But, when it comes to programmatic excellence, data isn’t the only thing that matters. Raw data deployed without a model to sort through the billions of opportunity-user-frequency combinations is really just a baby step up from the often-derided spray-and-pray techniques. Models matter, too, and marketers should be as focused on how their data is going to be interpreted and used as they are on the data inputs.

Oh, and that snowstorm? Whimper. The Canadian model got it right. NOAA got it wrong. Those who cancelled their vacations expecting the wallop lost out.

Same data, different prediction, different outcome. The model matters.

Follow ownerIQ (@ownerIQ) and AdExchanger (@adexchanger) on Twitter.
