
The Truth About AI In Marketing Measurement: What Works, What Doesn’t And What It Costs You

Michael Kaminsky, Co-Founder & Co-CEO, Recast

The world of marketing measurement is buzzing about AI. But when it comes to complex techniques like media mix modeling (MMM), the field is awash with false promises about the benefits that these new technologies offer. This creates enormous risks for enterprise marketers who regularly base multimillion-dollar decisions on such models.

The hard truth is that AI tools, especially LLMs, produce confident-sounding but often wrong statistical analyses that can lead to poor budget allocation decisions.

LLMs aren’t designed to solve causal inference problems that connect real-world causes to effects. And since that’s the fundamental goal of marketing measurement, it means LLM-type models struggle to produce actionable recommendations that consistently improve business performance.

Even worse, the hype around AI creates a dangerous distraction from the only question that truly matters in MMM: Is this model helping us invest our media budget in ways that actually yield profit?  

Still, this doesn’t mean AI has no place in marketing measurement. It just means brands need to be sure that they’re selecting the right tool for the job at hand – all while maintaining a healthy skepticism of any vendors who claim AI magic will solve fundamental problems.

Why “AI-powered” measurement can be dangerous

The fundamental purpose of media mix modeling should be simple: to help businesses drive more profit through better marketing decisions. Yet, historically, MMMs have failed to deliver on this promise.  

Dubious AI claims have simply repackaged MMM into a modern black-box problem. Many vendors now use “AI-powered” as a marketing term to obscure their methodologies and avoid critical model validation techniques. These models have replaced the MMM consultant, producing seemingly confident but fundamentally wrong statistical analyses that are never validated and can’t drive more profit for the business.

This creates enormous risk for marketing teams. Consider what this might look like for a household brand like Alaska Airlines, which has started embracing an open-source MMM solution. With a likely nine-figure marketing budget, small forecast errors in their model can lead to multimillion-dollar budget misallocations. Unvalidated tooling compounds these errors and, perhaps worse, allows bad statistical analyses to hide in plain sight.

The real role of AI in measurement

If we define AI more broadly to include all machine learning techniques, like Hamiltonian Monte Carlo (HMC), AI has been a part of MMM for years.  


However, while this brand of AI can accelerate model estimation and enhance output analysis, it cannot magically solve attribution. Large language models like ChatGPT simply can’t perform causal inference effectively. But they can still be useful for MMM.

Specifically, AI excels at peripheral modeling and analysis tasks, such as:

  • Summarizing model outputs and supporting analysis
  • Explaining model documentation so teams understand underlying assumptions
  • Flagging data anomalies that warrant investigation

While AI can’t replace the core mechanics of causal modeling, it can make MMM workflows faster, clearer and more accessible to the teams who use them.
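To make the anomaly-flagging task above concrete, here is a minimal sketch of the kind of check an AI-assisted workflow might automate. It uses a simple rolling z-score; the window size, threshold and sample data are illustrative assumptions, not a recommended standard.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=8, threshold=3.0):
    """Return indices of points that deviate sharply from the trailing window.

    A point is flagged when it sits more than `threshold` standard
    deviations from the mean of the preceding `window` observations.
    """
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical weekly spend data with one obvious reporting glitch at index 10
spend = [100, 102, 98, 101, 99, 103, 100, 97, 102, 99, 480, 101]
print(flag_anomalies(spend))
```

A flagged index isn’t a verdict; it’s a prompt for a human to investigate whether the spike is a tracking error, a data pipeline bug or a genuine event the model needs to know about.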

How to choose a measurement solution

With all the hype around AI measurement solutions, how can you cut through the noise to find a tool that’s statistically rigorous, trustworthy and capable of driving profit?

First, choose an internal model validation framework that operates independently of any vendor promises, because even AI-powered models require validation to confirm that they’ve identified true causal signal.

This framework should focus on:

Experimentation: Set aside a specific experimentation budget for tests that are meant to validate (and calibrate) the output of your MMM.

Forecast reconciliation: Run consistent forecasts with your MMM, note the expected outcomes and reconcile them with actual business performance. Consistent and significant forecast misses should be a red flag that your model has failed to identify a true causal signal.

Model quality checks: Demand that your vendor run (and report on) the results of out-of-sample forecast accuracy checks, parameter recovery exercises and model stability checks. These will help consistently confirm the quality and forecasting ability of your model.
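The forecast-reconciliation and out-of-sample checks above can be reduced to a simple holdout comparison. Here is a minimal sketch, assuming you can pull your MMM’s forecasts and actual revenue for the same periods; the figures and the 15% tolerance are illustrative assumptions, not an industry benchmark.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error between actuals and model forecasts."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errors) / len(errors)

# Out-of-sample check: these forecasts were produced before the weeks closed,
# so a large, persistent gap suggests the model hasn't found a causal signal.
actual_revenue = [1_200_000, 1_150_000, 1_300_000, 1_250_000]
mmm_forecast   = [1_180_000, 1_220_000, 1_260_000, 1_400_000]

error = mape(actual_revenue, mmm_forecast)
print(f"Out-of-sample MAPE: {error:.1%}")
if error > 0.15:  # illustrative tolerance, tune to your business
    print("Red flag: consistent forecast misses; revisit the model.")
```

The point isn’t the specific metric: it’s that the comparison runs on held-out periods the model never saw, so the vendor can’t tune their way to a good score.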

Remember that AI model-building or analysis frameworks can be valuable, but their bells and whistles shouldn’t distract from what really matters: using MMM to help your business drive profit. Indeed, every model – AI-powered or not – must prove it can identify which investments generate incremental revenue and which are just capturing demand that would have happened anyway.

Trust through internal validation, not hype

Marketing measurement vendors making grand promises about AI while continuing to hide their methodologies aren’t selling anything but a costly mirage. 

Real progress in marketing measurement doesn’t come from AI magic; it comes from changes to your marketing program that demonstrably improve your marketing ROI. When AI enhances that process (through better analysis tools or faster computation, for example), it adds genuine value. When it obscures what’s actually happening, it’s just expensive measurement theater.

The future belongs to marketing teams who see through the hype and focus relentlessly on what matters: Can this model help us grow our business? Answer that question with actual evidence, and you’ll be ahead of those busy chasing a perfect attribution solution.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Follow Recast and AdExchanger on LinkedIn.
