The world of marketing measurement is buzzing about AI. But when it comes to complex techniques like media mix modeling (MMM), the field is awash with false promises about the benefits that these new technologies offer. This creates enormous risks for enterprise marketers who regularly base multimillion-dollar decisions on such models.
The hard truth is that AI tools, especially LLMs, produce confident-sounding but often incorrect statistical analyses that can lead to poor budget allocation decisions.
LLMs aren’t designed for causal inference, the task of connecting real-world causes to their effects. And since causal inference is the fundamental goal of marketing measurement, LLM-based models struggle to produce actionable recommendations that consistently improve business performance.
Even worse, the hype around AI creates a dangerous distraction from the only question that truly matters in MMM: Is this model helping us invest our media budget in ways that actually yield profit?
Still, this doesn’t mean AI has no place in marketing measurement. It just means brands need to be sure that they’re selecting the right tool for the job at hand – all while maintaining a healthy skepticism of any vendors who claim AI magic will solve fundamental problems.
Why “AI-powered” measurement can be dangerous
The fundamental purpose of media mix modeling should be simple: to help businesses drive more profit through better marketing decisions. Yet, historically, MMMs have failed to deliver on this promise.
Dubious AI claims have simply repackaged MMM into a modern black-box problem. Many vendors now use “AI-powered” as a marketing term to obscure their methodologies and sidestep critical model validation. These models have replaced the MMM consultant, producing seemingly confident but fundamentally wrong statistical analyses that are never validated and can’t drive more profit for the business.
This creates enormous risk for marketing teams. Consider what this might look like for a household brand like Alaska Airlines, which has started embracing an open-source MMM solution. Given the company’s likely nine-figure marketing budget, even small forecast errors in its model can lead to multimillion-dollar budget misallocations. Unvalidated tooling compounds these errors and, perhaps worse, allows bad statistical analyses to hide in plain sight.
The real role of AI in measurement
If we define AI more broadly to include the machine learning and computational statistics toolbox, such as Hamiltonian Monte Carlo (HMC) sampling, then AI has been a part of MMM for years.
However, while this brand of AI can accelerate model estimation and enhance output analysis, it cannot magically solve attribution. Large language models like ChatGPT simply can’t perform causal inference effectively. But they can still be useful in an MMM workflow.
Specifically, AI excels at peripheral modeling and analysis tasks, such as:
- Summarizing model outputs and supporting analysis
- Explaining model documentation so teams understand underlying assumptions
- Flagging data anomalies that warrant investigation
While AI can’t replace the core mechanics of causal modeling, it can make MMM workflows faster, clearer and more accessible to the teams who use them.
How to choose a measurement solution
With all the hype around AI measurement solutions, how can you cut through the noise to find a tool that’s statistically rigorous, trustworthy and capable of driving profit?
First, choose an internal model validation framework that operates independently of any vendor promises, because even AI-powered models require validation to confirm that they’ve identified true causal signal.
This framework should focus on:
- Experimentation: Set aside a dedicated experimentation budget for tests designed to validate (and calibrate) the output of your MMM.
- Forecast reconciliation: Run consistent forecasts with your MMM, record the expected outcomes and reconcile them with actual business performance. Consistent, significant forecast misses are a red flag that your model has failed to identify a true causal signal.
- Model quality checks: Demand that your vendor run (and report on) out-of-sample forecast accuracy checks, parameter recovery exercises and model stability checks. These will help consistently confirm the quality and forecasting ability of your model.
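To make the out-of-sample and forecast-reconciliation checks above concrete, here is a minimal sketch of a rolling holdout evaluation. Everything in it is hypothetical: the revenue figures are made up, and a trivial “last observed value” forecaster stands in for whatever MMM your vendor provides. The point is the discipline, not the model: hold out data the model never saw, forecast it, and measure the miss.

```python
# Hypothetical sketch of a rolling out-of-sample forecast check.
# A real check would call your MMM's fit/predict interface in place
# of the naive forecaster below.

def rolling_out_of_sample_mape(series, initial_train, horizon):
    """Walk forward through `series`, forecasting `horizon` steps at a
    time from each training window, and return the mean absolute
    percentage error (MAPE) across all holdout points."""
    errors = []
    start = initial_train
    while start + horizon <= len(series):
        train = series[:start]                      # data the "model" may see
        actuals = series[start:start + horizon]     # held-out data
        forecast = [train[-1]] * horizon            # naive stand-in for the MMM
        errors.extend(abs(a - f) / abs(a) for a, f in zip(actuals, forecast))
        start += horizon
    return sum(errors) / len(errors)

weekly_revenue = [100, 104, 101, 110, 108, 115, 112, 120]  # made-up data
mape = rolling_out_of_sample_mape(weekly_revenue, initial_train=4, horizon=2)
print(f"Out-of-sample MAPE: {mape:.1%}")
```

Tracked over time, a number like this is what turns “trust the model” into evidence: if out-of-sample error stays consistently large, the model has not found a causal signal worth budgeting against.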
Remember that AI model-building or analysis frameworks can be valuable, but their bells and whistles shouldn’t distract from what really matters: using MMM to help your business drive profit. Indeed, every model – AI-powered or not – must prove it can identify which investments generate incremental revenue and which are just capturing demand that would have happened anyway.
Trust through internal validation, not hype
Marketing measurement vendors making grand promises about AI while continuing to hide their methodologies aren’t selling anything but a costly mirage.
Real progress in marketing measurement doesn’t come from AI magic; it comes from changes to your marketing program that demonstrably improve your marketing ROI. When AI enhances that process (through better analysis tools or faster computation, for example), it adds genuine value. When it obscures what’s actually happening, it’s just expensive measurement theater.
The future belongs to marketing teams who see through the hype and focus relentlessly on what matters: Can this model help us grow our business? Answer that question with actual evidence, and you’ll be ahead of those busy chasing a perfect attribution solution.
“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
