
Everything You Know About Frequency Is Wrong

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Andrew Shebbeare, founding partner and global innovation director at Essence.

It’s a slightly sensationalist headline, I admit. It’s been a while since I contributed to this series and I figured I might need to grab some attention.

This isn’t complete link bait though. A year ago I wrote that frequency control was “RTB’s unsung hero.” I still believe this is true. One of the greatest sources of waste in media is ads directed at people who see them more often than needed, or too few times to notice. Programmatic has the power to eliminate that waste.

Unfortunately, this is more complicated than it might sound. I wanted to share some of the things I’ve learned and some practical suggestions that should help.

Long Tail, Stubby Torso?

It is a dirty digital secret that frequency tends to get concentrated. The averages you hear mask the fact that frequency distributions usually have a very long tail, with small percentages of users seeing a big share of impressions. This effect can stay hidden in standard reports which cut off at around 15 exposures, but some log analysis will unearth the truth: I have seen some cookies – likely not all human – subjected to thousands of impressions a month.

With all those impressions soaked up, somebody has to be underexposed. The flip side of the long tail is a modal frequency sitting at around one or two ads – large swaths of your audience probably aren’t noticing your ads at all.
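
The log analysis described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a raw log of (cookie, timestamp) pairs; the function name, field layout and one-percent cut are my own, not any particular ad server's schema:

```python
from collections import Counter

def frequency_summary(impression_log):
    """impression_log: iterable of (cookie_id, timestamp) pairs from raw logs.
    Returns the summary stats that averages hide: the modal frequency and
    the share of impressions soaked up by the heaviest 1% of cookies."""
    per_cookie = Counter(cookie for cookie, _ in impression_log)
    counts = sorted(per_cookie.values(), reverse=True)
    total = sum(counts)
    top_slice = counts[: max(1, len(counts) // 100)]  # heaviest 1% of cookies
    modal = Counter(counts).most_common(1)[0][0]
    return {
        "mean_frequency": total / len(counts),
        "modal_frequency": modal,
        "top_1pct_share": sum(top_slice) / total,
    }
```

On a heavy-tailed log, the mean looks respectable while the mode sits at one or two and a sliver of cookies absorbs half the impressions, which is exactly the pattern a standard 15-exposure report cuts off.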

We know that frequency drives breakthrough, but too many impressions cause wear-out. Imagine if everyone saw your message exactly the right number of times. You’d see a big ROI jump, those overexposed users would thank you and maybe some of the bots could take the afternoon off.

OK, But Why Is This Hard?

Somewhere between “too much” and “not enough” must be a creamy middle – the much-vaunted “optimal frequency.” But how do you find it, exactly?

Let’s say people exposed to n impressions respond better than any other group. You see increasing marginal returns below n, and decreasing returns at n+1 and beyond. You might conclude that n is your optimum. In fact, it is just as likely that users with a predisposition toward your brand tend to see n impressions, because that’s how heavily they use the sites on your media plan. You really can’t separate cause from correlation. Experiments pretty much always show a positive relationship between frequency and conversion rate – more ad exposures usually equal more sales. When we repeat the same analysis with a placebo creative, the relationship remains, albeit weaker. Clearly there are effects other than ad exposure at play.

Undeterred, you might set up a control group and measure the difference between the control and exposed audiences at each level of frequency. A noble thought, but unfortunately even that won’t give you the answer. I have seen hard evidence that people who end up seeing n ads are different from people who see n+1. The difference between control and exposed lift could have as much to do with their demographic, psychographic or media consumption profiles as with the impact of your ads – you still have no clear measure of what’s optimal in terms of per-user frequency.

If you really want to understand the marginal impact of your ad impression, you need to target an audience with n impressions and compare them to a control that saw n-1 but would have seen n impressions if you hadn’t prevented it. This kind of stratified experiment is possible with programmatic or dynamic ad replacement but is fiddly to set up and requires big slices of control inventory.
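
As a sketch, the holdback logic for such a stratified test might look like this. The function name and parameters are hypothetical, and a real system would also need to log the suppressed impressions to build the “would have seen n” control:

```python
import hashlib

def treatment_for(user_id, next_frequency, target_n, holdout_rate=0.2):
    """Decide whether a user's next impression is served or withheld.
    Users about to receive impression number target_n are split by a
    deterministic hash: a fixed fraction land in control (ad withheld),
    so control users stop at n-1 but would otherwise have seen n."""
    if next_frequency != target_n:
        return "serve"
    # Deterministic bucket in [0, 100) so a user always gets the same arm
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "control" if bucket < holdout_rate * 100 else "serve"
```

The deterministic hash matters: if the same cookie flip-flopped between arms on each bid, the exposed and control groups would contaminate each other.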

If you do bravely embark on this journey, the next problem you will run into is significance. Since you are trying to measure the difference between quite similar treatments, you will need a good degree of precision, and therefore quite large samples at each frequency level. This is hard for direct-response marketers, and even tougher for brand marketers contending with survey response rates.

A brand testing three frequency levels and seeing exposed awareness rise from 10% to 15%, 16% and 17%, respectively, would need about 12,000 respondents in each frequency group to say with 90% confidence and power which treatment is best by a margin of 1% or more. More levels will need yet more respondents. These numbers are going to be a challenge for anyone.
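
A back-of-envelope version of that sample-size calculation can be run with the standard two-proportion normal approximation. The exact figure depends heavily on assumptions (one-sided vs. two-sided, which pair of proportions you compare, the chosen power), so treat this as a sketch of the method rather than a reproduction of the number above:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.10, power=0.90):
    """Per-group sample size to distinguish proportions p1 and p2,
    using the two-sided normal approximation for two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for confidence
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

Plugging in adjacent awareness levels like 15% vs. 16% shows why a 1% margin is so expensive: the required per-group sample runs well into five figures, and shrinking the detectable margin further grows the sample quadratically.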

Worst of all, no matter how rigorous your experiments, it is unlikely that they will spit out a single magic number – different audiences will have different reactions to ads, pre-existing attitudes and intent. We’ve not addressed the time interval over which frequency is measured, the interplay with cookie deletion, or cross-device measurement. I’ve ignored the importance of pacing and recency, effects even harder to understand because they carry so much variability. Basically, we’re just scratching the surface.

What You Can Do About It

You might feel a bit deflated by now. The good news is that you don’t have to fix all these problems to get performance gains in your campaigns. Do the best analysis you can, mix in some common sense, segment your users and test iteratively. Use A/B audience splits and make sure you can break out campaign performance by segment – you can still unlock a bunch of value.

You don’t need to know precisely what frequency is optimal to be able to feel your way to improvements. For example, I recently worked on an experiment spanning four different frequency management strategies against randomly selected audience segments. The best treatment delivered up to 9% better ROI overnight. It is still impossible to say exactly what frequency is best, but layering three capping windows and reinvesting at low frequencies beat a simple frequency cap. That represents millions of dollars in efficiency.

I recommend testing layered strategies. Most ads work better when they are delivered evenly rather than in bursts. A two-per-24-hour cap guarantees that no individual day gets too crazy, but still adds up to more than 60 potential exposures per month. On the other hand, you can burn through a 25-per-month cap in a few minutes if you aren’t careful. Apply both caps together, and you have a better chance of achieving steady, measured campaign delivery.
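
A minimal sketch of that layered capping logic in Python – the class and its in-memory history are illustrative, not a feature of any particular DSP:

```python
from datetime import datetime, timedelta

class LayeredFrequencyCap:
    """An impression is allowed only if it passes EVERY capping window,
    e.g. at most 2 per 24 hours AND at most 25 per 30 days."""

    def __init__(self, caps):
        self.caps = caps          # list of (max_impressions, window) pairs
        self.history = {}         # user_id -> list of served timestamps

    def allow(self, user_id, now):
        seen = self.history.setdefault(user_id, [])
        for max_imps, window in self.caps:
            recent = sum(1 for t in seen if now - t < window)
            if recent >= max_imps:
                return False      # blocked by this window; nothing recorded
        seen.append(now)          # passed every window; record the serve
        return True
```

The daily window smooths delivery within a day, while the monthly window stops the daily allowance from compounding into the long tail.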

Depending on your DSP, you might need to get quite creative. Most platforms only support one cap at each level in the campaign, and it can be hard to manage the aggregate. You should team caps with pacing settings if your chosen platform has this helpful feature. Alternatively, you can use a DMP to group users by exposure history and target each segment individually – it’s more work, but the most flexible option.

Finally, you might consider bidding a little more for impressions that aren’t your first, with the aim of building frequency to effective levels before you burn your entire budget on reach.

For The Product Managers Out There

I hope we will eventually see awesome frequency targeting algorithms that will help model and manage these effects better. In the meantime, these things would help:

  • Pacing control: I know of a couple of DSPs that support pacing control, but it’s not the norm. This feature feels important, if tricky to execute well.
  • Vector-based bidding: Frequency caps are a blunt instrument. In reality, ads don’t suddenly stop adding value after your optimum is reached; it would be better to progressively taper bids based on the expected marginal value of each extra impression.
  • Supply modeling: Some users are harder to reach and will give you fewer chances to build frequency. If you expect to have plenty more opportunities, you might bid more conservatively. This game of Texas Hold ’em requires a lot of data – you’ll need to be a large-scale advertiser and/or have access to lost bid data to predict scarcity.
  • Refresh latency: It is still hard to manage frequency really tightly. Managing ad collision through near-concurrent bids is tricky, and you can still technically exceed a frequency target “by accident.”
  • Native viewability: I talk about viewability a lot, but frequency only matters for ads that are seen. The easier it is to filter signal from noise, the better we can optimize. The link needs to be at the impression level.
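
The vector-based bidding idea could be sketched as a bid multiplier that tapers past an assumed optimum instead of cutting off hard. `optimal_n` and `decay` here are hypothetical tuning parameters you would have to fit from your own experiments:

```python
import math

def bid_multiplier(frequency_so_far, optimal_n=4, decay=0.5):
    """Scale the base bid by the assumed marginal value of the NEXT impression.
    At or below the presumed optimum, bid full value; beyond it, taper
    exponentially rather than dropping straight to zero like a hard cap."""
    excess = max(0, frequency_so_far + 1 - optimal_n)
    return math.exp(-decay * excess)
```

A hard frequency cap is the limiting case of this curve as `decay` goes to infinity; a gentler taper keeps buying impressions past the optimum whenever they are cheap enough to justify their diminished marginal value.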

This geekery has massive potential. It’s the stuff that will make programmatic media kick traditional buying’s butt, especially when it comes to branding.

Follow Essence (@essencedigital) and AdExchanger (@adexchanger) on Twitter.
