Frequency Management: Let’s Do Better Than Average

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Steve Latham, global head of analytics at Flashtalking.

Frequency ranks among the most important factors in determining advertising effectiveness. Regardless of the quality of the placement, creative or context, too much exposure – or not enough – can lead to disappointing outcomes.

Years of intensive research indicate that the optimal frequency during the customer journey typically ranges between five and 15 impressions, depending on the brand, offer and purchase cycle.

Unfortunately, most frequency distribution curves are heavily polarized at the extreme highs and lows. Even if a campaign achieves an average frequency within the optimal range, it’s highly likely that a small percentage of users account for a disproportionately high share of impressions, while most users receive too few ads to make a difference.

It’s time to revisit the average frequency metric. When relying on “average” frequency – calculated as total impressions divided by the number of exposed users – the advertiser assumes the average is representative of the larger group. That assumption rests on two others: that each publisher has unique inventory, and that each publisher is actively seeking the optimal frequency for its unique users.

Unfortunately, this is almost always wishful thinking.

Assumption No. 1: unique audiences

Prior to programmatic buying and audience targeting, it was reasonable to think that media buys on ESPN.com, HGTV.com and NBC.com offered unique audiences, and that each publisher would seek to deliver the desired frequency across all users.

Today that paradigm no longer exists as programmatic media vendors, including demand-side platforms, aggregators and publishers, inadvertently buy the same audiences through multiple sources. Overlap and redundant targeting are constant issues.

For example, consider an advertiser that buys from one DSP and two programmatic aggregators. Each buy may be segmented into unique strategies or tactics based on audience demographics, behavior or other criteria. If each media vendor optimizes for average frequency, the advertiser still risks overserving ads to its target audience because each vendor is targeting the same small group of individuals. This is a costly scenario for advertisers, not only in terms of wasted impressions but also for the potential to annoy that small and important set of targeted users.

Assumption No. 2: frequency distribution

Compounding the overlapping audience issue is that many programmatic media vendors seek to win the “last-touch” prize by allocating a large share of impressions to a relatively small subset of users who are likely to convert, in what is known as “cookie-bombing.”

For example, an advertiser buys 10 million impressions from their programmatic vendor, who serves 3 million impressions to 3 million users at a frequency of one and another 2 million impressions to 1 million users at a frequency of two, saving the remaining 5 million impressions to retarget the 50,000 users (frequency of 100) who click or visit the advertiser’s site.

With the hope of winning the last-touch conversion, publishers may serve each retargeted user (some of which are bots) 100 impressions over several weeks – many of which are not viewable. The overall average frequency looks reasonable at roughly 2.5 impressions per user (10 million impressions served to 4,050,000 users), yet most users are underserved at one or two impressions while a small group is grossly overserved at 100.
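The arithmetic above can be sketched in a few lines. This is a minimal, hypothetical illustration using the article’s example numbers, treating the three groups as disjoint sets of users; the variable names are my own:

```python
# Each tuple is (users in group, impressions served per user).
buckets = [
    (3_000_000, 1),   # lightly exposed: one impression each
    (1_000_000, 2),   # lightly exposed: two impressions each
    (50_000, 100),    # heavily retargeted ("cookie-bombed")
]

impressions = sum(u * f for u, f in buckets)   # 10,000,000 total
users = sum(u for u, _ in buckets)             # 4,050,000 unique users
avg = impressions / users                      # the "average frequency"

# Share of the budget consumed by the retargeted sliver of users.
heavy_share = (50_000 * 100) / impressions

print(f"average frequency: {avg:.1f}")                 # average frequency: 2.5
print(f"impressions on retargeted users: {heavy_share:.0%}")  # 50%
```

The point the average hides: half the impressions go to about 1.2% of users, while 99% of users sit at a frequency of one or two – well below the five-to-15 range cited earlier.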

While this practice is not nearly as prevalent as it was a few years ago, it still takes place more than it should.

The cumulative effect of overlapping audiences and cookie bombing results in excessively high frequency for a subset of exposed users. This harms advertisers in the form of wasted spend and lost opportunities to find new customers – not to mention the risk of annoying retargeted users.

Follow Flashtalking (@flashtalking) and AdExchanger (@adexchanger) on Twitter.
