
If You Want To Measure Incrementality, Do It Right


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Sebastien Blanc, general manager, US, at Struq.

Considering the number of channels where marketers can spend their budgets, understanding and proving a return on ad spend (ROAS) is vital. If ROAS is properly understood, then marketing budgets shouldn’t be capped.

If you can drive revenue above the cost of your advertising, why would you stop? Unfortunately, CMO budgets are capped because understanding incremental revenue is nontrivial.

Incremental revenue is defined as revenue that would not have occurred without a specific campaign, all else being equal. It is a view that is radically different from, and more reliable than, last-click revenue.

The concept of incrementality is still maturing, and different tests with the same name actually cover very different realities, ranging from an accurate depiction of the truth to pure science fiction. But while testing incrementality is littered with pitfalls, such as misallocating users or premature decision-making, there are two methodologies that can help avoid them.

Different Approaches

An incremental revenue test compares the average revenue per user across two groups: users assigned to a retargeting group vs. users in a control group.

When and how users are assigned is critical. You never want to compare people who saw an ad to those who did not. This would flatter results, since you would only show ads to the highest-value users. We need both groups to have the same blend of users, some highly engaged and others less engaged.

The first step is to split your pool of users into two groups: one to be retargeted in the normal way and a control group. There are two ways to split the groups that avoid the problem of comparing “apples with pears.”

You can split new users randomly as they land on the site and show ads only to the half that makes up the retargeting group (bear in mind that you will decide not to show ads to some users in this group). Never show ads to the other half of users, which comprises the control group.
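To make the mechanics concrete, here is a minimal sketch of a landing-time split, assuming a hashed first-party user ID; the function name, hash choice and 50/50 default are illustrative, not Struq's implementation:

```python
import hashlib

def assign_group(user_id: str, control_share: float = 0.5) -> str:
    """Deterministically bucket a new visitor the first time they land,
    so the assignment stays stable across visits."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform in [0, 1]
    if bucket < control_share:
        return "control"    # never serve this user a retargeting ad
    return "retarget"       # eligible for ads (though not all will see one)

print(assign_group("cookie-1234"))  # same answer on every call
```

Hashing the ID, rather than flipping a coin per request, guarantees a user never drifts between groups on later visits.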


This methodology has the advantage of not incurring any media cost for users in the control group, making the experiment slightly cheaper. Revenue per user is computed using all the conversions happening in each group – therefore ignoring notions like last click and last impression.
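As a minimal sketch of the read-out (all figures below are hypothetical), the comparison reduces to average revenue per user in each group:

```python
# Hypothetical totals after the test window; every conversion in each
# group counts, regardless of clicks or impressions.
retarget_users, retarget_revenue = 450_000, 900_000.00
control_users, control_revenue = 50_000, 80_000.00

rpu_retarget = retarget_revenue / retarget_users  # $2.00 per user
rpu_control = control_revenue / control_users     # $1.60 per user

# Incremental revenue: what retargeted users spent above the baseline
# that the control group establishes.
incremental = (rpu_retarget - rpu_control) * retarget_users
lift = rpu_retarget / rpu_control - 1
print(f"incremental revenue: ${incremental:,.0f}, lift: {lift:.0%}")
# -> incremental revenue: $180,000, lift: 25%
```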

Because this set-up takes all users into account, the methodology drives the most reliable results when budgets are close to the maximum theoretical budget. If you decide to spend only 10% of the maximum deliverable budget, results are likely to be well below the potential incremental revenue of your program.

You can also split users randomly at the point of ad serving, thereby showing ads to users in the retargeting group and charity ads to those in the control group. This methodology drives more reliable results at lower levels of spending and is easier to track on a daily basis. Because ads are served to both groups, the control group incurs additional media costs that will decrease the ROAS of your program during testing. This is the methodology most marketers go for, because you also help a charity in the process.
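A sketch of the same split moved to ad-serving time might look like this (the creative names are placeholders, not a real ad server API; the key difference is that only users the campaign would actually reach get assigned, which keeps the groups comparable):

```python
import hashlib

def choose_creative(user_id: str, control_share: float = 0.1) -> str:
    """Assign at the moment an ad would be served: control users get a
    charity (PSA) creative instead of nothing, so both groups cost media."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    if bucket < control_share:
        return "charity_psa"     # control: impression served, no sales message
    return "retargeting_ad"      # test: the normal campaign creative

print(choose_creative("cookie-1234"))
```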

Neither methodology is perfect, so marketers need to choose based on specific sites and goals.

The Right Set-Up For Your Goals

As in most things digital, the devil lies in the detail. The first critical aspect involves understanding how many conversions are needed in the control group for the results to be reliable. There are several simulators out there to help you compute the right sample size. Keep in mind that before you hit this threshold, it is impossible to rely on any result because small samples often produce dramatic results, either positive or negative.
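As one possible back-of-the-envelope check (a simplified two-proportion z-test, so real simulators may differ; the baseline conversion rate and detectable lift are assumptions you must supply), a sample-size calculation might look like:

```python
from statistics import NormalDist

def users_needed_per_group(base_cvr: float, rel_lift: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Users per group to detect a relative lift in conversion rate."""
    p1, p2 = base_cvr, base_cvr * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2) + 1

# e.g. 2% baseline conversion rate, detecting a 10% relative lift
n = users_needed_per_group(0.02, 0.10)
print(n, "users per group ->", int(n * 0.02), "control conversions")
# -> roughly 80,700 users per group, i.e. ~1,600 control conversions
```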

Besides statistical significance, it is also important to include at least one full decision cycle in your experiment, preferably two. If you know that customers usually take seven days to buy one of your products, then ideally the incrementality test should last at least 14 days to include two full cycles.

Most marketers want to go for a 50/50 split of users. Even though it might sound safer, it does not actually make results more reliable or easier to interpret; it instead limits the revenue-generating power of your campaign. On a website receiving more than 1 million visitors per month, you can reach statistical significance in a few weeks with a 90/10 split, maximizing revenue at the same time as you measure incremental revenue.
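Plugging the article's numbers into the sketch above (with an assumed 2% conversion rate, and conservatively requiring the equal-split per-group sample in the smaller group), the timing works out to a few weeks:

```python
# Assumed 2% conversion rate; ~80,700 users comes from the sample-size
# sketch above and is a conservative bound for the smaller group.
monthly_visitors = 1_000_000
control_share = 0.10
needed_control_users = 80_700

control_users_per_day = monthly_visitors / 30 * control_share  # ≈ 3,333
days = needed_control_users / control_users_per_day
print(f"≈ {days:.0f} days to statistical significance")  # ≈ 24 days
```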

Finally, it is important to make sure the control group is not contaminated, meaning that no user in the control group should ever see a retargeting ad. You can guarantee that by populating the control group only with brand-new users.

Being able to measure incremental revenue accurately is the key to maximizing your growth as a retailer. Since each situation is unique, make sure you study your goals thoroughly and agree on the best possible methodology with your vendor before starting any test.

Follow Struq (@struq) and AdExchanger (@adexchanger) on Twitter.
