
If You Want To Measure Incrementality, Do It Right


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Sebastien Blanc, general manager, US, at Struq.

Considering the number of channels where marketers can spend their budgets, understanding and proving a return on ad spend (ROAS) is vital. If ROAS is properly understood, then marketing budgets shouldn’t be capped.

If you can drive revenue above the cost of your advertising, why would you stop? Unfortunately, CMO budgets are capped because understanding incremental revenue is nontrivial.

Incremental revenue is defined as revenue that would not have occurred without a specific campaign, everything else being equal. It is a view that is radically different and more reliable than last-click revenue.

The concept of incrementality is still maturing and different tests with the same name actually cover very different realities, ranging from accurate depiction of the truth to pure science fiction. But while testing incrementality is littered with pitfalls, such as misallocating users or premature decision-making, there are two methodologies that can help avoid them.

Different Approaches

An incremental revenue test compares the average revenue per user across two groups: users assigned to a retargeting group and users assigned to a control group.

When and how users are assigned is critical. You never want to compare people who saw an ad to those who did not. This would flatter results, since you would only show ads to the highest-value users. We need both groups to have the same blend of users, some highly engaged and others less engaged.

The first step is to split your pool of users into two groups: one to be retargeted in the normal way and a control group. There are two ways to make the split that avoid the problem of comparing “apples with pears.”

You can split new users randomly as they land on the site and only show ads to the half that makes up the retargeting group (bear in mind that you will still decide not to show ads to some users in this group). Never show ads to the other half of users, which comprises the control group.
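As a minimal sketch of this assignment step (the function name and hashing scheme are illustrative, not from the article), the split can be made deterministic by hashing the user ID, so a returning user always lands in the same group:

```python
import hashlib

def assign_group(user_id: str, retarget_share: float = 0.5) -> str:
    """Assign a user to 'retarget' or 'control' the first time they land on site.

    Hashing the user ID instead of calling random() makes the assignment
    deterministic: the same user falls into the same group on every visit,
    which keeps the two pools cleanly separated.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "retarget" if bucket < retarget_share else "control"
```

Users bucketed into the control group are then suppressed from all retargeting campaigns; users in the retargeting group are eligible, though the bidder may still choose not to serve ads to some of them.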

This methodology has the advantage of not triggering any media cost for users in the control group, making the experiment slightly cheaper. Revenue per user is computed using all the conversions happening in each group – therefore ignoring notions like last click and last impression.
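To make the “all conversions count” point concrete, here is a toy calculation; the figures are hypothetical, not from the article:

```python
def revenue_per_user(total_revenue: float, group_size: int) -> float:
    """Average revenue per user in a group, counting every conversion in the
    group -- no last-click or last-impression attribution is involved."""
    return total_revenue / group_size

# Hypothetical example: a retargeting group of 100,000 users drove $120,000;
# a control group of 50,000 users drove $50,000.
rpu_retarget = revenue_per_user(120_000.0, 100_000)  # $1.20 per user
rpu_control = revenue_per_user(50_000.0, 50_000)     # $1.00 per user
incremental_per_user = rpu_retarget - rpu_control    # about $0.20 per user
```

Multiplying the per-user difference by the size of the retargeted pool gives the total incremental revenue attributable to the campaign.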

Because this set-up takes all users into account, the methodology drives the most reliable results when budgets are close to the maximum theoretical budget. If you decide to spend only 10% of the maximum deliverable budget, results are likely to be way below the potential incremental revenue of your program.

You can also split users randomly at the point of ad serving, showing ads to users in the retargeting group and charity ads to those in the control group. This methodology drives more reliable results at lower levels of spending and is easier to track on a daily basis. Because ads are shown to users in both groups, the control group will incur additional media costs that will decrease the ROAS of your program during testing. This methodology is what most marketers go for because you also help a charity in the process.
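The serve-time variant can be sketched the same way; here the decision moves into the ad server, and the creative names are placeholders:

```python
import hashlib

def creative_to_serve(user_id: str, control_share: float = 0.1) -> str:
    """Decide at ad-serving time which creative a user receives.

    Users hashed into the control bucket always get the charity (PSA)
    creative; everyone else gets the normal retargeting creative. Because
    the split happens at serve time, both groups generate media cost.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "charity_creative" if bucket < control_share else "retargeting_creative"
```

The upside of deciding at serve time is that every user in the control group was, by construction, a user the bidder actually would have served, which is what makes the comparison reliable even at low spend levels.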

Neither methodology is perfect, so marketers need to choose based on specific sites and goals.

The Right Set-Up For Your Goals

As with most things digital, the devil is in the details. The first critical aspect involves understanding how many conversions are needed in the control group for the results to be reliable. There are several simulators out there to help you compute the right sample size. Keep in mind that before you hit this threshold, it is impossible to rely on any result because small samples often produce dramatic results, either positive or negative.
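One standard way such simulators estimate the required sample is the two-proportion normal approximation; the sketch below is a generic version of that formula, not any specific vendor's tool, and the example inputs are hypothetical:

```python
import math

def users_needed_per_group(base_rate: float, relative_lift: float) -> int:
    """Users needed in each group to detect a given relative lift in
    conversion rate, using the standard two-proportion normal approximation
    at a two-sided 5% significance level and 80% power (z = 1.96 and 0.8416).
    """
    z = 1.96 + 0.8416
    p1 = base_rate                        # control conversion rate
    p2 = base_rate * (1 + relative_lift)  # retarget conversion rate if lift holds
    n = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

# E.g., detecting a 10% relative lift on a 2% base conversion rate needs
# roughly 80,000 users per group -- about 1,600 control conversions.
```

Note how quickly the requirement shrinks as the expected lift grows: a campaign expected to double revenue needs far fewer conversions to prove itself than one expected to add a few percent.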

Beside statistical significance, it is also important to include at least one full decision cycle in your experiment, preferably two. If you know that customers usually take seven days to buy one of your products, then ideally the incrementality test must last at least 14 days to include two full cycles.

Most marketers want to go for a 50/50 split of users. Even though it might sound more rigorous, it does not make results more reliable or easier to interpret; it merely limits the revenue-generating power of your campaign. On a website receiving more than 1 million visitors per month, you can reach statistical significance in a few weeks with a 90/10 split, maximizing revenue at the same time as you measure incremental revenue.
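A back-of-the-envelope check of that claim (the traffic, split, conversion rate and conversion threshold below are all hypothetical numbers):

```python
def weeks_until_control_conversions(monthly_visitors: float,
                                    control_share: float,
                                    conversion_rate: float,
                                    conversions_needed: int) -> float:
    """Weeks until the control group accumulates the target conversion count,
    assuming steady traffic and roughly 4.35 weeks per month."""
    weekly_control_conversions = (monthly_visitors / 4.35) * control_share * conversion_rate
    return conversions_needed / weekly_control_conversions

# Hypothetical: 1M monthly visitors, a 10% control group, a 2% conversion
# rate and 2,000 control conversions required -> a bit over four weeks.
weeks_90_10 = weeks_until_control_conversions(1_000_000, 0.10, 0.02, 2_000)
```

The trade-off is explicit here: a 50/50 split would hit the threshold roughly five times faster, but at the cost of withholding ads from half your audience for the duration.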

Finally, it is important to make sure the control group is not contaminated, meaning that no user in the control group should ever see a retargeting ad. You can guarantee that by only populating the control group with brand new users.

Being able to measure incremental revenue in an accurate way is the key to maximizing your growth as a retailer. Since each situation is unique, make sure you study your goals thoroughly and agree on the best possible methodology with your vendor before starting any test.

Follow Struq (@struq) and AdExchanger (@adexchanger) on Twitter.
