
Data For Data’s Sake


“Brand Aware” explores the data-driven digital ad ecosystem from the marketer’s point of view.

Today’s column is written by Belinda J. Smith, global director of media activation at Electronic Arts. Belinda will present “EA’s Programmatic Arts” at AdExchanger’s upcoming PROGRAMMATIC I/O New York conference on October 25-26.   

Having been in the biddable/programmatic space for more than 10 years now, I have spent a lot of my time focusing on that infamous “o” word: optimization.

I have had more than one job where my entire purpose was to test every variable and combination thinkable to find the most efficient way to get ads in front of an audience and maximize ROI. The point was to collect as much data about every single thing under the sun to help marketers succeed.

When I think about this approach it reminds me of the “Rime of the Ancient Mariner”: “Water, water, everywhere / Nor any drop to drink.” That is to say, as an industry we are so focused on optimizing, testing and getting more and more data that we are now drowning in data without a sense of what we’re really learning from it.

Programmatic was built on the promise of being one of the few ways to help marketers truly understand “which 50%” of their media is waste. And while it has enabled us to test and learn in a way we never thought imaginable, we’re not devoting nearly enough time or energy to thinking about what to do with all of this new data to improve our business outcomes.

With the emergence and growing accessibility of, and interest in, machine learning, augmented or artificial intelligence and the ubiquitous “algorithm,” marketers have become absolutely obsessed with optimizing, testing and reaching unimaginable efficiencies with ever more sophisticated and nuanced media-buying strategies.

Consider that there are multimillion-dollar firms that do nothing but show ads to people for products they’ve already looked at on a website. They are able to profitably host yacht parties at Cannes and rent halls at Dmexco by doing nothing but testing the right time and way to remind consumers that they’ve already looked at something!

Even more fascinating for me to watch has been the growing popularity of the nebulous concept of incrementality. Incrementality testing involves paying to withhold your ads from part of your target audience so that you can (hopefully) determine whether you’re showing ads to people who would have converted anyway or whether showing your ad was a factor in their decision to convert.

And while this approach is sophisticated, complicated, costly and intensive, the data you get back is also perplexing. Someone recently told me they ran an “incrementality” test (which was actually the definition of an A/B holdout test, but that’s a topic for another day) that showed 20% lift. They were thrilled because the other tests they’d run had shown only a 3% to 8% lift. And as a marketer, my first thought was, “Did you just tell me you’re excited about a test which suggests 80% of your budget is waste?”
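For readers who haven’t run one, the arithmetic behind an A/B holdout test is simple. Here is a minimal sketch in Python; the conversion counts are hypothetical, chosen only to reproduce a 20% lift like the one described above:

```python
def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Relative lift of the exposed group over the holdout (unexposed) group."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return (exposed_rate - holdout_rate) / holdout_rate

# Hypothetical numbers: 1.2% of exposed users convert vs. 1.0% of the holdout.
lift = incremental_lift(1200, 100_000, 1000, 100_000)

# Share of the exposed group's conversions the ads actually caused.
incremental_share = lift / (1 + lift)

print(f"lift: {lift:.0%}")                          # 20%
print(f"driven by ads: {incremental_share:.0%}")    # ~17% — the rest converted anyway
```

Note what the second number shows: with a 20% lift, only about one in six conversions in the exposed group is attributable to the ads; the other five in six would have happened regardless, which is the point of the “80% of your budget” reaction above.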

When I press people to ask what they’ve changed as a result of this type of testing, the most common answer I get is that the data goes into a black-box attribution model so that they can further optimize their spend. What?!


Now I can already predict some comment along the lines of, “My company tested DCO/retargeting/incrementality/etc. and saw a 30% lift in effectiveness.” And to that I say, that’s great! So, does that mean you were able to sell 30% more of your goods or services with the same budget by applying those learnings? The answer I usually get back to that question is “Well, no.” Then what does the 30% increase in “effectiveness” mean? What action are you supposed to take with that information? Isn’t the point of understanding waste to be able to stop wasting?

I’m not saying we shouldn’t test or optimize or gather as much data as possible. What I’m saying is this: We need to get our houses in order. We should not be relying on tests we don’t fully understand to give us directional data we don’t feel confident in as proof we are doing a good job.

If you are running a test and would not significantly alter your marketing strategy or budget based on the outcome, why are you wasting your money on that test? If you are running a test that you can’t explain to a colleague or letting a proprietary algorithm blindly optimize your campaigns to be more efficient without understanding how that’s being done, how will you interpret the data and results and know it’s being done correctly? If you ran a test that told you your marketing is effective but your business did not see a positive increase in sales, was that data really helpful to you?

For something to be valuable it should be understandable, instructional and scalable. Similar to managing product development cycles, I have found it incredibly helpful to spend time creating a testing road map for media. That plan includes what I am seeking to learn, how I propose to test my hypotheses and what implication the resulting data will pose to my current operations and business overall.

Mapping this out ahead of time helps to better assign time and resources to prioritize the big things I want to learn. It is also a useful tool to allow others outside of my discipline to offer feedback on the objectives, priority and approach of such initiatives while tangibly demonstrating the overall value of the media program and the data coming back from it.

At this year’s ANA Media Conference in Orlando, CEO Bob Liodice reminded us that no matter how much growth eMarketer is showing in digital media, and no matter how much more effective Nielsen and comScore are claiming we are as an industry, total US sales declined 7.3% to $14.5 trillion in 2016. That was the second straight year of sales declines, even as we talk about how much better we’re getting at digital. That’s the data that I’m most interested in.

Follow Belinda J. Smith (@BJStech), Electronic Arts (@EA) and AdExchanger (@adexchanger) on Twitter.
