Leave No Stone Unturned When Searching For The Right Audience Data

"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jennifer Pelino, vice president of omnichannel media at 84.51°.

As marketing and technology converge, marketers must practice due diligence and push for greater quality control in the audience data they purchase and the metrics they use to manage future decisions and strategy.

No data set is perfect, and a deeper dive is usually necessary to identify the most accurate data set for a marketer's needs. With so much data to sift through, understanding the methodology and the factors used to measure it is essential to making the most of data to increase the bottom line.

I believe marketers need to act like investigative reporters by gathering facts, asking questions and following leads that will ultimately reveal the truth. The truth doesn’t have to be perfect – it just needs to be clear.

What are the right questions marketers should ask to uncover the truth in the audiences they are creating for targeting? A few guiding principles are key to obtaining data-quality disclosure.

For example, marketers should begin by understanding the source and collection technique of the audiences. Is the data passively or actively collected from consumers? Is it survey, demographic, purchase or contextual data? The answers to these questions will help determine if they are working with the right vendor to meet their brand objective.

Marketers should also understand the recency, frequency and consistency with which the data is collected and refreshed. Recency can be the difference between reaching someone at the precise point in their purchase cycle and missing the window of opportunity. How often data is refreshed is rarely communicated, and stale data is often why consumers keep receiving retargeting messages for a product category they have just purchased in.
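
To make the recency check concrete, here is a minimal sketch of suppressing recent category buyers from a retargeting audience. The record layout, field names and 30-day window are illustrative assumptions, not any vendor's actual schema.

```python
from datetime import datetime, timedelta

# Hypothetical audience records; in practice these come from a data vendor.
audience = [
    {"household_id": "hh_001", "category": "coffee", "last_purchase": datetime(2024, 5, 2)},
    {"household_id": "hh_002", "category": "coffee", "last_purchase": datetime(2024, 3, 11)},
]

RECENCY_WINDOW = timedelta(days=30)  # assumed refresh cadence for this example
now = datetime(2024, 5, 20)

# Suppress households that already bought in the category recently --
# exactly the people who would otherwise see redundant retargeting.
eligible = [
    rec for rec in audience
    if now - rec["last_purchase"] > RECENCY_WINDOW
]
print([rec["household_id"] for rec in eligible])  # -> ['hh_002']
```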

If propensity or lookalike modeling is being used, marketers need to know the underlying techniques and validation methods. When first-party validated buyers are unavailable, models can drive improved personalization, predictive scoring, triggered campaigns and media sensitivity. These machine learning applications are powerful, but marketers still need to determine the provenance of the original data that the back-end models churn through.
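
To make the validation question concrete, here is a minimal sketch of one common approach: train a propensity model on known buyers, then check it against a holdout the model never saw. The scikit-learn pipeline and synthetic features are assumptions for illustration, not a description of any particular vendor's modeling stack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for third-party audience features (demo, contextual,
# survey signals) and a first-party "validated buyer" flag.
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 1).astype(int)

# Hold out validated buyers the model never trains on: this holdout
# check is the validation step marketers should ask vendors about.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # propensity scores used to rank lookalikes
print(f"Holdout ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```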

Similarly, once the most accurate data for driving business objectives has been determined, marketers should apply an equally thorough process to defining the metrics used to measure the campaign.

Marketers should define which types of media sources and devices will host their campaigns. If multiple sources and devices are involved, they will need to put rigor around how the data is integrated and matched across them to obtain an accurate read on the targeted consumer groups.
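
As a simplified sketch of what that rigor can look like, the snippet below joins two hypothetical sources on a shared hashed identifier and reports the match rate. The identifiers, source names and pandas layout are assumptions for illustration.

```python
import pandas as pd

# Hypothetical impression logs from two media sources, keyed on a
# shared hashed identifier (e.g., a hashed email used for matching).
desktop = pd.DataFrame({"hashed_id": ["a1", "b2", "c3", "d4"], "desktop_impr": [3, 1, 5, 2]})
ctv = pd.DataFrame({"hashed_id": ["b2", "c3", "e5"], "ctv_impr": [2, 4, 1]})

# An inner join keeps only consumers seen on both sources; the match
# rate shows how much of the audience can actually be deduplicated.
matched = desktop.merge(ctv, on="hashed_id", how="inner")
match_rate = len(matched) / len(desktop)
print(f"Matched {len(matched)} of {len(desktop)} desktop IDs ({match_rate:.0%})")
```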

A test-and-control method is fairly standard practice, but the key is to define how tightly the control group is matched to the test group.
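
One simple way to tighten that match is to pair each test household with the control household closest on a pre-period covariate, as in the sketch below. The single-covariate nearest-neighbor approach and the spend figures are illustrative assumptions; real designs typically match on many covariates.

```python
# Match each exposed (test) household to the unexposed (control)
# household with the closest pre-campaign spend -- a minimal
# nearest-neighbor match on one covariate, without replacement.
test = {"hh_t1": 120.0, "hh_t2": 45.0}        # pre-period spend, test group
control = {"hh_c1": 118.0, "hh_c2": 50.0, "hh_c3": 300.0}

pairs = {}
available = dict(control)
for t_id, t_spend in test.items():
    c_id = min(available, key=lambda c: abs(available[c] - t_spend))
    pairs[t_id] = c_id
    del available[c_id]  # each control household is used only once

print(pairs)  # -> {'hh_t1': 'hh_c1', 'hh_t2': 'hh_c2'}
```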

Finally, marketers should delineate which metrics are the best for their campaign. Bottom-line metrics are the strongest. Tying everything back to sales is critical at the end of the day, so marketers must ensure that incremental sales lift, increased household penetration and trial-and-repeat can be measured.
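
As a back-of-envelope illustration of those bottom-line reads, the sketch below computes incremental sales lift and household penetration for matched test and control groups. All figures are made up for the example.

```python
# Illustrative post-campaign reads for matched test and control groups.
test_sales, control_sales = 15000.0, 12500.0   # total category sales ($)
test_buyers, control_buyers = 420, 350         # households that bought
test_hh, control_hh = 1000, 1000               # matched group sizes

# Incremental sales lift: how much more the exposed group spent
# relative to its matched control.
sales_lift = (test_sales - control_sales) / control_sales

# Household penetration: share of households that bought at all.
test_pen = test_buyers / test_hh
control_pen = control_buyers / control_hh

print(f"Incremental sales lift: {sales_lift:.1%}")          # -> 20.0%
print(f"Penetration: {test_pen:.1%} vs {control_pen:.1%}")  # -> 42.0% vs 35.0%
```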

While this may be an exhausting process, transparency will begin to emerge. Marketers will ultimately think more freely about what they are doing to achieve their ultimate goal: a better experience for their customers, which is reflected in the bottom line.

Follow Jennifer Pelino (@JenniferJPelino), 84.51° (@8451group) and AdExchanger (@adexchanger) on Twitter.
