“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Matt Spiegel, EVP of marketing solutions and head of the media vertical at TransUnion.
You would be hard pressed to find someone in this industry who doesn’t think marketing has a jargon problem. Our industry has a tendency to describe things – whether that’s processes, platforms or assets – with labels that are neither descriptive nor useful when put into real-world business application and strategy. At best, this becomes fodder we laugh about at industry conferences and networking events (remember those?). At worst, it morphs into narratives that become divorced from actual business outcomes.
I’m currently most annoyed by how we label and assign value to data: always calling data I own or originate first-party, and data from other companies third-party.
This labeling creates more questions than answers, because the interpretation depends on the use case – take a marketer using an audience from a publisher’s cookie pool as an example. To the publisher, this is first-party data, but a marketer might consider it second- or third-party data. As use cases become more sophisticated, that variability makes the dated connotations attached to these terms increasingly problematic.
The real issue is that these labels carry an increasingly false implication about quality and performance, one in which good data (first-party) is pitted against bad data (third-party).
But data markets have improved since the days when a third-party attribute, like gender, might have been modeled or guessed at without a clear understanding of its sourcing methodology. The way data is sourced, validated and used is evolving with the rise of consent and compliance requirements, the end of third-party cookies and the continued evolution toward people-based marketing.
Almost no company – save perhaps a few of the largest walled gardens – has all the consumer data it needs. It is also true that there is only one path to accomplishing both privacy compliance and personalization objectives at scale: maximize the value of your owned data (OK, first-party data) where you have it, link it with data from high-quality partners and providers (yes, third parties), and turn to trusted third parties where you don’t.
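To make that concrete, here is a minimal sketch of what “linking owned data with partner data” can look like in practice. Everything in it is hypothetical – the schemas, the column names and the hashed-email join key stand in for whatever identity spine a real partnership would use, and real identity resolution is far more involved than a single merge.

```python
# A minimal, hypothetical sketch of enriching first-party records with a
# consented third-party attribute. Schemas and the join key are illustrative.
import hashlib

import pandas as pd

def hash_email(email: str) -> str:
    """Normalize and hash an email so records can be joined without sharing raw PII."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# First-party data: what the marketer collected directly (hypothetical schema).
first_party = pd.DataFrame({
    "email": ["ann@example.com", "bob@example.com"],
    "lifetime_purchases": [12, 3],
})
first_party["hashed_email"] = first_party["email"].map(hash_email)

# Third-party data: consented attributes from a trusted provider (hypothetical schema).
third_party = pd.DataFrame({
    "hashed_email": [hash_email("ann@example.com")],
    "auto_owner_segment": ["toyota_loyalist"],
})

# Enrich owned records with partner attributes; unmatched rows simply stay unenriched.
enriched = first_party.merge(third_party, on="hashed_email", how="left")
print(enriched[["email", "lifetime_purchases", "auto_owner_segment"]])
```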
What’s bad is bad data
It’s incumbent on all of us to understand how data is created and used, and to stop perpetuating the idea that third-party data is dead or dying. Bad data is bad data and good data is good data, regardless of the party creating or originating it.
What often matters more is the strength of signal – web behavior, offline transactions, etc. – in the context of how you’re using it. A web user searching for a Toyota dealership near them may be a good signal for a bottom-funnel campaign. A person who’s only owned Toyota vehicles throughout their life may be a better, long-term signal to use in a predictive model because, as it turns out, data scientists are really smart and know what they’re doing with those insights.
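For readers who want to see the one-off-versus-durable distinction in code, here is a toy sketch. The data is synthetic and the two features are hypothetical; conversion is constructed so that long-term loyalty drives it more than a single dealership search, so the fitted weights simply recover that assumption. It illustrates the idea, it isn’t evidence for it.

```python
# A toy illustration of "signal strength" in a predictive model, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
searched_dealer = rng.integers(0, 2, n)   # one-off, bottom-funnel signal (hypothetical)
loyal_owner = rng.integers(0, 2, n)       # durable ownership-history signal (hypothetical)

# Synthetic ground truth: loyalty drives conversion more than a single search.
p_convert = 0.05 + 0.10 * searched_dealer + 0.30 * loyal_owner
converted = rng.random(n) < p_convert

model = LogisticRegression().fit(
    np.column_stack([searched_dealer, loyal_owner]), converted
)
# The durable signal should carry the larger fitted weight, by construction.
print(dict(zip(["searched_dealer", "loyal_owner"], model.coef_[0])))
```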
Ultimately, good data consists of the signals marketers can accurately use to expand and enhance their own data, creating sustainable scale against specific campaign objectives.
Data sourcing isn’t new
Data partnerships help fill gaps that exist over time – and having those gaps filled helps build new products, enables better customer trend analysis and more. That’s why data partnerships have been part of the marketing model for decades, going back to supporting catalog and direct mail needs.
The explosion of devices, content and technologies has digitized consumer experiences and, in turn, created new data streams and attributes, allowing for a more robust view of consumers. While the transition from offline to digital came with some obstacles, we should credit the market for what we’ve learned.
We’ve improved data validation techniques using known, reliable data. We’re working toward more standardized consent frameworks. And we’ve created a deeper understanding of both individuals and households. Any fight to undo this progress and stop using third-party data is unhelpful and unrealistic. The focus should instead turn to continued improvement in transparency, data sourcing and the validation, standardization and testing of proven data sets.
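As one hedged sketch of what “validation against known, reliable data” can look like, the snippet below scores a provider’s modeled gender attribute against a small verified truth set and checks the match rate against an accuracy bar. The schema, the values and the threshold are all hypothetical stand-ins for what a real validation pipeline would use.

```python
# Hypothetical sketch: score a third-party attribute against a verified truth set.
import pandas as pd

truth = pd.DataFrame({
    "hashed_email": ["a1", "b2", "c3", "d4"],
    "gender": ["F", "M", "F", "M"],     # verified, declared values
})
provider = pd.DataFrame({
    "hashed_email": ["a1", "b2", "c3", "d4"],
    "gender": ["F", "M", "M", "M"],     # provider's modeled values
})

scored = truth.merge(provider, on="hashed_email", suffixes=("_truth", "_provider"))
match_rate = (scored["gender_truth"] == scored["gender_provider"]).mean()
print(f"Provider match rate on truth set: {match_rate:.0%}")

# A buyer might only onboard attributes that clear an agreed accuracy bar.
ACCURACY_BAR = 0.90   # hypothetical threshold
print("Passes bar" if match_rate >= ACCURACY_BAR else "Needs review")
```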
Companies that can take advantage of proven data will benefit from flexibility to meet unique and specific needs. That’s why data marketplaces – and here I’m talking about places to get access to high-quality signals at the time they’re needed – make sense.
The future brings more applications
Over time, third-party data has been dismissed as “bad” data simply because you don’t collect it yourself. The opacity of both the term third-party and the practices that arose at the height of programmatic buying perpetuated this thinking. But today’s data applications go beyond targeting simple demographics to predictive modeling, identity resolution, measurement and more.
High-quality identity signals sourced with transparency – with the proper consent and permissions, for example – could be part of the solution to the giant question mark that is the deprecation of cookies, shifts in mobile identifiers and growing privacy legislation and expectations. It’s unlikely the existential threat of privacy regulation will cause the entire digital marketplace to collapse. Instead, the entities equipped with the right tools, experience and expertise will be the first to prevail, by proactively adapting to new expectations.
The next step is to improve education in the market at all levels, including consumers, legislators and the marketing industry. The more we all understand what makes good data signals and good data sources, the more success we’ll have filling necessary gaps in privacy-compliant ways that create better experiences for consumers and advertisers alike.
Follow Matt Spiegel (@mspiegel) and AdExchanger (@adexchanger) on Twitter.