Mo’ Match Rates Mo’ Problems As Cross-Device Vendors Aim For Scale

Cross-device identity match rates have shot up in recent years, but brands and agencies remain skeptical of the results.

“We were consistently disappointed with cross-device identity matches,” said David Kohl, CEO of the digital media advisory firm Morgan Digital Ventures. “There’s a gap in understanding of what’s possible between vendors and the buy side, [which leads to] frustration (on both sides) over unclear expectations or results.”

Some point to industry consolidation as the driver behind rising match rates. But as many cross-device vendors increasingly deploy probabilistic matching to manufacture scale, they layer in aggregated data and sacrifice verifiable identities.

“This approach can present challenges for advertisers who may want to run verification, as there is no reliable way to measure accuracy of the match,” said Keith Johnson, EVP of strategic data solutions at Wunderman.

An agency can use a client’s CRM data or other owned personally identifiable information (PII) to verify identities if a person is served an ad to his or her phone or desktop and then visits the client’s site and logs in, makes a purchase or provides PII in some way. It’s a messy, laborious process.
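
As a rough sketch of what that verification involves, the snippet below joins a vendor's device graph against first-party login events and flags graph IDs whose devices trace back to more than one known customer. The file names, column names and pandas-based approach are illustrative assumptions, not any agency's or vendor's actual workflow.

```python
# Hypothetical sketch: checking a cross-device graph against first-party logins.
# File and column names are assumptions for illustration only.
import pandas as pd

# Vendor output: one row per device, tagged with the graph's person ID.
graph = pd.read_csv("vendor_graph.csv")    # columns: device_id, graph_id
# First-party truth: devices seen logging in to known customer accounts.
logins = pd.read_csv("crm_logins.csv")     # columns: device_id, customer_id

# Keep only the devices that can be ground-truthed against the CRM.
merged = graph.merge(logins, on="device_id", how="inner")

# A graph ID is consistent if every CRM-known device it groups together
# maps back to a single customer, and contradicted otherwise.
customers_per_graph_id = merged.groupby("graph_id")["customer_id"].nunique()
consistent = int((customers_per_graph_id == 1).sum())
contradicted = int((customers_per_graph_id > 1).sum())

print(f"Graph IDs checked: {len(customers_per_graph_id)}")
print(f"Consistent with CRM logins: {consistent}")
print(f"Contradicted by CRM logins: {contradicted}")
```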

“Although many clients like the idea that they can run these tests, in reality it very rarely happens today,” Johnson said.

There is no industry norm or consensus on what constitutes a “good” match rate, because rates legitimately vary by campaign type, brand category and the first-party data used.

Say a brand wants its attribution provider to connect people who were exposed to TV and digital ads with retail foot traffic via a location analytics vendor. Stringent attribution requirements would call for a highly deterministic data set and a match rate greater than 90%, said Auren Hoffman, the former CEO of LiveRamp who now helms a startup called SafeGraph.

An automotive marketer more concerned with long-term exposure frequency and greater reach, however, may be happy with a more probabilistic set matching 30% to 40%.

This is why straightforward questions like “What are your match rates?” can be counterproductive.

Marketers are frustrated, but it’s a frustration born of optimism, said Kate Clough, a media-planning VP at MRM//McCann.

“It used to be that (cross-device) plans were breaking down in the execution. Now we see the opportunity to provide those connected experiences, but it can be pretty daunting because buyers and sellers speak very different languages,” she said.

Morgan Digital Ventures and MRM//McCann are pilot partners in an initiative launched recently by the DMA to standardize cross-device RFP templates and establish the basics with a glossary of industry jargon. Terminology can carry different meanings for marketers and cross-device vendors, which leads to confusion over results.

For example, marketers may use the words “accuracy” and “precision” interchangeably, but those are distinct terms in the cross-device space. “Accuracy” is the percentage of correctly identified matches plus correctly identified non-matches out of all device pairs evaluated – so a campaign with a low match rate could actually have high accuracy. “Precision” is the percentage of the matches a vendor identifies that are in fact true matches.

“Recall” is a media-buying metric used to judge an ad’s impact on consumers, but to a cross-device vendor, recall is the percentage of all known true matches that the vendor correctly identifies.
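
To make the distinction concrete, here is a minimal sketch, using invented confusion counts, of how accuracy, precision and recall come out of the same set of match decisions; the numbers are purely illustrative.

```python
# Toy confusion counts for a hypothetical device-pair evaluation (invented numbers).
# tp: pairs the vendor matched that truly belong to the same person
# fp: pairs the vendor matched incorrectly
# fn: true pairs the vendor missed
# tn: non-matching pairs the vendor correctly left unmatched
tp, fp, fn, tn = 300, 50, 700, 9_000

accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct matches plus correct non-matches
precision = tp / (tp + fp)                   # share of claimed matches that are real
recall = tp / (tp + fn)                      # share of all true matches that were found

print(f"accuracy:  {accuracy:.1%}")   # ~92.5% -- high because non-matches dominate
print(f"precision: {precision:.1%}")  # ~85.7%
print(f"recall:    {recall:.1%}")     # 30.0% -- only a third of true links were found
```

Because correctly ignored non-matches dominate the denominator, accuracy stays high even when the graph finds only a fraction of the true links, which is why low match rates and high accuracy can coexist.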

“Agencies typically have a clear idea of how they’d like match rates to be calculated, but if there’s any ambiguity in their guidance to the supplier the results can be inflated,” Clough said. “Are you talking about matched devices or unique individuals? In some cases it seemed accidental, but it left people vulnerable to potential manipulation.”
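
As a small worked example of that ambiguity, consider a hypothetical onboarding result in which some people have only part of their devices matched; quoting the figure per device or per person yields noticeably different rates. The data below is invented purely to illustrate the gap.

```python
# Hypothetical onboarding result: (devices matched, devices owned) per person.
# Invented data, only to show how device-level and person-level rates diverge.
results = {
    "person_a": (3, 3),
    "person_b": (1, 4),
    "person_c": (0, 2),
    "person_d": (2, 3),
}

matched_devices = sum(m for m, _ in results.values())         # 6
total_devices = sum(d for _, d in results.values())           # 12
matched_people = sum(1 for m, _ in results.values() if m > 0) # 3

print(f"Device-level match rate: {matched_devices / total_devices:.0%}")  # 50%
print(f"Person-level match rate: {matched_people / len(results):.0%}")    # 75%
```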

Michael Schoen, VP of marketing services at Neustar, was less charitable about some linkage claims: “As with everything in ad tech, where there’s a financial incentive, there’s fraud.”

It’s now common for a third-party measurement firm like comScore to verify cross-device graph results when they’re applied in advertising, said Yael Avidan, VP of product at the mobile DSP Adelphic.

Even with verification, some marketers still aren’t convinced the results are accurate.

“If I see match results I don’t like – they seem too high maybe or I’m just not confident what I’m seeing is correct – comScore verification isn’t going to change that,” said one major toy brand marketer who said she couldn’t comment publicly due to NDAs with multiple data-matching vendors.

Much of the overall confusion over match rates is due to a lack of vendor transparency, said LiveRamp Chief Product Officer Anneka Gupta.

“There are lots of people out there saying they do deterministic matching, but you have to be careful about what’s being layered into the data,” Gupta said.

Many cross-device vendors and data aggregators regularly pay publishers to help them connect a customer’s data to web traffic or email sign-ups. The inconsistencies plaguing publisher data monetization – bot farms juicing numbers with fake emails or actual people using throwaway addresses on non-billing accounts – can be passed on to device graphs.

To shake buy-side concerns about finding quality data at scale, some firms are “moving toward a shared, authoritative match set,” said Jay Stocki, SVP of data and product at Experian.

Stocki alluded to Experian’s partnership with Neustar. The two longstanding titans in cross-channel identification began selling a shared profile-matching product earlier this year.

Acxiom, another legacy giant, pursued a similar goal when it purchased LiveRamp in 2014. The European telco Telenor and Oracle each bought a cross-device ID firm this year (Tapad and Crosswise, respectively).

Executives from Acxiom/LiveRamp, Oracle and Experian/Neustar each attributed the steep growth in match rates over the past two years to industry consolidation and cooperation.

Clough anticipates more consolidation as the wider ecosystem seeks enough shared scale to mollify marketers and offset the advantages enjoyed by Google, Facebook and Amazon, all of which have their own proprietary cross-device data.

“[Cross-device vendors] are willing to work with a variety of partners,” Clough said, “because that’s actually the only way for them to differentiate.”

6 Comments

  1. Thanks for helping to bring this discussion to the forefront, James. As you point out, match rates, recall, coverage, precision, etc. each have very specific meanings that often lead to confusion, or worse, misguided comparisons and decisions. Regardless, it’s important to keep having these conversations and talking about these issues.

    Given Drawbridge’s stake in the space, we pay very close attention to these conversations. Something we’ve learned over the years from speaking with clients and technology partners is that there is no such thing as a one-size-fits-all graph. As a direct result of this, we’ve developed a transparent, self-serve dashboard that gives marketers options and control over the graph algorithm employed for their specific use case, be it audience extension, retargeting, attribution, or any other application. Marketers now have complete visibility into how their choices impact the precision, recall, and coverage of the custom graph mapped to their audience, down to the level of the device identifier.

    Until we reach a universal currency for cross-device identity, there are going to be growing pains. In the meantime, it’s up to the graph providers to provide tools that educate and inform the market.

  2. I'm not sure a universal currency is a realistic goal ... and my instinct is that it shouldn't be. Rahul is spot-on that this is a space where there will be growing pains, and I think we'll be experiencing them for quite some time. What is necessary is a new level of honesty and transparency. DMA's initiative, which I applaud, is designed to bring clarity to the buyer-seller relationship, and to improve the odds that when an organization needs cross-device identity services, the solution chosen delivers what was promised. Too often, buyers aren't getting what they anticipated ... and often it is because of the language gap between the buyer and seller. Another common symptom, so I understand, is that buyers don't always tell sellers all they need to know. DMA's RFI template addresses these issues. I encourage you to take a look (thedma.org/xdid-rfi) and give the DMA your comments. With the right level of input from a sufficiently diverse set of marketers, agencies, publishers and cross-device solution providers, we'll see an acceleration of the maturity that closes this buy-sell gap and enables all boats to rise across the supply chain.

  3. I for one am very glad to see this conversation come to light, thanks to this well-researched article by James Hercher. It echoes what we hear from brands every day: they are increasingly frustrated that the rates at which their customers are matched to devices fall far short of the scale they need.

    The key, though, is that buyers have to become better informed. Is it a case of just going with the agency's recommendation or simply not taking the time to really look underneath the hood of ad platforms and onboarders to truly understand how people and devices are being linked to data? Whatever the reason, ill-informed buyers risk being disappointed with the end results either because of low match rates or inaccuracies in matching.

    And accuracy is a big deal. Accurately linking data to people across their mobile devices is crucial for accurate campaign targeting and essential to getting a true read of campaign performance. In an environment where so much attention is paid to viewability and fraud, with advertisers insisting on not paying for unseen impressions, it seems illogical that advertisers are still accepting matching accuracy of 75% (which means a quarter of matches are flat wrong) or running campaigns against "audiences" that are constructed with guesswork and not actually knowing who is seeing the ad.

    Choosing the right onboarding partner can mean the difference between a major boost in sales for brands and a mega-bomb of wasted ad dollars. Thanks for starting the discussion, James. Let's make sure it continues, and that advertisers become smarter and better informed by asking the tough questions and pushing beneath the sales pitch to understand what they are buying.

  4. While all the arguments discussed in the article are indeed very valid, I would like to point out that yet again there is blatant ambiguity staring back at us. Take the title for instance. What is a ‘match rate’? How is it defined? Who defines it? When did ‘match rate’ become a valid cross-device metric?

    If I am asking these questions, it’s no wonder that the market feels frustrated and rather skeptical of the technology and its capabilities. It clearly indicates that the evolving cross-device market is going through a period of growing pains like many others have experienced before us. However, now that this issue has been brought to light, it is time for cross-device graph providers globally to step up and clean up their act.

    The solution is simple: educate the market. We need to speak the same language. It’s by far the easiest way to overcome miscommunication and confusion. Even more importantly, it combats unrealistic expectations over results.

    If by ‘match rate’ the article refers to accuracy, I would argue that this metric is meaningless. It only indicates the map’s correctly identified matches, plus the correctly identified non-matches, out of all device pairs evaluated. Because true non-matches vastly outnumber true matches, it will always be close to 100% regardless of how many correct matches are identified.

    The two metrics you need to know are precision and recall. Precision is the fraction of predicted matches between devices that are indeed actual/true matches, whereas recall is the fraction of total actual matches that are correctly identified. Achieving the ideal balance between precision and recall is one of the main goals of device matching.

    However, the market needs to understand why asking only for precision or recall scores can be counterproductive. It’s not a simple one-size-fits-all solution. For some use cases, precision (correctness) matters more, whereas in others, recall (market reach) is what counts.

    Why is this? Well, cross-device graph providers can optimize these metrics to help you achieve your business goals. For example, performance-focused agencies may prefer a precision-optimized graph to ensure that they are communicating with the same user. Brand advertisers, on the other hand, may want a branding campaign to focus on exposure and reach, which is why they might opt for a recall-optimized graph where precision is slightly decreased.

    By proactively educating the market, you can close this knowledge gap and better communicate what is actually possible in order to prevent dissatisfaction. Now is the pivotal moment to take action and continue moving forward with the mantra: More matches, more clarity.

    • The DMA assembled about 15 cross-device solution providers, marketers and agencies to tackle a lot of this ambiguity. I think they've done a super job, and I'm personally proud to have played my part, so far, in their efforts. Take a look at thedma.org/xdid for their "XDID industry standard RFI" tool. Worth a read, and the public comment period is open through Dec 31, 2016.


