Drawbridge is letting brands get their hands dirty with cross-device data.
On Monday, the cross-device company launched self-serve functionality that lets ad buyers see how their buying choices affect what sort of identity graph they get on the other side.
The goal is to give advertisers more control so “they can really see how this stuff works,” said Drawbridge CEO and founder Kamakshi Sivaramakrishnan.
“It’s been a black-box environment for a long time, where data files just get dumped on you and then it’s up to you to somehow evaluate the data and ask the right questions,” she said. “This is about providing transparency.”
The value of a device graph hinges on what an advertiser needs to accomplish. Want reach? You’ll have to sacrifice precision. Want to be super-targeted? Expect scale to go down. And then there’s the question of where the device identifiers originate and which regions the advertiser wants to target.
Drawbridge’s self-serve option lets advertisers upload their data and play with those configurations. For example, they can see how matching cookies-to-cookies vs. cookies-to-devices affects the output, or how a high-precision/low-recall configuration compares with a low-precision/high-recall one. Existing Drawbridge customers can use the graphs to execute media buys through their DSP partners.
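To make the precision/recall trade-off concrete, here is a minimal sketch, not Drawbridge’s actual system: the candidate pairs, scores and graph_stats helper are invented for illustration. It shows how tightening a single match-score threshold shrinks a device graph while raising its precision, and loosening it does the opposite.

```python
# Illustrative sketch only -- not Drawbridge's API. It shows how one
# match-score threshold trades precision against recall when pairing
# identifiers (e.g., cookies to device IDs) into a graph.

# Hypothetical candidate pairs: (cookie_id, device_id, model_score, is_true_match)
candidate_pairs = [
    ("c1", "d1", 0.95, True),
    ("c2", "d2", 0.90, True),
    ("c3", "d3", 0.80, False),
    ("c4", "d4", 0.60, True),
    ("c5", "d5", 0.40, False),
]

def graph_stats(pairs, threshold):
    """Accept a pair as a match when its score clears the threshold, then
    report how many accepted pairs are correct (precision) and how many
    of the true matches were kept (recall)."""
    accepted = [p for p in pairs if p[2] >= threshold]
    true_matches = [p for p in pairs if p[3]]
    correct = [p for p in accepted if p[3]]
    precision = len(correct) / len(accepted) if accepted else 0.0
    recall = len(correct) / len(true_matches) if true_matches else 0.0
    return len(accepted), precision, recall

# A strict threshold yields a smaller, more precise graph;
# a loose one yields more reach at the cost of precision.
for threshold in (0.85, 0.5):
    size, precision, recall = graph_stats(candidate_pairs, threshold)
    print(f"threshold={threshold}: pairs={size}, precision={precision:.2f}, recall={recall:.2f}")
```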
Sivaramakrishnan hopes the transparency helps further legitimize probabilistic methodologies.
“The question of whether solution A is better than solution B is secondary. This is an effort to show that probabilistic solutions stand a chance against the deterministic first-party assets that the walled gardens have,” she said. “Transparency into the process increases confidence in probabilistic methods more broadly.”
There’s been a growing move recently toward demystifying cross-device.
The Direct Marketing Association, for example, recently released an RFI template in collaboration with Acxiom, LiveIntent, MediaMath, LiveRamp, Oracle and others. The goal is to help advertisers and publishers ask the right questions when evaluating cross-device technologies.
“The buyers are as responsible here as the vendors,” said Jane Clarke, CEO and managing director of the Coalition for Innovative Media Measurement, during a cross-device workshop co-hosted with the DMA at Advertising Week in September. “Buyers have to demand transparency. Keep asking questions.”
Keep asking questions, but realize that there are no simple answers.
Consider Nielsen’s efforts to size up the cross-device competitive set. Tapad and Drawbridge both received flak after working with Nielsen separately to verify the accuracy of their graphs, which clocked in at 91.2% and 97.3% accurate, respectively.
The definition of accuracy in the cross-device context has a lot to do with it. Although accuracy and precision might seem like synonyms, precision is the share of predicted matches that are actually correct, while accuracy counts both the matches correctly identified and the pairs correctly identified as not being matches. In other words, you get credit for knowing when something isn’t a match.
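As a rough illustration (the numbers below are invented, not Nielsen’s or either vendor’s), a graph can post a very high accuracy figure while its precision is far lower, because accuracy also rewards all the pairs it correctly rejected.

```python
# Worked example with made-up numbers, showing why a headline "accuracy"
# figure can look much higher than precision: accuracy counts correctly
# rejected non-matches, precision only counts correct matches.

tp = 60   # pairs called a match that really are the same person
fp = 40   # pairs called a match that are not
tn = 880  # pairs correctly identified as NOT a match
fn = 20   # true matches the graph missed

precision = tp / (tp + fp)                  # 60 / 100  = 0.60
accuracy = (tp + tn) / (tp + fp + tn + fn)  # 940 / 1000 = 0.94

print(f"precision = {precision:.2f}, accuracy = {accuracy:.2f}")
```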
“It’s really hard to cut through some of the claims around accuracy and scale,” said Philipp Tsipman, VP of audience identity at MediaMath. “If you say something has a certain very high level of accuracy and then it doesn’t perform that way for you, that puts a lot of doubt in people’s minds about cross-device in general.”
Sivaramakrishnan acknowledged that there was some doubt in the industry about the Nielsen numbers. The self-serve graph, she said, goes beyond that kind of validation.
“Some more sophisticated brands and marketers reacted with some skepticism about the Nielsen validation,” she said. “But a self-service platform lets them test the quality of the data and see for themselves.”