BlueKai CEO Omar Tawakol discusses what his company offers in terms of linking consumers across devices.
This is a continuation of a series that previously featured John Nardone, CEO of [x+1]. On Tuesday we’ll post an interview on this subject with Bill Demas, CEO of Turn, and on Wednesday we’ll post an interview with Scott Howe, CEO of Acxiom.
AdExchanger: What are the client demands around connecting mobile devices with other consumer activity?
OMAR TAWAKOL: At the end of the day, there’s one customer, and whether they’re coming through search, a site, a mobile application or the store, it’s still one customer. The more [clients are] able to unify their knowledge of the customer, the better the service for that customer. And that’s a C-level mandate from our clients as they try to get an omnichannel strategy in place for 2014.
So what can be done? The first thing we’re starting to see is offline-to-online linkages. That comes through various providers and it’s pretty standard. The linkage that’s more interesting right now is online to mobile. Now you’ve got a ring of three channels that are connected. We’re seeing and deploying two types of techniques to get from online to mobile.
How do you get that linkage?
The first is the statistical technique, which says this user is probably, with a very high probability, the same user I saw online. It has high reach because, with a statistical technique, you’re essentially making a guess about everybody and applying a statistical cutoff. There’s another technique we’re also deploying, which is nonstatistical and more accurate. It says: I know for sure that this is the same user on the MacBook Air as on that iPhone. Those techniques have lower coverage because they leverage a relationship that might be in place between the two devices, like a billing relationship or a carrier relationship. So they’re higher accuracy but have far lower reach.
The reason we’re deploying both is that neither solution is yet ideal. Both are still forming. You use the higher-accuracy technique where you can, and where you lack the coverage, you use the higher-reach statistical approach.
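To make that tradeoff concrete, here is a minimal sketch (in Python) of how such a hybrid matcher could work. The signals, weights, cutoff and device IDs below are invented for illustration; this is not BlueKai’s actual model.

```python
# Hypothetical hybrid cross-device matcher: prefer a deterministic linkage
# (e.g., from a billing or carrier relationship) and fall back to a
# probabilistic score with a statistical cutoff.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    device_id: str
    ip_prefix: str          # shared home/office network is a weak signal
    geo: str                # coarse location
    active_hours: set[int]  # hours of day the device is typically active

# Assumed deterministic table, e.g., built from a carrier or billing
# relationship that links two device IDs with certainty (low reach).
DETERMINISTIC_LINKS: dict[str, str] = {"macbook-123": "iphone-456"}

PROBABILISTIC_CUTOFF = 0.8  # assumed cutoff, tuned for precision vs. reach

def match_score(a: Device, b: Device) -> float:
    """Toy probabilistic score: weighted overlap of weak signals."""
    score = 0.4 if a.ip_prefix == b.ip_prefix else 0.0
    score += 0.3 if a.geo == b.geo else 0.0
    overlap = len(a.active_hours & b.active_hours) / max(len(a.active_hours | b.active_hours), 1)
    return score + 0.3 * overlap

def link_devices(a: Device, b: Device) -> Optional[str]:
    # High accuracy, low reach: use the deterministic link when it exists.
    if DETERMINISTIC_LINKS.get(a.device_id) == b.device_id:
        return "deterministic"
    # High reach, lower accuracy: fall back to the statistical bridge.
    if match_score(a, b) >= PROBABILISTIC_CUTOFF:
        return "probabilistic"
    return None  # below the cutoff: do not link

laptop = Device("macbook-123", "203.0.113.", "SF", {8, 9, 20, 21})
phone = Device("iphone-456", "203.0.113.", "SF", {8, 12, 20, 22})
print(link_devices(laptop, phone))  # "deterministic", via the hard link
```

The deterministic path answers first because it is certain; the probabilistic path trades accuracy for reach whenever no hard link exists.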
What’s the accuracy percentage?
Say you’re using an iPhone, and you’re using a statistical technique to identify someone on the mobile Web or in a mobile app. As you know, on the mobile Web you can’t use a third-party cookie, and if you don’t have access to the advertiser ID, you’ll need a statistical bridge. Those bridges tend to be in the high-70% accuracy range, and there are things you can do to boost that. People talk about those bridges being good. I personally don’t think that’s a great place to be. You need to use techniques other than statistics to try to boost the accuracy levels. The industry seems happy that high 70s is a good thing, but in my opinion it’s going to get better and it needs to get better.
What is ideal? Obviously, 100% is ideal but you can’t get that…
Yeah, you can get 100%. That’s the lower-coverage technique, where you have some sort of stable first-party identifier like the carrier ID. The carrier ID is in the IP traffic that goes over the mobile Web and through mobile apps. When you have that kind of signal, it’s 100% accurate. But you don’t have it on all carriers, and you don’t have access to it unless you have the right deals.
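For illustration, a sketch of what consuming such a carrier signal could look like on the receiving side. The header names here are assumptions; carriers that inject a subscriber identifier use their own names, and access typically requires a commercial deal.

```python
# Minimal sketch, assuming a hypothetical carrier-injected HTTP header that
# carries a stable subscriber ID. When present, it links mobile-web and
# in-app traffic for the same subscriber deterministically.

from typing import Mapping, Optional

# Hypothetical header names; actual names vary by carrier.
CARRIER_ID_HEADERS = ("X-Carrier-Subscriber-Id", "X-UIDH")

def extract_carrier_id(headers: Mapping[str, str]) -> Optional[str]:
    """Return a carrier-injected subscriber ID if the request carries one."""
    for name in CARRIER_ID_HEADERS:
        if name in headers:
            return headers[name]
    return None  # no deal with this carrier, or the header was stripped
```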
What you’ll see in 2014 is this battle play out. The high-accuracy techniques will fight to get higher reach. The statistical techniques with higher reach will battle to get higher accuracy. We’ll see who wins. Our bet? It doesn’t matter, because we’re working with both techniques, and with multiple vendors.
How can the statistical approach get more accurate?
If you just use statistics alone, you can only fine-tune accuracy a little bit at a time, because there’s going to be, essentially, a war from the browsers, which will try to reduce the entropy that comes through the browser so the statistical techniques do worse and worse. While [advertisers using statistical methods] try to do better, the browsers are trying to make them do worse. Fingerprinting works because subtle differentiating attributes come through the browser. The browser wants to reduce those differentiating bits so you can’t do what you want to do.
That war will play out such that even if the statistical ID gets better, it will also degrade over time. So you’ll use other techniques, like stable first-party IDs, and you’ll have to have enough of those that you’re not reliant only on statistical techniques.
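A rough way to see why this war matters: each attribute the browser exposes contributes identifying bits, and coarsening attributes shrinks the total. The attributes and value counts below are invented for illustration.

```python
# Toy model of fingerprint entropy. An independent attribute with N distinct
# values across the population contributes log2(N) identifying bits; if the
# browser coarsens an attribute (fewer distinct values), total entropy, and
# with it the statistical technique's accuracy, drops.

import math

def entropy_bits(distinct_values: int) -> float:
    """Identifying bits contributed by an independent attribute."""
    return math.log2(distinct_values)

# (attribute, distinct values today, distinct values after coarsening)
attributes = [
    ("user_agent",  5000,  50),  # e.g., frozen or reduced UA strings
    ("screen_size", 200,   10),  # e.g., rounded to common buckets
    ("font_list",   10000, 1),   # e.g., a fixed default font set
    ("timezone",    40,    40),
]

today = sum(entropy_bits(n) for _, n, _ in attributes)
coarsened = sum(entropy_bits(n) for _, _, n in attributes)
print(f"fingerprint entropy today: {today:.1f} bits")      # ~38.5 bits
print(f"after browser coarsening:  {coarsened:.1f} bits")  # ~14.3 bits
# About 33 bits suffice to single out one user among ~8.6 billion, so
# cutting the bits well below that makes misidentification common.
```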
Why is there that push from the browsers to obfuscate information that might help advertisers?
I have 50 reasons I could tell you, but I can’t because I don’t want to speak to the browsers’ business. They have at certain times disclosed to me what the reasons are, but unfortunately I can’t talk about it.
Are their reasons valid?
Let’s not talk about browsers. Let’s step back. There’s a war for owning addressability, which used to be free because it was a third-party cookie and it was a level playing field.
A lot of players don’t like that. They want it replaced with a controlled, for-pay mechanism. Once that’s up for grabs, who’s going to fight for it? The browser manufacturers, the device manufacturers, the carriers who carry the traffic, the ad exchanges who control the aperture of what you see in a real-time bid request, and the sites with lots of logins and lots of customers.
Those are five players who enter the battle with very different resources. There won’t be a single winner. There will be more of an oligarchy, and the role of something like a DMP will be to figure out how to translate across those ID spaces. A single winner would be bad for the industry, and it probably won’t happen because there are too many big forces in play.
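To illustrate the translation role he describes, here is a toy sketch of a DMP-style ID graph that maps an identifier in one namespace to the same user’s identifiers in another. The namespaces, IDs and links are hypothetical.

```python
# Toy DMP-style ID graph: each edge asserts that two (namespace, id) pairs
# refer to the same user; translation is a walk over the graph.

from collections import defaultdict

id_graph: dict[tuple[str, str], set[tuple[str, str]]] = defaultdict(set)

def link(a: tuple[str, str], b: tuple[str, str]) -> None:
    id_graph[a].add(b)
    id_graph[b].add(a)

# Hypothetical links contributed by different ID owners.
link(("browser_cookie", "abc123"), ("carrier_id", "sub-789"))
link(("carrier_id", "sub-789"), ("login_site", "user@example.com"))

def translate(src: tuple[str, str], target_ns: str) -> set[str]:
    """Find the user's IDs in target_ns reachable from src."""
    seen, frontier, out = {src}, [src], set()
    while frontier:
        node = frontier.pop()
        for nxt in id_graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
                if nxt[0] == target_ns:
                    out.add(nxt[1])
    return out

print(translate(("browser_cookie", "abc123"), "login_site"))
# {'user@example.com'}
```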