Introducing Marketing-Stack Management, Powered By Enterprise Machine Learning

"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Nathan Woodman, senior vice president of strategic development at IPONWEB.

There’s a scene in the 2002 Tom Cruise movie, “Minority Report,” that has become legendary in marketing and ad tech circles.

In the scene, Cruise’s character, John Anderton, walks through a crowded mall. Retina scanners and other technologies identify Anderton in real time and serve him entirely personalized ad experiences and product messages. When he walks into a Gap store, a holographic shopper bot immediately recognizes him, asks about previous transactions and recommends additional items he might like.

For advertisers, this looks like the holy grail of marketing: true one-to-one communication – though in a more dystopian, Big Brother kind of way. For mar tech companies, it represents the challenge everybody is trying to solve, and fast.

This race to personalization has resulted in an avalanche of disparate point solutions tackling highly specific pieces of the much larger problem: CRMs, data management platforms (DMPs), demand-side platforms (DSPs), email service providers, recommendation engines, search engine marketing bidding platforms, tag management solutions, measurement and attribution systems, marketing automation platforms and social listening tools – just to name a few.

The average enterprise uses more than 12 of these different solutions to power its digital marketing efforts, with some companies using more than 30 unique tools, according to a 2015 report from Winterberry Group and IAB. With 70% of companies planning to increase their mar tech spending in 2017, one can only assume those numbers will climb even higher.

Most of these tools are smart, powerful and effective at doing what they’re designed to do. Most leverage machine learning to create value, drive performance or both. What happens, though, when they aren’t talking to each other? How smart can they be when they don’t see what the other tools are doing?

Take a DSP, for example. A standard machine-learning application in a DSP may record and use a variety of attributes to decide which impression to buy and how much to bid on it. One such attribute might be the length of time since a user last saw an ad impression.

If an advertiser is using multiple DSPs – as most are – how would any single DSP know the last time a user was exposed to an ad without understanding what the other DSPs are doing? Now imagine compounding this situation with social, search, email, web visits, TV and all other forms of media exposure.

This lack of communication between systems creates massive data gaps, where each system is blind to the data of the other systems being used. This data blindness results in each system’s unique machine-learning applications making decisions and learning behaviors based on incomplete or inaccurate data.
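
To make the recency example concrete, here is a minimal sketch of how a "time since last impression" signal diverges when each DSP computes it from its own logs alone rather than from a unified cross-channel log. The field names, timestamps and log structure are hypothetical illustrations, not any particular DSP's API.

    from datetime import datetime

    # Hypothetical unified impression log: (timestamp, system that served the touch)
    impressions = [
        (datetime(2017, 3, 1, 9, 0),  "dsp_a"),
        (datetime(2017, 3, 1, 12, 0), "dsp_b"),   # invisible to DSP A
        (datetime(2017, 3, 1, 14, 0), "email"),   # invisible to both DSPs
    ]
    now = datetime(2017, 3, 1, 15, 0)

    def hours_since_last_exposure(log, source=None):
        """Recency feature: hours since the user's last impression.
        If `source` is given, only that system's own impressions are visible,
        mimicking a siloed DSP; if None, the full cross-channel log is used."""
        visible = [ts for ts, src in log if source is None or src == source]
        if not visible:
            return None  # this system has never seen the user
        return (now - max(visible)).total_seconds() / 3600

    print(hours_since_last_exposure(impressions, source="dsp_a"))  # 6.0 (siloed view)
    print(hours_since_last_exposure(impressions))                  # 1.0 (holistic view)

A bidder trained on the siloed feature believes the user has gone unreached for six hours when, in reality, another channel touched them an hour ago; every decision built on that feature inherits the error.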

DMPs came along to fill this void, but applying the data they contain has largely failed because the preferred method of distribution has been heuristic rules that guide segmentation, followed by one-way pushes of one-dimensional audience data into existing DSPs. This process flattens the contrast of the underlying data set and degrades performance.
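
One way to picture that flattening: internally a DMP may hold a continuous score per user, but a heuristic segmentation rule reduces it to a yes/no flag before the one-way push to the DSP. Below is a minimal sketch with illustrative numbers, not any real platform's data.

    # Illustrative purchase-propensity scores a DMP might hold internally (0 to 1).
    scores = {"user_1": 0.92, "user_2": 0.61, "user_3": 0.58, "user_4": 0.12}

    # Heuristic rule: a single threshold decides segment membership.
    SEGMENT_THRESHOLD = 0.6
    segment = {uid: score >= SEGMENT_THRESHOLD for uid, score in scores.items()}

    print(segment)
    # {'user_1': True, 'user_2': True, 'user_3': False, 'user_4': False}

The DSP receiving this one-dimensional push treats user_1 and user_2 as identical and user_3 as no different from user_4, even though 0.61 and 0.58 describe nearly the same person. The contrast in the underlying data set is gone before any bidding model ever sees it.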

One approach to solving the data gap problem, at least in the programmatic channel, is to use a single stack with an integrated DMP and DSP. This seems viable, but most brands and agencies use multiple DSPs to achieve mass reach and drive efficiencies. It also doesn’t solve the problem of nonprogrammatic media channels.

Alternatively, a brand or publisher could build its own stack, tightly integrated with its own data sets. This would require an enterprise to build, replace and maintain the 12-plus tools it currently uses with homegrown technology – a daunting task, to say the least.

Perhaps the most viable path lies somewhere in the middle, between consolidation and building one’s own stack. With this approach, the same data that is already being captured in a DMP can be used to build a holistic decisioning engine that looks across an increasing number of digital marketing tools and consumer touch points to decide what message to present to what user in what channel.

The result is an enterprise-centric learning system that sits atop a series of ad tech stacks and executes against marketing directives in as close to real time as the points of distribution allow.
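
In rough code terms, that decisioning layer is a single function that sees every channel's signals for a user and returns a channel and message, rather than each tool deciding in isolation. The sketch below is a hypothetical illustration of the idea, with stand-in rules where a learned model would sit; it is not a description of any vendor's product.

    from dataclasses import dataclass

    @dataclass
    class UserState:
        """Unified view assembled from the DMP and channel logs (hypothetical fields)."""
        hours_since_last_ad: float
        opened_last_email: bool
        visited_site_this_week: bool

    def decide(user):
        """Return (channel, message) for the next touch, using cross-channel signals.
        Stand-in rules; in practice this would be a model scored per channel."""
        if user.hours_since_last_ad < 6:
            return ("none", "suppress - recently exposed elsewhere")
        if user.visited_site_this_week:
            return ("display", "retargeting creative")
        if user.opened_last_email:
            return ("email", "follow-up offer")
        return ("display", "prospecting creative")

    print(decide(UserState(hours_since_last_ad=2, opened_last_email=True,
                           visited_site_this_week=True)))
    # ('none', 'suppress - recently exposed elsewhere')

The suppression decision in the example is exactly the call a siloed DSP cannot make, because it never sees the exposure that happened in another system.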

This is not yet another mar tech product, but rather a process. It requires analyzing current available data sets, including gaps between systems, as well as the distribution interfaces of the existing media channels. Think of it as marketing stack management, powered by enterprise machine learning (bring on the acronyms).

Wider adoption and support for open machine learning or the concept of brand-controlled decisioning algorithms (brandgorithms) that are portable across distribution channels would only accelerate this process. But first, brands and agencies need to ask for this level of openness of their mar tech and ad tech vendors.
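
What "portable" might mean in practice: the brand owns the scoring logic behind one stable interface, and each distribution channel calls that interface instead of substituting its own black box. Here is a hedged sketch of such a contract, with hypothetical names and a deliberately simple linear score standing in for whatever model the brand actually trains.

    class Brandgorithm:
        """Hypothetical brand-owned decisioning algorithm exposed to vendors
        through a single stable method, so the same logic travels across channels."""

        def __init__(self, weights):
            self.weights = weights  # learned offline on the brand's own unified data

        def score(self, features):
            """Return a bid/priority score for an opportunity described by `features`.
            Any DSP, email platform or on-site system calls this rather than its own model."""
            return sum(self.weights.get(name, 0.0) * value for name, value in features.items())

    brand_model = Brandgorithm(weights={"recency_hours": -0.25, "site_visits_7d": 0.5})
    print(brand_model.score({"recency_hours": 3, "site_visits_7d": 2}))  # 0.25

The point is the interface, not the arithmetic: as long as the scoring call is open and consistent, the brand can improve the model without renegotiating every vendor integration.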

Only when all channels are considered holistically can media decisions be made intelligently, bringing the promise of “Minority Report” and one-to-one marketing closer to reality.

Follow IPONWEB (@IPONWEB) and AdExchanger (@adexchanger) on Twitter.


5 Comments

  1. Dead on... There are seemingly two paths brands can go down in terms of implementing a data-driven marketing platform: buy the components and stitch them together, or buy an all-in-one solution from one of the big marketing cloud players (Oracle, SFDC, Adobe, Amobee, etc.). The real value the marketing clouds provide is the ability to seamlessly integrate, transfer and leverage data. That component is built into the overall solution, whereas if you buy all the pieces separately you need a data platform to sit in the middle. DMPs ended up not being the right technology for this, and the new three-letter acronym of the day seems to be CDP (Customer Data Platform; helpful definition here: https://www.quora.com/What-is-the-difference-between-a-Customer-Data-Platform-and-a-DMP-1).

    Whichever direction a brand chooses, the overall direction of the marketplace is clear: data is being organized around the individual customer in order to truly execute people-based marketing. Exciting times for sure...

  2. Well put. The number of channels, channel systems, and disconnected customer identifiers will only grow. There's no realistic hope of any one vendor serving them all, so marketers are stuck with multiple systems whether or not they want to be. This means they need a separate system that pulls in identifiers and data from all those channel systems, connects the identifiers relating to a single person, and then builds a complete customer view. As Scott mentions, that's what a Customer Data Platform is for (see http://www.cdpinstitute.org for more info). On top of that, we'll need an orchestration engine to coordinate messages sent to each person across all channels. This extends to all customer interactions, not just advertising or even just marketing. Sales, customer service, operations, and other groups all need to provide a coordinated customer experience. Consumers think companies can already do this, and get annoyed when they don't.
    Note: I run the Customer Data Platform Institute.

  3. Melissa LaFrance

    Great article. But the obvious and biggest challenge is the walled gardens like Facebook and Amazon, which currently get a disproportionate amount of spend. Until brands that spend in these channels force them to provide better data (Amazon doesn't even tell you who bought), it will be nearly impossible to achieve a true view of the customer or orchestrate a true 1:1 communication strategy. I'd love for open machine learning to be a reality, but for that to truly happen we need these players and Google to participate.

    Love the article, Nate. Keep 'em coming.

  4. I love the idea of being able to get data from various sources and use it to increase relevance and engagement etc.

    Melissa makes a good point about the reality of when this might be able to work, given the hoarding of data and so on, but these are great ideas, Nathan, and I will keep an eye on this.

    Here is a clip of the eye scanner at the Gap scanning Tom Cruise's eyes, which had been replaced so he would not be detected.

    The "robot" speaks back at him and says "Hello Mr. Yakamoto and welcome back to the Gap"...Kind of like when you buy a book on Amazon for a pregnant friend and then you start getting suggested products like diapers instead of tuna fishing rods... All in good time with algorithms "writing themselves" and refining for each situation.

    https://www.youtube.com/watch?v=ITjsb22-EwQ (15 seconds of the Minority Report)

  5. The advantage of combining data integration and media activation functionality is clear. Running both as an integrated platform reduces the lag time between when we learn something new about an individual and when we can use that knowledge to inform and improve the bidding process with programmatic inventory. A deeper, more robust DMP, devoid of the flattening Nathan describes in this article, can supply additional attributes that make the difference between marketing that learns and marketing that is still blindly guessing. We find that look-alike and act-alike modeling significantly improves the deeper the attribute set.

    Multiple DSPs emerged as a feel-good solution to a world in which digital uncertainty and a lack of insight into what was happening down in the programmatic weeds prompted people to avoid putting eggs in too few baskets. So they spread them around instead – a little budget there, and there, and there, and so on. Unfortunately, this limits the ability to learn deeply – the way 10 one-credit classes will never give you the expertise of two comprehensive five-credit classes. That practice made a lot of sense when programmatic was new and AI was more of a novelty. But today, when more and more people are using machine learning to make decisions based on the data available, it's counterproductive. It removes data from the system, and that means worse decisions. And worse decisions mean inefficiency, higher CPAs and less overall insight into your audience.
