
All Hail the V.A.D.A.R.


“Data-Driven Thinking” is a column written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by David Soloff, CEO, Metamarkets, a publisher analytics company.

Around the time he was busy founding a retail bank, my good friend Josh Reich offered the insight that ‘data trades inversely to liquidity.’ That insight gives rise to two related use cases, relevant to all electronic trading markets, including the one for online ads.

Use Case the First, in which the trade itself is the shortest distance to discovering price

In fast-moving, liquid markets, the best way to discover the value of a given asset is to trade it. This works beautifully in standardized markets like equities, options or commodities futures, reliant as they are on screen-based trading and standardized instrumentation. In these more advanced market structures, trade capture, high-speed information distribution and clearing services are all tightly inter-networked, with the result that in the selfsame moment a bid has been hit, price is revealed. The direct relationship holds true in all but the most broken of markets: trade these standardized assets at higher volumes and lower latency in order to get a more precise and reliable snapshot of valuation. None of this is to say that trading markets come even part of the way toward determining price perfectly, in any philosophical sense of ‘perfect.’

In liquid markets, market data services will lag trade execution as a mechanism for price discovery, which isn’t to say that market data services don’t have a critical role to play in fast-moving markets. No trading desk or electronic broker would outline an execution strategy without the data necessary to give that strategy a solid analytic basis. In this use case, market data provides a solid signal against which to execute: any HFT (high-frequency trading) shop relies on multiple high-quality, high-frequency, neutrally aggregated market data sets to feed the models that form the basis for the day’s trade execution.

In these markets, data is thrown off by the transacting parties with such volume and velocity that the need to keep pace with the capture, analytics and display of financial market data has been the driver of technological innovation in the big data and anomaly detection industries for over a generation. In such environments, transaction data is plentiful and cheap: it’s easy to come by a trading signal to approximately or precisely value an asset you may hold. As a result, the financial value the market places on these data sets trades inversely to the liquidity of the market that generates them. Trading is the primary mechanism by which market actors assimilate all available information; in liquid markets, price discovery is best effected through the transaction. In some regard this is a reflection of capital allocation: actors in a market allocate capital either to the build-out of low-latency clearing infrastructure or to trade capture and data distribution. This is the lagging use case, in which market data validates, tests and optimizes a principal trading strategy.

Use Case the Second, in which the market itself conspires against price discovery

In more difficult, lumpy and opaque markets, great difficulty frequently attends discovering the value of, and finding the appropriate buyer for, an asset. Pricing is a seemingly impossible problem, and as a result illiquidity begets illiquidity as traders sit on assets, either for fear of selling the asset short or of triggering a collapse once a transaction confirms the revaluation. The only answer here is lots and lots of corroborating or triangulating data. Data trades inversely to liquidity. In the absence of a transaction flow to dip one’s toes into, data sets and valuation frameworks are the only mechanism for discovering how to value an asset. This is the case in emerging-market fixed income, distressed debt, CMBS/CDOs, rare earth metals or special-situations stocks.
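To make the idea of triangulation concrete, here is a minimal, hypothetical sketch of how a buyer with no transaction flow might weight a handful of corroborating data points (a comparable trade, a dealer quote, an index mark) by recency and similarity to arrive at a working valuation. The class, function and figures are illustrative assumptions, not anything described in the column or used by any particular data vendor.

```python
# Hypothetical sketch: triangulating a fair value for an illiquid asset
# from a few corroborating data points, weighted by staleness and similarity.
# All names and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DataPoint:
    price: float       # observed or quoted price
    days_old: int      # how stale the observation is
    similarity: float  # 0..1, how comparable the source asset is


def triangulate(points: list[DataPoint], half_life_days: float = 30.0) -> float:
    """Return a recency- and similarity-weighted average price."""
    total_weight = 0.0
    weighted_sum = 0.0
    for p in points:
        recency = 0.5 ** (p.days_old / half_life_days)  # exponential decay
        weight = recency * p.similarity
        total_weight += weight
        weighted_sum += weight * p.price
    if total_weight == 0.0:
        raise ValueError("no usable corroborating data")
    return weighted_sum / total_weight


if __name__ == "__main__":
    observations = [
        DataPoint(price=97.5, days_old=5, similarity=0.9),   # recent comparable trade
        DataPoint(price=95.0, days_old=40, similarity=0.7),  # stale dealer quote
        DataPoint(price=99.0, days_old=2, similarity=0.4),   # loosely related index mark
    ]
    print(f"triangulated fair value: {triangulate(observations):.2f}")
```

The more corroborating points that flow in, the tighter this kind of estimate becomes, which is precisely why the data itself commands a premium when transactions are scarce.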

This need for data presents a major business opportunity. Capitalism abhors a vacuum, and all manner of public and private data sets have been created to help seller and buyer determine a fair trade. In illiquid and opaque markets, capital has been allocated very efficiently to help buyers and sellers learn how to fairly value an asset. Extremely large and profitable businesses have grown up around the creation and delivery of these information products: FactSet, Markit, RiskMetrics, Moody’s and S&P, to name a few. The system of independent information brokers works pretty well, despite periodic cataclysmic hiccups, especially taking into account the amount of capital and risk flowing through these systems’ pipes. Ironically and inevitably, transactional liquidity is a natural consequence of introducing this data to illiquid markets, as actors get their sea legs and trade with increased confidence.

The best form of information vendor for these illiquid markets is typically a contributory data network: individual parties contribute their transaction sets to a common pool in exchange for access to the aggregated data set, which is stored and attended to by a consortium-owned data broker. Mike Driscoll has coined the acronym V.A.D.A.R. to describe these “value-added data aggregators and redistributors.” These are the businesses that turn straw into gold. They can be for-profit, benefiting the data contributors in more ways than one via a revenue share prorated according to the volume of data each contributes. When well executed, they are in many regards perfect businesses: eminently scalable; improving in product quality as more contributors join the pool; low marginal cost to distribute data to new customers; and opportunities for multiple derivative products and business lines.
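As a concrete illustration of the pro-rata economics mentioned above, here is a minimal sketch, assuming a for-profit consortium that splits a revenue pool in proportion to the number of records each party contributes. The contributor names, record counts and pool size are invented for illustration and do not describe any actual V.A.D.A.R.

```python
# Hypothetical sketch of contributory-network economics: a revenue pool is
# shared out in proportion to each contributor's data volume.
# All names and figures are illustrative assumptions.
def prorated_revenue_share(contributed_records: dict[str, int],
                           revenue_pool: float) -> dict[str, float]:
    """Split a revenue pool across contributors in proportion to record count."""
    total = sum(contributed_records.values())
    if total == 0:
        return {name: 0.0 for name in contributed_records}
    return {name: revenue_pool * count / total
            for name, count in contributed_records.items()}


if __name__ == "__main__":
    contributions = {
        "contributor_a": 1_200_000,  # records contributed this period
        "contributor_b": 450_000,
        "contributor_c": 150_000,
    }
    shares = prorated_revenue_share(contributions, revenue_pool=100_000.0)
    for name, share in shares.items():
        print(f"{name}: ${share:,.2f}")
```

The appeal of the design is that the incentive is self-reinforcing: contributing more data both improves the pooled product and increases one’s share of the revenue it generates.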


Peter Borish, another friend with financial markets pedigree, says that for the VADAR to create meaningful products, the mechanism for data capture and processing must display certain key traits:

  1. Capture must be repeatable
  2. Output must be functional
  3. Information products must be rational with demonstrable interrelationships
  4. Data transformation can be opaque, but not an impenetrable black box

When these businesses gain traction, they are things of operational beauty:

  1. Markit: $1B in 2010 sales
  2. CoreLogic: $2B in 2010 sales
  3. IMS Health: $2B in 2009 sales
  4. GfK Group: $1.4B in 2009 sales
  5. Information Resources (IRI, now Symphony IRI): $660m in 2009 sales
  6. Westat Group: $425m in 2009 sales

These businesses act as information aggregators for specific verticals, and by virtue of the scale and quality of the data they collect, they become the de facto gold standard for their industries. Nobody conducts business, whether buying assets or pricing product, without consulting a data set or information product delivered by these companies. These companies are duty-bound to protect and secure the data entrusted to them. By the same token, contributors recognize that only by allowing their data to be analyzed and statistically combined with other data can its value as information be realized.

Postscript

Perhaps the time is now for such a business to emerge to serve the electronic media vertical. Perhaps the absence of information products is what holds the big money on the sidelines. Maybe the introduction of triangulating data will enable buyers to participate more confidently in these opaque and illiquid markets. Perhaps this business could offer two product lines to suit both use cases: information products for low-latency auction markets on the one hand, and for more opaque, hard-to-value assets such as contract-based inventory on the other. Perhaps the rule of efficient capital allocation dictates that this happen sooner rather than later.

Follow David Soloff (@davidsoloff), Metamarkets (@metamx) and AdExchanger.com (@adexchanger) on Twitter.
