“The Sell Sider” is a column written by the sell side of the digital media community.
Today’s column is written by James Curran, CEO and founder at STAQ.
When I heard about the “cable wars” among the major investment banks competing to minimize latency, I completely understood. They wanted their fiber-optic connections to the exchange servers to be as short as possible to gain an advantage during trading.
Latency is a big deal for publishers, too. But here’s the problem: Many of Goldman Sachs’ individual trades are larger than an entire year’s revenue for a single publisher. Yet digital publishers must take as much care as Goldman to minimize latency.
Header bidding is the latest latency culprit that publishers must contend with. Many publishers are painfully aware of the latency that header bidding causes. However, I find that publishers are looking at header bidding latency only as a user experience killer.
There is another issue that matters just as much: impression discrepancy. If not managed correctly, impression discrepancies may cause header bidding to lose money for publishers.
The Ecosystem Strikes Again
Publishers need to create a more realistic calculation of header bidding revenue by factoring discrepancies into their line-item valuations. Some header bidding solutions can cause up to a 50% discrepancy between the publisher’s ad server impression reports and the impression reports from the programmatic partner. That means a $2 CPM is really a $1 CPM once you account for the adjustments made by the exchange for viewability, verification and performance tracking.
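The arithmetic behind that claim is simple: discount the reported CPM by the share of impressions the partner ultimately refuses to pay for. A minimal sketch (the helper name and signature are illustrative, not from any particular ad tech stack):

```python
def effective_cpm(reported_cpm: float, discrepancy_rate: float) -> float:
    """Discount a reported CPM by the fraction of impressions the
    programmatic partner discards during verification and billing."""
    if not 0 <= discrepancy_rate < 1:
        raise ValueError("discrepancy_rate must be in [0, 1)")
    return reported_cpm * (1 - discrepancy_rate)

# A $2.00 CPM with a 50% discrepancy nets out to a $1.00 CPM.
print(effective_cpm(2.00, 0.50))  # 1.0
```

The same formula applies at any discrepancy level, which is what makes per-partner recalibration tractable.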
The discrepancy is so large because when there is an impression call, it must go through many partners, each with its own verification method and standard. Each auction participant, including the supply-side platform (SSP), demand-side platform and exchange, verifies the impression and conducts audits to eliminate fraud, bots and double counting before posting official stats during billing and payout.
Each step is done for a good reason, but together they add up to a very difficult process for operations teams across the ecosystem. It’s as if a bidder won an item on the auction floor but, after inspecting it up close at the end of the day, was only willing to pay 85% of the winning bid.
Publishers can control for some of this by working directly with verification companies to identify and cleanse their traffic sources and optimize their placements for viewability. If a programmatic partner is particularly slow to respond with impressions after winning a bid, publishers should flag it, since the cause may be a fixable technical issue.
Yet even the most stringent quality management can’t eliminate discrepancies that threaten the value of header bidding. Even the best-managed sites can expect discrepancies between 20% and 30%. That’s significant.
Factoring Discrepancies Into Each Bid
Publishers cannot control for discrepancies entirely, but they can factor the discrepancies they see for each partner into the value they assign to that partner. If one SSP regularly shows a 20% discrepancy, its $1 “bid bucket” should be recalibrated to $0.80.
Managing header bidding discrepancies, like much of digital ad tech, is complex and time-consuming. To make these changes, publishers need to recalibrate hundreds, if not thousands, of line items, which makes header bidding just as manual as a waterfall setup. This manual labor is another cost that publishers must include when determining the value of header bidding.
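The recalibration itself is straightforward arithmetic; the cost lies in applying it across thousands of line items. A minimal sketch of the bulk step, assuming hypothetical partner names, discrepancy rates and line items (real setups would pull these from ad server and billing reports):

```python
# Observed per-partner discrepancy rates (hypothetical examples).
observed_discrepancy = {"ssp_a": 0.20, "ssp_b": 0.30}

# A few of the hundreds or thousands of line items a publisher maintains.
line_items = [
    {"partner": "ssp_a", "bucket_cpm": 1.00},
    {"partner": "ssp_b", "bucket_cpm": 2.50},
    {"partner": "ssp_a", "bucket_cpm": 0.50},
]

# Discount each line item's bid bucket by its partner's discrepancy rate.
for item in line_items:
    rate = observed_discrepancy.get(item["partner"], 0.0)
    item["effective_cpm"] = round(item["bucket_cpm"] * (1 - rate), 2)

print(line_items[0]["effective_cpm"])  # 0.8 — a $1 bucket at 20% discrepancy
```

In practice the loop would feed an ad server’s bulk line-item update rather than a print statement, but the adjustment is the same one-line discount applied everywhere.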
Accounting for discrepancies is a good idea. Not only does it create a more accurate picture for publishers, it also puts pressure on the industry to decrease discrepancies. Partners that often win bids on a high initial CPM might see volume decrease after they are recalibrated. That’s a good thing. If enough publishers make the effort to accurately account for these gaps, they gain some control over the pricing and bidding happening on their own properties. They might even nudge the market toward more transparent standards for reporting, data scrubbing and quality metrics.