"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Marcus Pratt, director of insights and technology at Mediasmith.
Digital media provides massive amounts of data against which to plan, optimize and buy. Smart marketers have learned to turn this data into information that drives insights and creates advantage, but not all data is useful. In fact, some data creates harm in marketing campaigns.
A Hypothetical RFP
A media planner would never issue an RFP stating “we are seeking low-quality inventory, preferably serving significant amounts of fraudulent impressions against bots and click farms.”
However, an RFP asking for impressions delivering a high CTR or low cost-per-click, without concern for inventory quality, can encourage vendors to buy low-quality traffic, whether intentionally or not. This approach to planning and optimizing media is not uncommon across advertisers of all types.
Similarly, a direct-response advertiser optimizing to last-touch conversions may well be optimizing to tactics that deliver unseen impressions that provide no incremental performance uplift, effective only in dropping the final cookie on a user about to make a purchase.
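The problem above can be sketched with a few lines of arithmetic. All numbers here are hypothetical, chosen only to show how a last-touch report can credit conversions to a tactic that produces zero lift versus an unexposed holdout group:

```python
# Hypothetical figures, for illustration only: a "cookie-dropping" tactic
# that reaches users who were about to buy anyway.

holdout_rate = 0.04   # assumed purchase rate with no ad exposure
exposed_rate = 0.04   # identical rate: the unseen impressions change nothing

exposed_users = 100_000

# A last-touch report credits every purchase preceded by the tactic's cookie
last_touch_conversions = int(exposed_users * exposed_rate)

# The true effect is the lift over the holdout group
incremental_lift = exposed_rate - holdout_rate

print(last_touch_conversions)  # 4000 conversions credited to the tactic
print(incremental_lift)        # 0.0 -> no incremental performance
```

The report shows 4,000 conversions; the holdout comparison shows the tactic added none of them.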
In both of these examples, the marketer is making decisions based on an incomplete data set. A report may show that one website clearly delivers more clicks than another, but the quality of that traffic is not included in the same report. Unfortunately, it is never possible to have every relevant data point, so we must make decisions based on what is available. Great marketers are able to see beyond the data in front of them, thinking about what data they don't have and what factors may skew the data they do have.
Lack Of Measurement ≠ Lack Of Results
Many marketers insist on being able to measure the effectiveness of a campaign before committing to an investment. Certainly this is reasonable in an industry so focused on measurement, but this approach can lead to missed opportunities, as many new and innovative tactics are difficult to measure.
Mobile advertising has not been as quick to grow as many initially hoped; it has been “the year of mobile” for nearly a decade now, depending on whom you ask. PlaceIQ even registered the domain to capture search traffic for “year of mobile,” which Google shows has been fairly constant since 2007.
There are many reasons mobile advertising has been slow to grow, but measurement challenges are not helping. In a previous post on AdExchanger, Jeremy Steinberg made the case that solving attribution will fix mobile monetization. Jeremy may be right, but many advertisers are holding off on significant investment in mobile media because they cannot effectively measure the results using standard Web display methods.
In essence, some marketers are deciding not to advertise in a particular medium due to lack of data. But does this make sense?
One Weird Trick To Increase Results
I recall one programmatic media buyer who gave me a clever "tip" to increase performance of DR campaigns by about 10% across any campaign. The advice was simple: Block users of the Safari browser from all campaigns. The reason this "works" is because, even on desktop computers, the default setting in Safari is to reject third-party cookies. Since ad servers and DSPs are considered third parties, these platforms tend to significantly underreport conversions from the Safari browser. The issue is not that consumers running Safari fail to convert or that those users spend less – in fact, Orbitz found Mac users tend to spend more. This trick simply manipulates a data loophole for better reported results.
If I were to block all Safari users on programmatic buys, numbers might look better, but I could actually cause my clients to lose business. Particularly for advertisers with a target audience that skews more affluent or more artistic/design-focused, removing 10% of a qualified audience from every campaign could hurt the results.
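A small, hypothetical calculation shows why this "trick" is a measurement artifact rather than a performance gain. The segment sizes, conversion counts and measurability rates below are invented for illustration; the only real-world assumption is that Safari's default blocking of third-party cookies hides most of its conversions from ad servers and DSPs:

```python
# Hypothetical illustration: excluding Safari inflates *reported* CVR
# because Safari conversions go largely untracked, while real sales are lost.

segments = {
    # browser: (impressions, actual_conversions, share_measurable)
    "safari": (10_000, 120, 0.10),  # most conversions invisible to 3rd-party cookies
    "other":  (90_000, 900, 0.95),
}

def reported_cvr(included):
    """Conversion rate as an ad server would report it (tracked only)."""
    imps = sum(segments[b][0] for b in included)
    tracked = sum(segments[b][1] * segments[b][2] for b in included)
    return tracked / imps

def actual_conversions(included):
    """Conversions that really happened, tracked or not."""
    return sum(segments[b][1] for b in included)

all_browsers = ["safari", "other"]
no_safari = ["other"]

print(f"{reported_cvr(all_browsers):.3%}")  # reported CVR including Safari
print(f"{reported_cvr(no_safari):.3%}")     # reported CVR jumps once Safari is blocked
print(actual_conversions(all_browsers) - actual_conversions(no_safari))
# -> 120 real conversions given up to make the report look better
```

Under these made-up numbers, blocking Safari lifts the reported conversion rate by roughly 10% while the campaign quietly forfeits every sale that segment would have produced.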
Taking this a step further, consider the idea of blocking all Safari users on iPads, a practice enforced on many campaigns. Consider that tablets are quickly overtaking computers for consumer Web browsing, second-screen use, social media and email, with 34% of US adults owning a tablet as of May (according to Pew Research). Among all tablet owners, iPad owners tend to be more affluent and could be higher spenders. Avoiding iPad users could mean missing out on valuable prospects entirely or unintentionally skewing the audience of a campaign.
Seeing these examples where following the data leads us astray, I think we should all consider getting back to basics and applying good judgment where needed. Generally, advertising is effective when a target audience is exposed to a message (ideally multiple times) that resonates with them. It is easy to get caught up in cookie-level data, daily performance tracking and the allure of big data and forget a little bit of advertising 101.
The next time you are making a decision based on a set of data, consider asking yourself a few questions:
- What information is not in this report?
- How does this data fit into the bigger picture?
- What outside factors could skew this data in one direction or another?
- Is there anything here that seems to be without explanation?
Hopefully these questions will lead to deeper exploration and better decision-making.