“AdExchanger Politics” is a weekly column tracking developments in the 2016 political campaign cycle.
Today’s column is written by Kevin Tan, CEO at Eyeota.
With election season heating up in the US and Australia, political candidates have their eyes on the prize. But opinion polls are confusing and occasionally inaccurate – a problem that holds a clear lesson for ad campaigns, where using the right data sample is just as important.
Hillary Clinton polled 21 points ahead of Bernie Sanders in this year’s Michigan Democratic primary, yet lost to Sanders by 1.5 points when the votes were counted. She was also predicted to win Indiana, but lost there as well.
Political polls’ inaccuracy has given them a bad reputation in recent years. And for good reason, too. Surely if they were doing their job, the outcome would never come as a shock. Yet increasingly, it does.
So why are the polls so wrong? It is all about the sample.
Need For The Right Sample Size
Sampling has been blamed for the poor forecasts of many modern-day political polls. In most cases, the sample was not representative of the diverse voting public, so the results came out skewed and inaccurate.
For instance, polls underestimated youth turnout in the US primaries – hence Sanders’ surprise victories. Apparently, not many 18- to 35-year-olds are likely to answer their home phones at 10 in the morning.
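To see how an unrepresentative sample skews a result, here is a minimal simulation in Python. Everything in it is invented for illustration – the electorate mix, the support rates and the phone pickup rates are assumptions, not figures from any actual poll:

```python
import random

random.seed(42)

# Hypothetical electorate: 30% young voters, 70% older voters.
# Assume (for illustration) 70% of young voters and 40% of older
# voters support the insurgent candidate: true support is ~49%.
population = (
    [("young", random.random() < 0.70) for _ in range(30_000)]
    + [("older", random.random() < 0.40) for _ in range(70_000)]
)

def phone_poll(sample_size, young_pickup_rate):
    """Simulate a phone poll in which young voters rarely answer."""
    responses = []
    while len(responses) < sample_size:
        group, supports = random.choice(population)
        pickup = young_pickup_rate if group == "young" else 0.90
        if random.random() < pickup:
            responses.append(supports)
    return 100 * sum(responses) / len(responses)

true_support = 100 * sum(s for _, s in population) / len(population)
print(f"True support:            {true_support:.1f}%")
print(f"Biased poll (n=1,000):   {phone_poll(1_000, young_pickup_rate=0.20):.1f}%")
print(f"Balanced poll (n=1,000): {phone_poll(1_000, young_pickup_rate=0.90):.1f}%")
```

Run a few times, the biased poll consistently understates the insurgent candidate’s support by several points – not because the sample is too small, but because young voters barely show up in it.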
In the 2015 UK general election, polls predicted a dead heat between Labour and the Conservatives – but the Conservatives won an outright majority of 12 seats, forcing the resignation of Ed Miliband and consigning Labour to another five years on the opposition benches.
After the pollsters got it so wrong, an independent inquiry was launched to investigate why. It found that despite the large sample sizes, significant groups of the population were underrepresented – specifically older voters and Conservative supporters, many of whom were perhaps too shy to admit their Conservative affiliation.
James Kanagasooriam, an analyst for pollster Populus, and Matt Singh, who runs Number Cruncher Politics, may have found the answer. With Brexit fever gripping the UK, polls have been unable to reach a clear consensus on the outcome of the EU referendum. Trying to explain the inconsistency, Kanagasooriam and Singh ran parallel polls: a phone poll found an 11-point lead for remaining in the EU, while a similar online poll gave those who wanted to leave a six-point lead.
Because the two polls recruited respondents differently, their biases point in opposite directions: the phone poll makes the public look more liberal, while the online poll makes it look more conservative. Mode effects like these need to be taken into account, because they play a major role in the result.
Advertisers: Be Wary Of Small, Modeled Data Sets
So with political polls coming under so much criticism, what would happen if ad campaigns were run in the same way? Imagine if advertisers charged clients for data sets used to determine the placements that would reach their target audiences, but a hidden flaw skewed the outcome.
The issue with data, especially behavioral data, is that you can never be sure what that flaw is. What if you were applying a profile to your data – assuming that everyone who fits a specific demographic behaves the same way? What if geographic factors need to be considered? Or political allegiances?
Brands need to bear in mind that if they are using modeled data, which draws inferences from small samples, they must continually ask themselves what factors could be skewing that sample. Fortunately, higher-quality data sets are emerging, which reduces these risks.
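For a back-of-the-envelope sense of how sample size drives uncertainty, the textbook margin-of-error formula for a simple random sample makes the point. This is a generic statistical approximation, not a description of any vendor’s modeling:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion measured
    from a simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: +/- {100 * margin_of_error(n):.1f} points")
```

A sample of 100 leaves roughly a 10-point swing in either direction; it takes 10,000 responses to get within a point. And that is only random error – the representativeness problems described above come on top of it.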
As with opinion polls, when it comes to data, more is definitely better.
Follow Eyeota (@EyeotaTweets) and AdExchanger (@adexchanger) on Twitter.