Opinion Polls And Ad Campaigns Are More Similar Than You Might Think

“AdExchanger Politics” is a weekly column tracking developments in the 2016 political campaign cycle.

Today’s column is written by Kevin Tan, CEO at Eyeota.

With election season heating up in the US and Australia, political candidates have their eyes on the prize. But opinion polls are confusing and occasionally inaccurate – a problem with clear parallels to ad campaigns, where using the right data sample is just as important.

Hillary Clinton polled 21 points ahead of Bernie Sanders in this year’s Michigan Democratic primary, yet lost to Sanders by 1.5 points when the votes were counted. She was also predicted to win Indiana, but lost there, too.

Political polls’ inaccuracy has given them a bad reputation in recent years. And for good reason, too. Surely if they were doing their job, the outcome would never come as a shock. Yet increasingly, it does.

So why are the polls so wrong? It is all about the sample.

Need For The Right Sample Size

Sampling has been blamed as the reason for such poor forecasts in many modern-day political polls. In most cases, the sample used was not representative enough of the diverse voting public; therefore the results were skewed and inaccurate.

For instance, polls underestimated youth turnout in the US primaries, hence Sanders’ surprise victories. Apparently, not many 18- to 35-year-olds are likely to answer their home phones at 10 in the morning.
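The mechanism here is easy to see with a toy simulation (not anything from the column – the population shares, candidate preferences and phone-response rates below are all invented for illustration). If younger voters both lean toward one candidate and are harder to reach by phone, the poll systematically understates that candidate's support:

```python
import random

random.seed(0)

# Hypothetical electorate: 40% young (18-35) voters who back Candidate A
# at 70%, and 60% older voters who back Candidate A at only 45%.
population = (
    [("young", random.random() < 0.70) for _ in range(40_000)]
    + [("older", random.random() < 0.45) for _ in range(60_000)]
)

# Candidate A's actual support across the whole electorate (~55%).
true_support = sum(vote for _, vote in population) / len(population)

def phone_poll(pop, n=1_000):
    """Poll n respondents, but reach young voters only 25% as often."""
    sample = []
    while len(sample) < n:
        age, vote = random.choice(pop)
        if random.random() < (0.25 if age == "young" else 1.0):
            sample.append(vote)
    return sum(sample) / n

print(f"true support:   {true_support:.1%}")
print(f"polled support: {phone_poll(population):.1%}")
```

With these made-up numbers the phone poll lands several points below the true figure – not because the sample is small, but because it is unrepresentative.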

In the 2015 UK general election, polls predicted a close call between Labour and the Conservatives – but the Conservatives won an outright majority of 12 seats, forcing the resignation of Ed Miliband and consigning Labour to another five years on the opposition benches.

After the pollsters got it so wrong, an independent inquiry was launched to investigate why. It found that despite the large sample size, significant groups of the population were underrepresented – specifically older voters and Conservative supporters – many of whom were perhaps too shy to admit to their conservative affiliation.


With Brexit fever gripping the UK, polls have been unable to reach a clear consensus on the outcome of the EU referendum. James Kanagasooriam, an analyst for pollster Populus, and Matt Singh, who runs Number Cruncher Politics, may have found the answer to these inconsistencies. They ran a phone poll that found an 11-point lead for remaining in the EU, while a similar online version of the poll gave those who wanted to leave the EU a six-point lead.

Because the two polling methods reach different respondents, their biases point in opposite directions: The phone poll makes the electorate look more liberal, while the online poll makes it look more conservative. Factors such as these must be taken into consideration because they play a major role in the result.

Advertisers: Be Wary Of Small, Modeled Data Sets

So with political polls coming under so much criticism, what would happen if ad campaigns were run in the same way? Imagine advertisers charging clients for data sets used to determine the placements that would reach their target audiences – only for a hidden flaw to skew the outcome.

The issue with data, especially behavioral data, is that you can never be sure what that flaw is. What if you were applying a profile to your data – assuming that everyone who fits a specific demographic behaves the same? What if geographic factors need to be considered? Or political allegiances?

Brands need to bear in mind that if they are using modeled data, which draws inferences from small samples, they must continuously ask themselves what potential factors could skew that sample. Fortunately, higher-quality data sets are emerging, which reduces these risks.
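Sample size matters too, and its effect is easy to quantify. As a rough sketch (standard textbook statistics, not a calculation from the column), the 95% margin of error for a proportion estimated from a simple random sample shrinks only with the square root of the sample size – so small seed samples leave wide uncertainty even before any skew is considered:

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: \u00b1{margin_of_error(n):.1%}")
```

A tenfold increase in sample size only cuts the margin of error by about a factor of three – and none of this accounts for the representativeness problems described above, which no amount of extra volume fixes on its own.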

As with opinion polls, when it comes to data, more – and more representative – is definitely better.

Follow Eyeota (@EyeotaTweets) and AdExchanger (@adexchanger) on Twitter.
