
How Did Pollsters Do in the Midterm Elections?

Our most-read blog post appeared the morning after the 2016 Presidential election. It is a post we are proud of because it was composed in the haze of a shocking election result. While many were celebrating their side’s victory or reeling from their side’s losses, we mused about what the result meant for the market research industry.

We predicted pollsters would become defensive and try to convince everyone that the polls really weren’t all that bad. In fact, the 2016 polls really weren’t: predictions of the popular vote tended to be within a percentage point and a half or so of the actual result, which was better than in the previous Presidential election in 2012. However, our concern about the 2016 polls wasn’t how close they came to the result. The issue was one of bias: 22 of the 25 final polls we found made an inaccurate prediction, and almost every poll was off in the same direction. That is the very definition of bias in market research.

Suppose you had 25 people each flip a coin 100 times. On average, you’d expect 50% of the flips to be heads. If, say, 48% of them were heads, you shouldn’t be all that worried, as that can happen by chance. But if 22 of the 25 people all got fewer than 50% heads, you should worry that something was wrong with the coins or the way they were flipped. That, in essence, is what happened with the polls in the 2016 election.
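To put a rough number on that intuition, here is a quick binomial sketch (our illustration, not from the original post): if 25 unbiased polls were each equally likely to miss high or miss low, the chance that 22 or more would miss in the same direction is tiny.

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p); p=0.5 models an unbiased poll."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 25 polls, each equally likely to err in either direction if unbiased.
one_sided = prob_at_least(25, 22)   # 22+ miss in one particular direction
two_sided = 2 * one_sided           # 22+ miss in the *same* direction, either side
print(f"P(22 or more of 25 err the same way) = {two_sided:.6f}")
```

The probability works out to roughly 0.00016, or about 1 in 6,000, which is why a pattern like the 2016 polls is strong evidence of bias rather than bad luck.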

Anyway, this post is being composed in the aftermath of the 2018 midterm elections. How did the pollsters do this time?

Let’s start with FiveThirtyEight.com. We like this site because they place probabilities around their predictions. Of course, this gives them plausible deniability when their prediction is incorrect, as probabilities are never 0% or 100%. (In 2016 they gave Donald Trump a 17% chance of winning and then defended their prediction.) But this organization looks at statistics in the right way.

Below is their final forecast and the actual result. Some results are still pending, but at this moment, this is how it shapes up.

  • Prediction: Republicans holding 52 seats in the Senate. Result: It looks like Republicans will have 53 seats.
  • Prediction: Democrats holding 234 and Republicans holding 231 House seats. Result: It looks like Democrats will have 235 or 236 seats.
  • Prediction: Republicans holding 26 and Democrats holding 24 Governorships. Result: Republicans now hold 26 and Democrats hold 24 Governorships.

It looks like FiveThirtyEight.com nailed this one. We also reviewed a prediction market and state-level polls, and it seems that this time around the polls did a much better job in terms of making accurate predictions. (We must say that on election night, FiveThirtyEight’s predictions were all over the place when they were reporting in real time. But, as results settled, their pre-election forecast looked very good.)

So, why did polls seem to do so much better in 2018 than 2016? One reason is the errors cancel out when you look at large numbers of races. Sure, the polls predicted Democrats would have 234 seats, and that is roughly what they achieved. But, in how many of the 435 races did the polls make the right prediction? That is the relevant question, as it could be the case that the polls made a lot of bad predictions that compensated for each other in the total.

That is a challenging analysis to do because some races had a lot of polling, others did not, and some polls are more credible than others. A cursory look at the polls suggests that 2018 was a comeback victory for the pollsters. We did sense a bit of an over-prediction favoring the Republican Senatorial candidates, but on the House side there does not seem to be a clear bias.

So, what did the pollsters do differently? Not much, really. Online sampling continues to evolve and improve, and the 2016 result has pushed polling firms to concentrate more carefully on their sampling. One issue that may have contributed to the 2016 problem is that pollsters had begun to rely almost exclusively on the top two or three panel companies. Since 2016, there has been further consolidation among sample suppliers, and as a result we are seeing less variance in polls, as pollsters are largely all drawing from the same few sample sources.

Another key difference was that turnout in the midterms was historically high. Polls are more accurate in high turnout races, as polls almost always survey many people who do not end up showing up on election day, particularly young people. However, there are large and growing demographic differences (age, gender, race/ethnicity) in supporters of each party, and that greatly complicates polling accuracy. Some demographic subgroups are far more likely than others to take part in a poll.

Pollsters are also starting to get online polling right. Many of the legacy firms in this space were entrenched in the telephone polling world, protective of their aging methodologies, and slow to change. After nearly 20 years of online polling, the upstarts have finally forced the bigger polling firms to question their approaches and move to a world where telephone polling just doesn’t make much sense. In addition, many of the old-guard telephone polling experts, who largely led the resistance to online polling, have now retired or passed on.

Gerrymandering helps the pollster as well. Relatively few districts are competitive: Pew suggests that only 1 in 7 districts was competitive, so you don’t have to be a pollster to accurately predict how about 85% of the races will turn out. Only about 65 of the 435 House races were truly at stake. Even if you just flipped a coin in those races, your total prediction of House seats would have been fairly close.
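A short sketch shows why coin flipping in the toss-ups gets you close (our illustration; the 65-race figure is from the Pew estimate above, and treating each toss-up as a 50/50 coin flip is our simplifying assumption):

```python
from math import sqrt

competitive = 65   # House races treated as pure toss-ups
p = 0.5            # coin flip per race

# Wins in the toss-ups follow Binomial(65, 0.5).
expected_wins = competitive * p                # 32.5 seats
sd = sqrt(competitive * p * (1 - p))           # standard deviation in seats
print(f"Expected toss-up wins: {expected_wins}, SD: {sd:.1f} seats")
```

The standard deviation is about 4 seats, so a 95% interval is roughly 32.5 plus or minus 8. Even pure guessing in the competitive races lands the national seat total within single digits of the truth, because the other 370 races are effectively known in advance.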

Of course, pollsters may have just gotten lucky. We view that as unlikely, though, as there were too many races. Unlike in 2016, in 2018 we haven’t seen any evidence of bias (in a statistical sense) in the direction of polling errors.

So, this is a solid comeback for the polling industry and should give us greater confidence heading into 2020. It is important that the research industry broadcasts this success. When pollsters have a bad day, as they did in 2016, market research suffers too: our clients lose confidence in our ability to provide accurate information. When the pollsters get it right, it helps the research industry as well.

NEW POLL SHOWS THAT IF US PRESIDENTIAL ELECTION WERE HELD AGAIN, INCREASED TURNOUT WOULD LIKELY RESULT IN A CLINTON VICTORY

Crux Research poll shows 92% of Trump voters and 91% of Clinton voters would not change their vote

ROCHESTER, NY – MARCH 12, 2017 – Polling results released today by Crux Research show that if there were a “do over” and the election were held again tomorrow, Hillary Clinton would likely win the Presidency. But this would not happen as a result of voters changing their votes; rather, voters who didn’t turn out in the fall would provide an edge to Clinton.

In 2016, the popular vote was 48.0% for Hillary Clinton and 45.9% for Donald Trump (a gap of 2.1 points)[1]. This new poll shows that if the election were held again among these two candidates, the popular vote would be an estimated 52.9% Clinton and 47.1% Trump (a gap of 5.8 points).

Further, few Clinton or Trump supporters would change their voting behaviors:

  • 92% of those who voted for Trump in November would vote for him again tomorrow.
  • 91% of those who voted for Clinton in November would vote for her again tomorrow.

A new election would bring out additional voters: 57% of those who did not vote in 2016 say they would intend to vote, and their votes would split approximately 60% for Clinton and 40% for Trump. So, increased turnout would likely provide a decisive edge to Clinton.

A closer look at swing states (the five states where the winner won by 2 percentage points or less[2]) shows that Clinton would win these states by a gap of 9.3 points, likely enough to change the election result.

WHO WOULD WIN TOMORROW?
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

                     Actual 2016 Election Result    March 2017 Crux Research Poll*
  Donald Trump       45.9%                          47.1%
  Hillary Clinton    48.0%                          52.9%
  Others             6.0%                           n/a

*The 2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.

VOTE SWITCHING BEHAVIOR
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

                     Voted for Trump in 2016    Voted for Clinton in 2016
  Donald Trump       92%                        1%
  Hillary Clinton    1%                         91%
  Others             4%                         7%
  Wouldn’t vote      2%                         1%

SWING STATES RESULTS
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

                     Actual 2016 Result in Swing States**    Swing States, March 2017 Crux Research Poll*
  Donald Trump       48.0%                                   47.1%
  Hillary Clinton    47.2%                                   52.9%
  Others             4.8%                                    n/a

*The 2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.
**Swing states are the five states where the election was decided by 2 percentage points or less (PA, MI, WI, FL, and NH).

###

Methodology

This poll was conducted online between March 6 and March 10, 2017. The sample size was 1,010 US adults (aged 18 and over). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the population.  The poll was also balanced to reflect the actual proportion of voters who voted for each candidate in the 2016 election.

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” can be misleading for online polls, which are not based on a probability sample, a requirement for margin of error calculations. If this study had used probability sampling, the margin of error would be +/-3%.
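For readers curious where the +/-3% comes from, here is the standard calculation for a probability sample of this size (our illustration, using the conventional worst-case proportion and 95% confidence level):

```python
from math import sqrt

n = 1010    # sample size from the methodology section
p = 0.5     # worst-case proportion, which maximizes the margin
z = 1.96    # z-score for 95% confidence

moe = z * sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{moe * 100:.1f}%")
```

This yields about +/-3.1%, which rounds to the +/-3% quoted above. The point in the methodology stands, though: this formula is only strictly valid for probability samples.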

About Crux Research Inc.

Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.

Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit www.cruxresearch.com.

[1] http://uselectionatlas.org/RESULTS/index.html

[2] PA, MI, WI, FL, and NH were decided by 2 percentage points or less.

