Crux Research poll shows 92% of Trump voters and 91% of Clinton voters would not change their vote

ROCHESTER, NY – MARCH 12, 2017 – Polling results released today by Crux Research show that if there were a “do over” and the election were held again tomorrow, Hillary Clinton would likely win the Presidency.  But, this would not happen as a result of voters changing their vote – rather voters who didn’t turn out in the fall would provide an edge to Clinton.

In 2016, the popular vote was 48.0% for Hillary Clinton and 45.9% for Donald Trump (a gap of 2.1)[1].  This new poll shows that if the election were held again among these two candidates, the popular vote would be estimated to be 52.9% Clinton and 47.1% Trump (a gap of 5.8).

Further, few Clinton or Trump supporters would change their voting behaviors:

  • 92% of those who voted for Trump in November would vote for him again tomorrow.
  • 91% of those who voted for Clinton in November would vote for her again tomorrow.

A new election would bring out additional voters: 57% of 2016 non-voters say they would vote, and their votes would split approximately 60% for Clinton and 40% for Trump. So, increased turnout would likely provide a decisive edge to Clinton.

A closer look at swing states (the five states where the winner won by 2 percentage points or less[2]) shows that Clinton would win these states by a gap of 9.3, likely enough to change the election result.

Suppose there was a “do over” and the US presidential election were held again tomorrow.
Whom would you vote for?

                     Actual 2016 Election Result   March 2017 Crux Research Poll*
  Donald Trump                45.9%                           47.1%
  Hillary Clinton             48.0%                           52.9%
  Others                       6.0%                             –

*The 2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.
Suppose there was a “do over” and the US presidential election were held again tomorrow.
Whom would you vote for?

                     Voted for Trump in 2016   Voted for Clinton in 2016
  Donald Trump               92%                         1%
  Hillary Clinton             1%                        91%
  Others                      4%                         7%
  Wouldn’t vote               2%                         1%
Suppose there was a “do over” and the US presidential election were held again tomorrow.
Whom would you vote for?

                     Actual 2016 Result in Swing States**   March 2017 Crux Research Poll (Swing States)*
  Donald Trump                48.0%                                     47.1%
  Hillary Clinton             47.2%                                     52.9%
  Others                       4.8%                                       –

*The 2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.
**Swing states are the five states where the election was decided by 2 percentage points or less (PA, MI, WI, FL, and NH).



This poll was conducted online between March 6 and March 10, 2017. The sample size was 1,010 US adults (aged 18 and over). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the population.  The poll was also balanced to reflect the actual proportion of voters who voted for each candidate in the 2016 election.
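As a sketch of how such quota weighting typically works, each respondent receives a weight equal to the population share of their demographic cell divided by that cell's share of the sample. The cells and proportions below are hypothetical illustrations, not the poll's actual quotas:

```python
# Post-stratification weighting sketch. The cells and proportions are
# hypothetical illustrations, not the actual poll's quotas.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts = {"18-34": 220, "35-54": 350, "55+": 440}

n = sum(sample_counts.values())  # 1,010 respondents

# weight = (population share of cell) / (sample share of cell)
weights = {cell: population_share[cell] / (count / n)
           for cell, count in sample_counts.items()}

# Weighted cell totals now reproduce the population proportions.
for cell, w in weights.items():
    print(f"{cell}: weight = {w:.2f}")
```

Weighted estimates then multiply each response by its respondent's weight, so over- and under-sampled groups count in their correct population proportions.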

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on a probability sample, a requirement for margin of error calculations. If this study had used probability sampling, the margin of error would be +/-3%.
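For context, the +/-3% figure is what the standard formula for a 95% confidence interval around a proportion gives at this sample size, assuming the worst case of p = 0.5:

```python
import math

# 95% margin of error for a proportion under simple random sampling:
# moe = 1.96 * sqrt(p * (1 - p) / n), which is largest when p = 0.5.
n = 1010  # sample size reported above
p = 0.5   # worst-case proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"+/-{moe * 100:.1f}%")  # +/-3.1%
```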

About Crux Research Inc.

Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.

Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit


[2] PA, MI, WI, FL, and NH were decided by 2 percentage points or less.

Let’s Make Research and Polling Great Again!


The day after the US Presidential election, we quickly wrote and posted about the market research industry’s failure to accurately predict the election.  Since this has been our widest-read post (by a factor of about 10!) we thought a follow-up was in order.

Some of what we predicted has come to pass. Pollsters are being defensive, claiming their polls really weren’t that far off, and are not reaching very deep to try to understand the core of why their predictions were poor. The industry has had a couple of confabs, where the major players have denied a problem exists.

We are at a watershed moment for our industry. Response rates continue to plummet, clients are losing confidence in the data we provide, and we are swimming in so much data that our insights often have no room to breathe. And the public has lost confidence in what we do.

Sometimes everyday conversations can illuminate a problem. Recently, I was staying at an AirBnB in Florida. The host (Dan) was an ardent Trump supporter, and at one point he asked me what I did for a living. When I told him I was a market researcher, the conversation quickly turned to why the polls failed to accurately predict the winner of the election. Talking with Dan, I quickly realized the implications of Election 2016 polling for our industry. He felt that we can now safely ignore all polls: on issues, approval ratings, voter preferences, and so on.

I found myself getting defensive. After all, the polls weren’t off that much.  In fact, they were actually off by more in 2012 than in 2016 – the problem being that this time the polling errors resulted in an incorrect prediction. Surely we can still trust polls to give a good sense of what our citizenry thinks about the issues of the day, right?

Not according to Dan. He didn’t feel our political leaders should pay attention to the polls at all because they can’t be trusted.

I’ve even seen a new term for this bandied about: poll denialism, a refusal to believe any poll results because of their past failures. The mere fact that this phenomenon has a name should be scary enough for researchers.

This is unnerving not just to the market research industry, but to our democracy in general.  It is rarely stated overtly, but poll results are a key way political leaders keep in touch with the needs of the public, and they shape public policy a lot more than many think. Ignoring them is ignoring public opinion.

Market research remains closely associated with political polling. While I don’t think clients have become as mistrustful about their market research as the public has become about polling, clients likely have their doubts. Much of what we do as market researchers is much more complicated than election polling. If we can’t successfully predict who will be President, why would a client believe our market forecasts?

We are at a defining moment for our industry – a time when clients and suppliers will realize this is an industry that has gone adrift and needs a righting of the course. So what can we do to make research great again?  We have a few ideas.

  1. First and foremost, if you are a client, make greater demands for data quality. Nothing will stimulate the research industry more to fix itself than market forces – if clients stop paying for low quality data and information, suppliers will react.
  2. Slow down! There is a famous saying about all projects: clients want them a) fast, b) good, and c) cheap, but on any project you can choose only two. In my nearly three decades in this industry, I have seen this dynamic change considerably. These days, “fast” is almost always trumping the other two factors. “Good” has been pushed aside. “Cheap” has always been important, but budget considerations don’t seem to be the main issue (MR spending continues to grow slowly). Clients are insisting that studies be conducted at a breakneck pace, and data quality is suffering badly.
  3. Insist that suppliers defend their methodologies. I’ve worked for corporate clients, but also many academic researchers. I have found that a key difference between them becomes apparent during results presentations. Corporate clients are impatient and want us to go as quickly as possible over the methodology section and get right into the results.  Academics are the opposite. They dwell on the methodology and I have noticed if you can get an academic comfortable with your methods it is rare that they will doubt your findings. Corporate researchers need to understand the importance of a sound methodology and care more about it.
  4. Be honest about the limitations of your methodology. We often like to say that everything you were ever taught about statistics assumed a random sample and we haven’t seen a study in at least 20 years that can credibly claim to have one.  That doesn’t mean a study without a random sample isn’t valuable, it just means that we have to think through the biases and errors it could contain and how that can be relevant to the results we present. I think every research report should have a page after the methodology summary that lists off the study’s limitations and potential implications to the conclusions we draw.
  5. Stop treating respondents so poorly. I believe this is a direct consequence of the movement from telephone to online data collection. Back in the heyday of telephone research, if you fielded a survey that was too long or was challenging for respondents to answer, it wasn’t long until you heard from your interviewers just how bad your questionnaire was. In an online world, this feedback never gets back to the questionnaire author, and we subsequently beat up our respondents pretty badly. I have been involved in at least 2,000 studies and about 1 million respondents. If each study averages 15 minutes, that implies people have spent about 28 and a half years filling out my surveys. It is easy to lose respect for that, but let’s not forget the tremendous amount of time people spend on our surveys. In the end, this is a large threat to the research industry, because if people won’t respond, we have nothing to sell.
  6. Stop using technology for technology’s sake. Technology has greatly changed our business. But, it doesn’t supplant the basics of what we do or allow us to ignore the laws of statistics.  We still need to reach a representative sample of people, ask them intelligent questions, and interpret what it means for our clients.  Tech has made this much easier and much harder at the same time.  We often seem to do things because we can and not because we should.
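The back-of-the-envelope figure in point 5 checks out. Under its stated assumptions (roughly 1 million respondents at 15 minutes apiece):

```python
# Total respondent time implied by point 5 above.
respondents = 1_000_000
minutes_per_survey = 15

total_minutes = respondents * minutes_per_survey
# minutes -> hours -> days -> years
total_years = total_minutes / 60 / 24 / 365.25
print(f"{total_years:.1f} years")  # 28.5 years
```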

The ultimate way to combat “poll denialism” in a “post-truth” world is to do better work, make better predictions, and deliver insightful interpretations. That is what we all strive to do, and it is more important than ever.


Battle of the Brands is available for purchase!


How does your brand compete with others in the battle to win today’s youth?

Crux Research has conducted a syndicated study of 57 youth-oriented brands that is available for purchase on Collaborata.  We have a “data only” option for sale for $4,900 and an option including a full report and consultation/presentation for $9,500.

Brands that succeed with Millennials can enjoy their loyalty for years to come. This study’s 13- to 24-year-old group is often given short shrift by brands that have a more adult target. That can prove to be short-sighted thinking. Teens and young adults not only spend significant amounts of their own money, they also influence the spending of parents, siblings, and other adults in their lives. They are the adult shoppers of the future; building a relationship with them now can translate into loyalty that lasts their lifetime. This study shows you exactly how your brand fares among this critical cohort right now and what you need to do to increase young consumers’ engagement with your brand.

More information about this study can be found here.

Objectives for our “Battle of the Brands” project are as follows:

  • Compare and contrast the relative strengths across a variety of measures of 57 youth-oriented brands.
  • See how your brand is “personalized” — learn where it statistically maps across 32 brand personality dimensions.
  • Discover how the 57 brands fare on the key measures of Awareness, Brand Interaction, Brand Connection, Brand Popularity, and Motivation.
  • Take away key insights into why some brands succeed, while others struggle, with these Millennial and Gen Z consumers.
  • These brands have been selected from a wide range of categories, including social causes, media and entertainment, retail, technology, and consumer packaged goods.

Become a co-sponsor of this actionable study today! Increase your brand’s standing with youth tomorrow.

Happy Birthday to Us!


This month, Crux Research turns 11 years old. What started as something transitional for us as we looked for the next big thing quickly morphed into the next big thing itself.

Since our start, we have conducted 300+ projects for 65+ clients across a wide range of industries and causes. At times, we feel we know a little bit about everything.

We’ve bucked a few trends along the way. We’ve never had a business plan and have never really looked past the next few months. We’ve resisted pressure to grow to a larger company. We don’t necessarily go where the opportunities are and instead prefer to work on projects and with clients that interest us. We’ve also eschewed the normal business week, and work nights, weekends, etc.

Our ability to collect incredible people as clients has only been surpassed by our good fortune to attract staff and helpers. A special thanks to our staff members and our “bench” who have been helping out our team throughout the years.

Onward!  Happy Holidays to all. May your response rates be high and all of your confidence intervals be +/-5%!

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning,” and that their predictions were never meant to be taken as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of a bias: something systematic must have affected all of the predictions.

The polls will claim that the outcome was in the margin of error. But, to claim a “margin of error” defense is statistically suspect, as margins of error only apply to random or probability samples and none of these polls can claim to have a random sample. FiveThirtyEight also had Clinton with 302 electoral votes, way beyond any reasonable error rate.

Regardless, the end result will likely fall barely within the margin of error that most of these polls erroneously used anyway. That is not a free pass for the pollsters at all. All it means is that rather than their estimates being accurate 95% of the time, they were accurate somewhat less often: between 80% and 90% of the time for most of these polls, by my calculations.
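The 80% to 90% range is consistent with a quick calculation: if the true standard error were, say, 1.25 to 1.5 times the nominal one (illustrative multipliers, not the author's published figures), a nominal 95% interval would actually cover the true value 2·Φ(1.96/k) − 1 of the time:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Actual coverage of a nominal 95% interval when the true standard
# error is k times the one the pollster assumed.
for k in (1.25, 1.5):
    coverage = 2 * normal_cdf(1.96 / k) - 1
    print(f"k = {k}: coverage = {coverage:.0%}")  # ~88% and ~81%
```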

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems unlikely that the failure had anything to do with the questions themselves. Most pollsters use very similar questions to gather voter preferences, and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself is an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred, and there are many reasons it could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing, and a simple inability to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. It occurs when respondents tell you what they think you want to hear or what is socially acceptable. A good example: if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. To isolate this effect, set non-response bias aside and suppose pollsters reached every single voter who actually showed up. If “Trump” was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers had a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson’s support is going to end up being about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before, and they often had widely divergent results. But, if you look closely, you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were accurate polls out there that declared a Trump victory, but the pollsters didn’t release them.

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But, this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult, and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

Will Young People Vote?


Once again we are in an election cycle where the results could hinge on a simple question:  will young people vote? Galvanizing youth turnout is a key strategy for all candidates. It is perhaps not an exaggeration to say that Millennial voters hold the key to the future political leadership of the country.

But, this is nothing specific to Millennials and to this election. Young voters have effectively been the “swing vote” since the election of Kennedy in 1960. Yet, young voter turnout is consistently low relative to other age groups.

The 26th Amendment was ratified in 1971, giving 18- to 20-year-olds the right to vote for the first time. This means that anyone born in 1953 or later has never been of age at a time when they could not vote in a Presidential election. So, only those who are currently 64 or older (approximately) will have turned 18 at a time when they were not enfranchised.

This right did not come easily. The debate about lowering the voting age started in earnest during World War II, as many soldiers under 21 (especially those drafted into the armed forces) didn’t understand how they could be expected to sacrifice so much for a country if they did not have a say in how it was governed. The movement gained steam during the cultural revolution of the 1960’s and culminated in the passage of the 26th Amendment.

Young people celebrated their newfound right to vote, and then promptly failed to take advantage of it. The chart below shows 18-24 year old voter turnout compared to total voter turnout for all Presidential election years since the 26th Amendment was ratified.


Much was made of Obama’s success in galvanizing the young vote in 2008. However, there was only a 2 percentage point increase in young voter turnout in 2008 versus 2004. As the chart shows, there was a big falloff in young voter participation in 1996 and 2000, which were the last elections before Millennials comprised the bulk of the 18-24 age group.

It remains that young voters are far less likely to vote than older adults and that trend is likely to continue.

A Math Myth?


I just finished reading The Math Myth: And Other STEM Delusions by Andrew Hacker. I found the book to be so provocative and interesting that it merits the first ever book review on this blog.

The central thesis of the book is that in the US, we (meaning policy makers, educators, parents, and employers) have become obsessed with raising rigor and academic standards in math. This obsession has reached a point where we are convinced that our national security, international business competitiveness, and hegemony as an economic power rides on improving the math skills of all our high school and college graduates.

Hacker questions this national fixation. First, raising math standards has some serious costs. Not only has it caused significant disruption within schools and among educators and parents (ask any educator about the upheaval the Common Core has caused), but it has also cost significant money. But, most importantly, Hacker makes a strong case that raising of math standards has ensured that many students will be left behind and unprepared for the future.

Currently, about one in four high school students does not complete high school. Once enrolled in college, only a bit more than half of enrollees will graduate. While there are many reasons for these failures, Hacker points out that the chief ACADEMIC reason is math.

I think everyone can think of someone who struggled mightily in math. I personally took Calculus in high school and two further courses in college. I have often wondered why. It seemed to be more of a rite of passage than an academic pursuit with any realistic end in mind for me. It was certainly painful.

Math has humbled many a bright young person. I have a niece who was an outstanding high school student (an honors student, took multiple AP courses, etc.). She went to a reputable four-year college. In her first year at college, she failed a required math course in Calculus. This remains the only course in which she has ever gotten below a B. Her college-mandated math experience made her feel like a failure and made her reconsider whether she belonged in college. Fortunately, she had good supports in place and succeeded in her second go-round with the course. Many others are not so lucky.

And to what end? My niece has ended up in a quantitative field and is succeeding nicely. Yet, I doubt she has ever had to calculate the area under a curve, run a derivative, or understand a differential equation.

The reality is very few people do. Hacker, using Bureau of Labor Statistics data, estimates that about 5% of the US workforce currently uses math beyond basic arithmetic in their jobs. This means that only about 1 in 20 of our students will need to know basic algebra or beyond in their employment. 95% will do just fine with the math that most people master by the end of 8th grade.

And, despite the focus on STEM education, Hacker uses BLS data to show that the number of engineering jobs in the US is projected to grow at a slower rate than the economy as a whole. In addition, despite claims by policy makers that there is a dearth of qualified engineers, real wages for engineers have been falling and not rising, implying that supply is exceeding demand.

Yet, our high school standards and college entry standards require a mastery of not just algebra, but also geometry and trigonometry.

Most two-year colleges have a math test that all incoming students must pass – regardless of the program of study they intend to follow. As anyone who has worked with community colleges can attest to, remediation of math skills for incoming students is a major issue two-year institutions face. Hacker questions this. Why, for example, should a student intending to study cosmetology need to master algebra? When is the last time your haircutter needed to understand how to factor a polynomial?

The problem lies in what the requirement that all students master advanced math skills does to people’s lives unnecessarily. Many aspiring cosmetologists won’t pass this test and won’t end up enrolling in the program and will have to find new careers because they cannot get licensed. What interest does this serve?

Market research is a quantitative field. Perhaps not as much as engineering and sciences, but our field is focused on numbers and statistics and making sense of them. However, in about 30 years of working with researchers and hiring them, I can tell you that I have not once encountered a single researcher who doesn’t have the technical math background necessary to succeed. In fact, I’d say that most of the researchers I’ve known have mastered the math necessary for our field by the time they entered high school.

However, I have encountered many researchers who do not have the interpretive skills needed to draw insights from the data sets we gather. And, I’d say that MOST of the researchers I have encountered cannot write well and cannot communicate findings effectively to their clients.

Hacker calls these skills “numeracy” and advocates strongly for them. Numeracy skills are what the vast majority of our graduates truly need to master.  These are practical numerical skills, beyond the life skills that we are often concerned about (e.g. understanding the impact of debt, how compound interest works, how to establish a family budget).  Numeracy (which requires basic arithmetic skills) is making sense of the world by using numbers, and being able to critically understand the increasing amount of numerical data that we are exposed to.

Again, I have worked with researchers who have advanced skills in Calculus and multivariate statistical methods, yet have few skills in numeracy. Can you look at some basic cross-tabs and tell a story? Can you take a marketing situation and think through how research could gather data to better inform the decision? These skills, rather than advanced mathematical or statistical skills, are what are truly valued in our field. If you are in our field for long, you’ll notice that the true stars of the field (and the people being paid the most) are rarely the math and statistical jedis; they tend to be the people who have mastered both numeracy and communication.

This isn’t the first time our country has become obsessed with STEM achievement. I can think of three phases in the past century where we’ve become similarly single-minded about education. The first was the launch of Sputnik in 1957. This caused a near panic in the US that we were falling behind the Soviets and our educational system changed significantly as a result. The second was the release of the Coleman Report in 1966. This report criticized the way schools are funded and, based on a massive study, concluded that spending additional money on education did not necessarily create greater achievement. It once again produced a near-panic that our schools were not keeping up, and many educational reforms were made. The third “shock” came in the form of A Nation at Risk, which was published during the Gen X era in 1983. This governmental report basically stated that our nation’s schools were failing. Panicked policy makers responded with reforms, perhaps the most important being that the federal government started taking on an activist role in education. We now have the “Common Core Era” – which, if you take a long view, can be seen as history repeating itself.

Throughout all of these shocks, the American economy thrived. While other economies have become more competitive, for some reason we have come to believe that if we can just get more graduates that understand differential equations, we’ll somehow be able to embark on a second American century.

Many of the criticisms Hacker levies towards math have parallels in other subjects. Yes, I am in a highly quantitative field and I haven’t had to know what a quadratic equation is since I was 16 years old. But, I also haven’t had to conjugate French verbs, analyze Shakespearean sonnets, write poetry, or know what Shay’s Rebellion was all about. We study many things that don’t end up being directly applicable to our careers or day-to-day lives. That is part of becoming a well-rounded person and an intelligent citizen. There is nothing wrong with learning for the sake of learning.

However, there are differences in math. Failure to progress sufficiently in math prevents movement forward in our academic system – and prevents pursuit of formal education in fields that don’t require these skills. We don’t stop people from becoming welders, hair-cutters, or auto mechanics because they can’t grasp the nuances of literature, speak a foreign language, or have knowledge of US History. But, if they don’t know algebra, we don’t let them enroll in these programs.

This is in no way a criticism of the need to encourage capable students to study advanced math. As we can all attest whenever we drive over a bridge, drive a car, use social media, or receive medical treatment, having incredible engineers is essential to the quality of our life. We should all want the 5% of the workforce that needs advanced math skills to be as well trained as possible. Our future world depends on them. Fortunately, the academic world is set up for them and rewards them.

But, we do have to think of alternative educational paths for the significant number of young people who will, at some point, find math to be a stumbling block to their future.

I highly recommend reading this book. Even if you do not agree with its premise or conclusions, it is a good example of how we need to think critically about our public policy declarations and the unintended consequences they can cause.

If you don’t have the time or inclination to read the entire book, Hacker wrote an editorial for the NY Times that eventually spawned the book. It is linked below.

Is Algebra Necessary?