
An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning.” Their predictions were never meant to be construed as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of a bias: something systematic must have affected all of the predictions.

The pollsters will claim that the outcome was within the margin of error. But a “margin of error” defense is statistically suspect: margins of error only apply to random (probability) samples, and none of these polls can claim to have one. FiveThirtyEight also had Clinton with 302 electoral votes, well beyond any reasonable error rate.

Regardless, the end result is going to land barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters. All it means is that rather than their estimate being accurate 95% of the time, it was predicted to be accurate a bit less often: between 80% and 90% of the time for most of these polls, by my calculations.
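As a rough illustration of the arithmetic behind these margin-of-error claims, here is a back-of-envelope sketch in Python. The sample size and the simple-random-sampling formula are assumptions for the example, not figures from any specific poll:

```python
import math

# Back-of-envelope margin of error for a simple random sample.
# Assumptions for illustration only: n = 1000 respondents (a typical
# national poll size) and the standard 95% confidence formula, which,
# as noted above, strictly applies only to probability samples.
n = 1000
p = 0.5  # 50% is the worst case for the variance of a proportion

moe_95 = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {moe_95:.1%}")  # about +/- 3.1 points
```

With roughly a three-point margin on each candidate’s share, a miss of three to five points repeated across nearly every final poll looks far more like a shared, systematic bias than ordinary sampling noise.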

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems implausible that the failure had anything to do with the questions themselves. In reality, most pollsters use very similar questions to gather voter preferences, and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself is an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred, and there are many reasons it could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing, and simply not being able to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. This is when respondents tell you what you want to hear or what they think is socially acceptable. A good example: if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. In this case, let’s set non-response bias aside and suppose pollsters reached every single voter who actually showed up. If “Trump” was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers had a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson’s support is going to end up being about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before, and they often had widely divergent results. But if you look closely, you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were accurate polls out there that declared a Trump victory, but the pollsters didn’t release them.
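To make the weighting issue in point 1 concrete, here is a hypothetical sketch of post-stratification weighting in Python. Every number in it is invented for the example; it simply shows why the composition of who responds matters so much:

```python
# Hypothetical illustration of post-stratification weighting.
# All figures are invented: suppose non-college voters are 50% of the
# electorate but only 30% of the sample (non-response bias), and they
# favor Trump 60/40 while college voters favor Clinton 60/40.
population_share = {"non_college": 0.50, "college": 0.50}
sample_share     = {"non_college": 0.30, "college": 0.70}
trump_support    = {"non_college": 0.60, "college": 0.40}

# The unweighted estimate simply mirrors the (biased) sample composition.
unweighted = sum(sample_share[g] * trump_support[g] for g in sample_share)

# Post-stratification weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# The weighted estimate re-balances each group to its population share.
weighted = sum(sample_share[g] * weights[g] * trump_support[g]
               for g in sample_share)

print(f"unweighted Trump share: {unweighted:.1%}")  # 46.0%
print(f"weighted Trump share:   {weighted:.1%}")    # 50.0%
```

Note the limitation: weighting only corrects for the characteristics the pollster measures and weights on. If the voters within a weighting cell who respond differ from those who don’t, the bias survives the adjustment, which is one way a whole field of polls can miss in the same direction.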

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult, and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

Will Young People Vote?


Once again we are in an election cycle where the results could hinge on a simple question:  will young people vote? Galvanizing youth turnout is a key strategy for all candidates. It is perhaps not an exaggeration to say that Millennial voters hold the key to the future political leadership of the country.

But, this is nothing specific to Millennials and to this election. Young voters have effectively been the “swing vote” since the election of Kennedy in 1960. Yet, young voter turnout is consistently low relative to other age groups.

The 26th Amendment was ratified in 1971, giving 18-to-20-year-olds the right to vote for the first time. This means that anyone born in 1953 or later has never been of age at a time when they could not vote in a Presidential election. So, only those who are currently 64 or older (approximately) will have turned 18 at a time when they were not enfranchised.

This right did not come easily. The debate about lowering the voting age started in earnest during World War II, as many soldiers under 21 (especially those drafted into the armed forces) didn’t understand how they could be expected to sacrifice so much for a country if they did not have a say in how it was governed. The movement gained steam during the cultural revolution of the 1960’s and culminated in the passage of the 26th Amendment.

Young people celebrated their newfound right to vote, and then promptly failed to take advantage of it. The chart below shows 18-24 year old voter turnout compared to total voter turnout for all Presidential election years since the 26th Amendment was ratified.


Much was made of Obama’s success in galvanizing the young vote in 2008. However, there was only a 2 percentage point increase in young voter turnout in 2008 versus 2004. As the chart shows, there was a big falloff in young voter participation in 1996 and 2000, which were the last elections before Millennials comprised the bulk of the 18-24 age group.

It remains that young voters are far less likely to vote than older adults and that trend is likely to continue.

A Math Myth?


I just finished reading The Math Myth: And Other STEM Delusions by Andrew Hacker. I found the book to be so provocative and interesting that it merits the first ever book review on this blog.

The central thesis of the book is that in the US, we (meaning policy makers, educators, parents, and employers) have become obsessed with raising rigor and academic standards in math. This obsession has reached a point where we are convinced that our national security, international business competitiveness, and hegemony as an economic power rides on improving the math skills of all our high school and college graduates.

Hacker questions this national fixation. First, raising math standards has some serious costs. Not only has it caused significant disruption within schools and among educators and parents (ask any educator about the upheaval the Common Core has caused), but it has also cost significant money. But, most importantly, Hacker makes a strong case that raising of math standards has ensured that many students will be left behind and unprepared for the future.

Currently, about one in four high school students does not complete high school. Of those who do enroll in college, only a bit more than half will graduate. While there are many reasons for these failures, Hacker points out that the chief ACADEMIC reason is math.

I think everyone can think of someone who struggled mightily in math. I personally took Calculus in high school and two further courses in college. I have often wondered why. It seemed to be more of a rite of passage than an academic pursuit with any realistic end in mind for me. It was certainly painful.

Math has humbled many a bright young person. I have a niece who was an outstanding high school student (an honors student, took multiple AP courses, etc.). She went to a reputable four-year college. In her first year, she failed a required math course in Calculus. It remains the only course she has gotten below a B in during her entire academic life. Her college-mandated math experience made her feel like a failure and reconsider whether she belonged in college. Fortunately, she had good support in place and succeeded in her second go-round with the course. Many others are not so lucky.

And to what end? My niece has ended up in a quantitative field and is succeeding nicely. Yet, I doubt she has ever had to calculate the area under a curve, take a derivative, or understand a differential equation.

The reality is very few people do. Hacker, using Bureau of Labor Statistics data, estimates that about 5% of the US workforce currently uses math beyond basic arithmetic in their jobs. This means that only about 1 in 20 of our students will need to know basic algebra or beyond in their employment. 95% will do just fine with the math that most people master by the end of 8th grade.

And, despite the focus on STEM education, Hacker uses BLS data to show that the number of engineering jobs in the US is projected to grow at a slower rate than the economy as a whole. In addition, despite claims by policy makers that there is a dearth of qualified engineers, real wages for engineers have been falling and not rising, implying that supply is exceeding demand.

Yet, our high school standards and college entry standards require a mastery of not just algebra, but also geometry and trigonometry.

Most two-year colleges have a math test that all incoming students must pass – regardless of the program of study they intend to follow. As anyone who has worked with community colleges can attest to, remediation of math skills for incoming students is a major issue two-year institutions face. Hacker questions this. Why, for example, should a student intending to study cosmetology need to master algebra? When is the last time your haircutter needed to understand how to factor a polynomial?

The problem lies in what the requirement that all students master advanced math does, unnecessarily, to people’s lives. Many aspiring cosmetologists won’t pass this test, won’t end up enrolling in the program, and will have to find new careers because they cannot get licensed. What interest does this serve?

Market research is a quantitative field. Perhaps not as much as engineering and sciences, but our field is focused on numbers and statistics and making sense of them. However, in about 30 years of working with researchers and hiring them, I can tell you that I have not once encountered a single researcher who doesn’t have the technical math background necessary to succeed. In fact, I’d say that most of the researchers I’ve known have mastered the math necessary for our field by the time they entered high school.

However, I have encountered many researchers who do not have the interpretive skills needed to draw insights from the data sets we gather. And, I’d say that MOST of the researchers I have encountered cannot write well and cannot communicate findings effectively to their clients.

Hacker calls these skills “numeracy” and advocates strongly for them. Numeracy skills are what the vast majority of our graduates truly need to master.  These are practical numerical skills, beyond the life skills that we are often concerned about (e.g. understanding the impact of debt, how compound interest works, how to establish a family budget).  Numeracy (which requires basic arithmetic skills) is making sense of the world by using numbers, and being able to critically understand the increasing amount of numerical data that we are exposed to.

Again, I have worked with researchers who have advanced skills in Calculus and multivariate statistical methods, yet have few skills in numeracy. Can you look at some basic cross-tabs and tell a story? Can you take a marketing situation and think of how research could gather data to make a decision better informed? These skills, rather than advanced mathematical or statistical skills, are what are truly valued in our field. If you are in our field for long, you’ll notice that the true stars (and the people being paid the most) are rarely the math and statistical jedis; they tend to be the people who have mastered both numeracy and communication.

This isn’t the first time our country has become obsessed with STEM achievement. I can think of three phases in the past century where we’ve become similarly single-minded about education. The first was the launch of Sputnik in 1957. This caused a near panic in the US that we were falling behind the Soviets, and our educational system changed significantly as a result. The second was the release of the Coleman Report in 1966. This report criticized the way schools are funded and, based on a massive study, concluded that spending additional money on education did not necessarily create greater achievement. It once again produced a near-panic that our schools were not keeping up, and many educational reforms were made. The third “shock” came in the form of A Nation at Risk, published in 1983. This governmental report basically stated that our nation’s schools were failing. Panicked policy makers responded with reforms, perhaps the most important being that the federal government started taking on an activist role in education. We now have the “Common Core Era,” which, if you take a long view, can be seen as history repeating itself.

Throughout all of these shocks, the American economy thrived. While other economies have become more competitive, for some reason we have come to believe that if we can just get more graduates that understand differential equations, we’ll somehow be able to embark on a second American century.

Many of the criticisms Hacker levies towards math have parallels in other subjects. Yes, I am in a highly quantitative field and I haven’t had to know what a quadratic equation is since I was 16 years old. But, I also haven’t had to conjugate French verbs, analyze Shakespearean sonnets, write poetry, or know what Shays’ Rebellion was all about. We study many things that don’t end up being directly applicable to our careers or day-to-day lives. That is part of becoming a well-rounded person and an intelligent citizen. There is nothing wrong with learning for the sake of learning.

However, math is different. Failure to progress sufficiently in math prevents movement forward in our academic system, and it blocks the pursuit of formal education even in fields that don’t require these skills. We don’t stop people from becoming welders, haircutters, or auto mechanics because they can’t grasp the nuances of literature, speak a foreign language, or recount US history. But if they don’t know algebra, we don’t let them enroll in these programs.

This is in no way a criticism of the need to encourage capable students to study advanced math. As we can all attest whenever we drive over a bridge, drive a car, use social media, or receive medical treatment, having incredible engineers is essential to our quality of life. We should all want the 5% of the workforce that needs advanced math skills to be as well trained as possible. Our future world depends on them. Fortunately, the academic world is set up for them and rewards them.

But, we do have to think of alternative educational paths for the significant number of young people who will, at some point, find math to be a stumbling block to their future.

I highly recommend reading this book. Even if you do not agree with its premise or conclusions, it is a good example of how we need to think critically about our public policy declarations and the unintended consequences they can cause.

If you don’t have the time or inclination to read the entire book, Hacker wrote an editorial for the NY Times that eventually spawned the book. It is linked below.

Is Algebra Necessary?


How to Be a Good Research Supplier


A while back, we posted “How to Be a Good Research Client” to help clients understand the makings of a successful partnership from the supplier perspective. Here, we’d like to do the opposite: advise suppliers on how to position for success with their clients.

Being an outstanding supplier goes beyond the technical abilities of understanding statistics, experimental design, business, and marketing. There are many researchers who have these skills but are not great suppliers. The skills are necessary, but not sufficient.

It starts with empathy – a good supplier will understand not just the business situation the client is facing, but also the internal pressures he/she faces. We’ve found over time that suppliers who have spent time as clients themselves understand what happens to projects after the final presentation in a way that many suppliers just cannot.

So, here goes:  Our 10 tips on how to be a good research supplier.

  1. Begin by seeking out the right clients. There is simply too much pressure, especially at larger research firms, to take on every project that comes your way. It really helps if you have guidelines as to which clients you will accept: which ones match with your skills in a unique way, are doing things you are genuinely interested in studying, and have individuals who are good project managers.
  2. Be honest about what you are good at and not so good at. Research isn’t quite like law or medicine where every task seems to devolve to a specialty, but there are specialties. You are not good at everything and neither is your firm. Once you realize this, you can concentrate on where you provide unique value.
  3. Understand what is at stake. Some market research projects influence how millions of dollars are spent. Still others are a part of a substantial initiative within a company. Somewhere, there is somebody whose career hinges on the success of this initiative. While the research project might come and go in a matter of months to you as a supplier and be one of a dozen you are working on, the success of the project might make or break someone’s career. It is good to never lose sight of that.
  4. Price projects to be profitable. You should price projects to make a strong profit for your firm and then not waver easily on price. Why? Because then you can put all thoughts of profitability out of your mind at the outset and focus on delivering a great project. Never, and we mean never, take on an unprofitable project because of the prospects of further projects coming down the road. It doesn’t serve you or your client well.
  5. Don’t nickel-and-dime clients. They will ask for things you didn’t bid on or anticipate. Not everything they need will be foreseeable. An extra banner table. A second presentation. A few extra interviews. Follow-ups you didn’t expect. Just do it and don’t look back. Larger research firms are prone to charging for every little thing the client asks. After a while, the client stops asking and moves on to another firm. Projects can be expensive. Nickel-and-diming your clients for small requests is about as frustrating as buying a new car and having the dealer charge you extra to put floor mats in it.
  6. Never be the one your client is waiting on. If there is one rule here that we feel we have mastered at Crux it is this one. There are a lot of moving parts in a project. You often need things from clients and they need things from you along the way. Never be the one people are waiting on. Stay late if you have to, come in early, work from home … do anything but be the “rate limiting factor” on a project. Your clients will love you for it.
  7. Be around “forever” for follow-ups. We have seen suppliers write into contracts that they are available for follow-up discussions for only 3 months after the project is over. Why? We love it when clients call years after a project is over, as it means the project is still having an influence on their business. Be there as long as it takes.
  8. Be human. It took us a little time to learn this one. We used to be very workmanlike and professional around clients, to the point of being a bit “stiff.” Then we realized clients want to work with people who are professional about the task at hand, but also fun to be around and, well, human. Granted, they don’t want to hear about every stress of your personal life, but relax a little and be who you are. If that doesn’t work out well for you, you might not be in the right career.
  9. Make them feel like they are your only client. You might have a dozen projects on your plate, family commitments tugging at you, coworkers driving you crazy, and a myriad of other things competing for your time. Time management isn’t easy in a deadline-driven field such as research. But it also isn’t your client’s problem. They should feel like you have nothing else to do all day but work on their project. The focus needs to be on them, not your time management. When you are late for a meeting because another one ran over, you are telling your client they aren’t your number one priority.
  10. Follow up. The project might be over for you, but it lives on longer for your client.  Be sure to follow up a few weeks down the line to see if there is anything else you can do. You’ll be surprised at how often the next project comes up during this conversation.

Congratulations to Truth Initiative!

Congratulations again to our client, Truth Initiative!  Last week Truth won 4 Effies and 2 Big Apple awards for its anti-tobacco campaigns.

Read more about these wins here:  truth campaign grabs 4 effies, 2 big apple awards.

Cause Change. Be Changed.

Congratulations to Causewave Community Partners on their successful annual celebration last week.  It was a sellout!

This video does a great job of capturing what the organization is all about and the value of volunteering. It also includes a cameo from Lisa on our staff!


Asking about gender and sexual orientation on surveys

When composing questionnaires, there are times when the simplest of questions have to adjust to fit the times. Questions we draft become catalysts for larger discussions. That has been the case with what was once the most basic of all questions – asking a respondent for their gender.

This is probably the most commonly asked question in the history of survey research. And it seems basic – we typically just ask:

  • Are you… male or female?

Or, if we are working with younger respondents, we ask:

  • Are you … a boy or a girl?

The question is almost never refused and I’ve never seen any research to suggest this is anything other than a highly reliable measure.

Simple, right?

But, we are in the midst of an important shift in the social norms towards alternative gender classifications. Traditionally, meaning up until a couple of years ago, if we wanted to classify homosexual respondents we wouldn’t come right out and ask the question, for fear that it would be refused or be found to be an offensive question for many respondents. Instead, we would tend to ask respondents to check off a list of causes that they support. If they chose “gay rights”, we would then go ahead and ask if they were gay or straight. Perhaps this was too politically correct, but it was an effective way to classify respondents in a way that wasn’t likely to offend.

We no longer ask it that way. We still ask if the respondent is male or female, but we follow up to ask if they are heterosexual, lesbian, gay, bisexual, transgender, etc.

We recently completed a study among 4-year college students where we posed this question.  Results were as follows:

  • Heterosexual = 81%
  • Bisexual = 8%
  • Lesbian = 3%
  • Gay = 2%
  • Transgender = 1%
  • Other = 2%
  • Refused to answer = 3%

First, it should be noted that the 3% who refused to answer is lower than the 4% who refused to answer the race/ethnicity question on the same survey. Conclusion: asking today’s college students about sexual orientation is less sensitive than asking them about their race/ethnicity.

Second, it is more important than ever to ask this question. These data show that about 1 in 5 college students identify as NOT being heterosexual. Researchers need to start viewing these students as a segment, just as we do age or race. This is the reality of the Millennial market:  they are more likely to self-identify as not being heterosexual and more likely to be accepting of alternative lifestyles. Failure to understand this group results in a failure to truly understand the generation.

We have had three different clients ask us if we should start asking this question younger – to high school or middle school students. For now, we are advising against it unless the study has clear objectives that point to a need. Our reasoning for this is not that we feel the kids will find the question to be offensive, but that their parents and educators (whom we are often reliant on to be able to survey minors) might. We think that will change over time as well.

So, perhaps nothing is as simple as it seems.