Archive for the 'Methodology' Category

Will Big Data Kill Traditional Market Research?

Most of today's research methods rely on a simple premise: asking customers questions can yield insights that drive better decisions. This is traditionally called primary research because it involves gathering new data. It is often supplemented with secondary research, which involves looking at information that already exists, such as sales data, publicly available data, etc. Primary and secondary research yield what I would call active data – individuals are providing data with their knowledge and consent.

We are moving to a passive data world. This involves analyzing the data we leave behind as we live increasingly digital lives. As we move about online we leave a trail of digital crumbs everywhere – the sites we visit, what we post about and link to, the apps we use, and so on. We also leave trails showing when and where we are when we do these things, the sequence we do them in, and even what those close to us do.

Our digital shadows are long, and they paint a remarkably detailed picture of who we are. You may not remember what you had to eat a few days ago, but the Internet knows exactly what books you read, how fast you read them, and when you bought them. The Internet knows where you were when you looked up health information, your favorite places to travel, whether you lean liberal or conservative, and much more. Your digital shadow is alarmingly accurate.

Privacy issues aside, this creates exciting possibilities for market research.

The amount of information available is staggering. It is estimated that the volume of digital information is doubling about every 18 months, which means that in the next year and a half we will create as much data as we have created since the Internet began. Clearly it is easy to drown in the noise of this data, and many certainly do. But in some ways analyzing this data isn't unlike what we have been doing for years. It is easy to drown in a data set if you don't have clear hypotheses you are pursuing. Tapping into the power of Big Data is all about formulating the right questions before firing up the laptop.

So, how will Big Data change traditional, “active” research? Why would we need to ask people questions when we can track their actual behaviors more accurately?

Big Data will not obviate the need for traditional survey research. But, it will reposition it. Survey research will change and be reserved for marketing problems it is particularly well suited for.  Understanding underlying motivations of behavior will always require that we talk directly to consumers, if only to probe why their reported behavior differs from their actual behavior.

There are situations where Big Data techniques will triumph. We are seeing compelling examples of how Big Data analysis can do real good in the world. For instance, medical researchers are looking into diseases that are largely asymptomatic in their early stages. Typically, an early doctor's appointment for these diseases consists of a patient struggling to remember symptoms and warning signs and when they might have had them. An analysis of Google searches can identify people who, based on their search behavior, can be inferred to have been diagnosed with the disease. Their earlier search history can then be analyzed to see whether they were curious about symptoms, and when. In the hands of a skilled analyst, this can lead to new insights about the early warning signs of diseases that are often diagnosed too late.

There has been chatter that public health officials can track the early spread of the flu better each year by analyzing search trends than by using their traditional methods, which track doctor visits for the flu and prescriptions dispensed. The reason is that people Google "flu symptoms" before going to the doctor, and many who have symptoms never go to the doctor at all. A search trend analysis can help public health officials react faster to outbreaks.

This is all pretty cool. Marketers are all about delivering the right message to the right people at the right time, and understanding how prior online behavior predicts future decisions will be valued. Big Data is accurate in a way that surveys cannot be because memory is imperfect.

Let’s be clear. I don’t think that people lie on surveys, at least not purposefully. But there are memory errors that harm the ability of a survey to uncover the truth. For instance, I could ask on a survey what books you have read in the past month. But, sales data from the Kindle Store would probably be more accurate.

However, what proponents of the "Big Data will take over the world" view fail to realize is that the errors respondents make on surveys can be more valuable to marketers than the truth, because recollections are often more predictive of future behavior than actual past behavior is. What you think you had for dinner two nights ago probably predicts what you will eat tonight better than what you actually ate. Perceptions can be more important than reality, and marketing is all about dealing with perceptions.

The key for skilled researchers is going to be to learn when Big Data techniques are superior and when traditional techniques will yield better insights. Big Data is a very big hammer, but isn’t suitable for every size nail.

It is an exciting time for our field. Data science and data analysis skills are going to become even more valuable in the labor market than they are today. While technical database and statistical skills will be important, in a Big Data era it will be even more important to have skills in developing the right questions to pursue in the first place and a solid understanding of the issues our clients face.

Let’s Make Research and Polling Great Again!


The day after the US Presidential election, we quickly wrote and posted about the market research industry’s failure to accurately predict the election.  Since this has been our widest-read post (by a factor of about 10!) we thought a follow-up was in order.

Some of what we predicted has come to pass. Pollsters are being defensive, claiming their polls really weren’t that far off, and are not reaching very deep to try to understand the core of why their predictions were poor. The industry has had a couple of confabs, where the major players have denied a problem exists.

We are at a watershed moment for our industry. Response rates continue to plummet, clients are losing confidence in the data we provide, and we are swimming in so much data our insights are often not able to find space to breathe. And the public has lost confidence in what we do.

Sometimes an everyday conversation can illuminate a problem. Recently, I was staying at an AirBnB in Florida. The host (Dan) was an ardent Trump supporter, and at one point he asked me what I did for a living. When I told him I was a market researcher, the conversation quickly turned to why the polls failed to accurately predict the winner of the election. By talking with Dan I quickly realized the implications of Election 2016 polling for our industry. He felt that we can now safely ignore all polls – on issues, approval ratings, voter preferences, etc.

I found myself getting defensive. After all, the polls weren’t off that much.  In fact, they were actually off by more in 2012 than in 2016 – the problem being that this time the polling errors resulted in an incorrect prediction. Surely we can still trust polls to give a good sense of what our citizenry thinks about the issues of the day, right?

Not according to Dan. He didn’t feel our political leaders should pay attention to the polls at all because they can’t be trusted.

I’ve even seen a new term for this bandied about:  poll denialism. It is a refusal to believe any poll results because of their past failures. Just the fact that this has been named should be scary enough for researchers.

This is unnerving not just to the market research industry, but to our democracy in general.  It is rarely stated overtly, but poll results are a key way political leaders keep in touch with the needs of the public, and they shape public policy a lot more than many think. Ignoring them is ignoring public opinion.

Market research remains closely associated with political polling. While I don’t think clients have become as mistrustful about their market research as the public has become about polling, clients likely have their doubts. Much of what we do as market researchers is much more complicated than election polling. If we can’t successfully predict who will be President, why would a client believe our market forecasts?

We are at a defining moment for our industry – a time when clients and suppliers will realize this is an industry that has gone adrift and needs a righting of the course. So what can we do to make research great again?  We have a few ideas.

  1. First and foremost, if you are a client, make greater demands for data quality. Nothing will do more to push the research industry to fix itself than market forces – if clients stop paying for low-quality data and information, suppliers will react.
  2. Slow down! There is a famous saying about all projects: clients want them a) fast, b) good, and c) cheap, and on any project you can choose only two of the three. In my nearly three decades in this industry I have seen this dynamic change considerably. These days, "fast" is almost always trumping the other two factors. "Good" has been pushed aside. "Cheap" has always been important, but to be honest budget considerations don't seem to be the main issue (MR spending continues to grow slowly). Clients are insisting that studies be conducted at a breakneck pace, and data quality is suffering badly.
  3. Insist that suppliers defend their methodologies. I've worked for corporate clients, but also for many academic researchers. I have found that a key difference between them becomes apparent during results presentations. Corporate clients are impatient and want us to move through the methodology section as quickly as possible and get right into the results. Academics are the opposite: they dwell on the methodology, and I have noticed that if you can get an academic comfortable with your methods, it is rare that they will doubt your findings. Corporate researchers need to understand the importance of a sound methodology and care more about it.
  4. Be honest about the limitations of your methodology. We often like to say that everything you were ever taught about statistics assumed a random sample, and we haven't seen a study in at least 20 years that can credibly claim to have one. That doesn't mean a study without a random sample isn't valuable; it just means that we have to think through the biases and errors it could contain and how they might bear on the results we present. I think every research report should have a page after the methodology summary that lists the study's limitations and their potential implications for the conclusions we draw.
  5. Stop treating respondents so poorly. I believe this is a direct consequence of the movement from telephone to online data collection. Back in the heyday of telephone research, if you fielded a survey that was too long or challenging for respondents to answer, it wasn't long until you heard from your interviewers just how bad your questionnaire was. In an online world, this feedback never gets back to the questionnaire author – and we subsequently beat up our respondents pretty badly. I have been involved in at least 2,000 studies covering about 1 million respondents. If each survey averaged 15 minutes, that implies people have collectively spent about 28 and a half years filling out my surveys (see the quick calculation after this list). It is easy to take that time for granted – but let's not forget the tremendous amount of it that people give to our surveys. In the end, this is a large threat to the research industry, because if people won't respond, we have nothing to sell.
  6. Stop using technology for technology’s sake. Technology has greatly changed our business. But, it doesn’t supplant the basics of what we do or allow us to ignore the laws of statistics.  We still need to reach a representative sample of people, ask them intelligent questions, and interpret what it means for our clients.  Tech has made this much easier and much harder at the same time.  We often seem to do things because we can and not because we should.
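As a quick sanity check on the respondent-time arithmetic in point 5, here is a minimal back-of-the-envelope sketch. The inputs are just the rough figures quoted above (about one million respondents at roughly 15 minutes each), not measured data:

```python
# Back-of-the-envelope check of the respondent-time figure quoted above.
# Inputs are the approximate numbers from the text, not measured data.
respondents = 1_000_000      # ~1 million respondents across ~2,000 studies
minutes_each = 15            # assumed average survey length in minutes

total_minutes = respondents * minutes_each
total_years = total_minutes / (60 * 24 * 365.25)  # minutes -> years

print(f"{total_years:.1f} person-years of respondent time")  # ~28.5
```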

The ultimate way to combat “poll denialism” in a “post-truth” world is to do better work, make better predictions, and deliver insightful interpretations. That is what we all strive to do, and it is more important than ever.

 

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday's election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can't even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight "didn't say her chances were 100%" or that Betfair had Trump with a "17% chance of winning." Fair enough; these predictions were never meant to be construed as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of a bias – something systematic must have affected all of the predictions.
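To see why "almost all forecasters got it wrong" points to something systematic, consider a minimal sketch of the relevant probability. It assumes, purely for illustration, that final national polls are independent and that an unbiased poll is equally likely to miss in either direction; both are simplifications (polls share methods and samples, and the herding described below works against independence):

```python
from math import comb

# If polling errors were unbiased and independent, each poll would be equally
# likely to miss toward either candidate. The chance that, say, 22 or more of
# 25 polls would all miss in the same direction is then a binomial tail.
n, k, p = 25, 22, 0.5
prob = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"{prob:.5f}")  # about 0.00008, i.e. lightning hitting the same tree many times
```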

The pollsters will claim that the outcome was within the margin of error. But to claim a "margin of error" defense is statistically suspect, as margins of error only apply to random or probability samples, and none of these polls can claim to have one. FiveThirtyEight also had Clinton at 302 electoral votes, well beyond any reasonable error rate.

Regardless, the end result will probably end up barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters at all. All it means is that rather than their estimate being accurate 95% of the time, it was predicted to be accurate a bit less often: between 80% and 90% of the time for most of these polls, by my calculations.

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems unlikely that the failure had anything to do with the questions themselves. In reality, most pollsters use very similar questions to gather voter preferences, and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself seems to be an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred. There are many reasons it could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing (a sketch of what such weighting looks like follows this list), and simply not being able to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. It occurs when respondents tell you what they think you want to hear or what they believe is socially acceptable. A good example: if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. In this case, set non-response bias aside and suppose pollsters reached every single person who actually showed up to vote. If we presume "Trump" was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers showed a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson's support will end up at about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote away from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before, and they often had widely divergent results. But if you look closely you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were accurate polls out there that pointed to a Trump victory, but the pollsters didn't release them.
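Since "weighting and balancing" comes up in both the non-response and herding points above, here is a minimal sketch of what basic post-stratification weighting looks like. The group shares are invented purely for illustration; getting "too aggressive" usually means leaning on very large weights for groups a poll barely reached:

```python
# Minimal illustration of post-stratification weighting: respondents in
# under-represented groups get weights above 1, over-represented groups
# get weights below 1. The shares below are invented for illustration.
population_share = {"college_grad": 0.33, "non_grad": 0.67}
sample_share     = {"college_grad": 0.45, "non_grad": 0.55}  # grads over-sampled

weights = {g: round(population_share[g] / sample_share[g], 2)
           for g in population_share}
print(weights)  # {'college_grad': 0.73, 'non_grad': 1.22}
```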

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult – and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

A Math Myth?


I just finished reading The Math Myth: And Other STEM Delusions by Andrew Hacker. I found the book to be so provocative and interesting that it merits the first ever book review on this blog.

The central thesis of the book is that in the US, we (meaning policy makers, educators, parents, and employers) have become obsessed with raising rigor and academic standards in math. This obsession has reached a point where we are convinced that our national security, international business competitiveness, and hegemony as an economic power rides on improving the math skills of all our high school and college graduates.

Hacker questions this national fixation. First, raising math standards has serious costs. Not only has it caused significant disruption within schools and among educators and parents (ask any educator about the upheaval the Common Core has caused), but it has also cost significant money. Most importantly, though, Hacker makes a strong case that raising math standards has ensured that many students will be left behind and unprepared for the future.

Currently, about one in four high school students does not complete high school. Once enrolled in college, only a bit more than half of enrollees will graduate. While there are many reasons for these failures, Hacker points out that the chief ACADEMIC reason is math.

I think everyone can think of someone who struggled mightily in math. I personally took Calculus in high school and two further courses in college. I have often wondered why. It seemed to be more of a rite of passage than an academic pursuit with any realistic end in mind for me. It was certainly painful.

Math has humbled many a bright young person. I have a niece who was an outstanding high school student (an honors student who took multiple AP courses, etc.). She went to a reputable four-year college. In her first year, she failed a required Calculus course. It remains the only course she has ever gotten below a B in during her entire academic life. Her college-mandated math experience made her feel like a failure and reconsider whether she belonged in college. Fortunately she had good support in place and succeeded in her second go-round with the course. Many others are not so lucky.

And to what end? My niece has ended up in a quantitative field and is succeeding nicely. Yet, I doubt she has ever had to calculate the area under a curve, run a derivative, or understand a differential equation.

The reality is very few people do. Hacker, using Bureau of Labor Statistics data, estimates that about 5% of the US workforce currently uses math beyond basic arithmetic in their jobs. This means that only about 1 in 20 of our students will need to know basic algebra or beyond in their employment. 95% will do just fine with the math that most people master by the end of 8th grade.

And, despite the focus on STEM education, Hacker uses BLS data to show that the number of engineering jobs in the US is projected to grow at a slower rate than the economy as a whole. In addition, despite claims by policy makers that there is a dearth of qualified engineers, real wages for engineers have been falling and not rising, implying that supply is exceeding demand.

Yet, our high school standards and college entry standards require a mastery of not just algebra, but also geometry and trigonometry.

Most two-year colleges have a math test that all incoming students must pass – regardless of the program of study they intend to follow. As anyone who has worked with community colleges can attest, remediation of math skills for incoming students is a major issue two-year institutions face. Hacker questions this. Why, for example, should a student intending to study cosmetology need to master algebra? When is the last time your haircutter needed to factor a polynomial?

The problem lies in what the requirement that all students master advanced math does to people's lives, unnecessarily. Many aspiring cosmetologists won't pass this test, won't end up enrolling in the program, and will have to find new careers because they cannot get licensed. What interest does this serve?

Market research is a quantitative field. Perhaps not as much as engineering and sciences, but our field is focused on numbers and statistics and making sense of them. However, in about 30 years of working with researchers and hiring them, I can tell you that I have not once encountered a single researcher who doesn’t have the technical math background necessary to succeed. In fact, I’d say that most of the researchers I’ve known have mastered the math necessary for our field by the time they entered high school.

However, I have encountered many researchers who do not have the interpretive skills needed to draw insights from the data sets we gather. And, I’d say that MOST of the researchers I have encountered cannot write well and cannot communicate findings effectively to their clients.

Hacker calls these skills “numeracy” and advocates strongly for them. Numeracy skills are what the vast majority of our graduates truly need to master.  These are practical numerical skills, beyond the life skills that we are often concerned about (e.g. understanding the impact of debt, how compound interest works, how to establish a family budget).  Numeracy (which requires basic arithmetic skills) is making sense of the world by using numbers, and being able to critically understand the increasing amount of numerical data that we are exposed to.

Again, I have worked with researchers who have advanced skills in Calculus and multivariate statistical methods, yet have few skills in numeracy. Can you look at some basic cross-tabs and tell a story? Can you be presented with a marketing situation and think of how research can be used to gather data that better informs a decision? These skills, rather than advanced mathematical or statistical skills, are what are truly valued in our field. If you are in our field for long, you'll notice that the true stars of the field (and the people being paid the most) are rarely the math and statistical jedis – they tend to be the people who have mastered both numeracy and communication.

This isn't the first time our country has become obsessed with STEM achievement. I can think of three phases in the past century where we've become similarly single-minded about education. The first was the launch of Sputnik in 1957. This caused a near panic in the US that we were falling behind the Soviets, and our educational system changed significantly as a result. The second was the release of the Coleman Report in 1966. This report criticized the way schools are funded and, based on a massive study, concluded that spending additional money on education did not necessarily create greater achievement. It once again produced a near-panic that our schools were not keeping up, and many educational reforms were made. The third "shock" came in the form of A Nation at Risk, which was published during the Gen X era in 1983. This governmental report basically stated that our nation's schools were failing. Panicked policy makers responded with reforms, perhaps the most important being that the federal government started taking on an activist role in education. We now have the "Common Core Era" – which, if you take a long view, can be seen as history repeating itself.

Throughout all of these shocks, the American economy thrived. While other economies have become more competitive, for some reason we have come to believe that if we can just get more graduates that understand differential equations, we’ll somehow be able to embark on a second American century.

Many of the criticisms Hacker levies toward math have parallels in other subjects. Yes, I am in a highly quantitative field and I haven't had to know what a quadratic equation is since I was 16 years old. But I also haven't had to conjugate French verbs, analyze Shakespearean sonnets, write poetry, or know what Shays' Rebellion was all about. We study many things that don't end up being directly applicable to our careers or day-to-day lives. That is part of becoming a well-rounded person and an intelligent citizen. There is nothing wrong with learning for the sake of learning.

However, math is different. Failure to progress sufficiently in math prevents movement forward in our academic system – and prevents pursuit of formal education even in fields that don't require these skills. We don't stop people from becoming welders, hair-cutters, or auto mechanics because they can't grasp the nuances of literature, speak a foreign language, or recall US history. But if they don't know algebra, we don't let them enroll in these programs.

This is in no way a criticism of the need to encourage capable students to study advanced math. As we can all attest whenever we drive over a bridge, drive a car, use social media, or receive medical treatment, having incredible engineers is essential to our quality of life. We should all want the 5% of the workforce that needs advanced math skills to be as well trained as possible. Our future depends on them. Fortunately, the academic world is set up for them and rewards them.

But, we do have to think of alternative educational paths for the significant number of young people who will, at some point, find math to be a stumbling block to their future.

I highly recommend reading this book. Even if you do not agree with its premise or conclusions, it is a good example of how we need to think critically about our public policy declarations and the unintended consequences they can cause.

If you don’t have the time or inclination to read the entire book, Hacker wrote an editorial for the NY Times that eventually spawned the book. It is linked below.

Is Algebra Necessary?

 

Asking about gender and sexual orientation on surveys

When composing questionnaires, even the simplest of questions sometimes have to adjust to fit the times. The questions we draft become catalysts for larger discussions. That has been the case with what was once the most basic of all questions – asking a respondent for their gender.

This is probably the most commonly asked question in the history of survey research. And it seems basic – we typically just ask:

  • Are you… male or female?

Or, if we are working with younger respondents, we ask:

  • Are you … a boy or a girl?

The question is almost never refused and I’ve never seen any research to suggest this is anything other than a highly reliable measure.

Simple, right?

But we are in the midst of an important shift in social norms toward alternative gender classifications. Traditionally – meaning up until a couple of years ago – if we wanted to classify homosexual respondents we wouldn't come right out and ask the question, for fear that it would be refused or found offensive by many respondents. Instead, we would tend to ask respondents to check off a list of causes that they support. If they chose "gay rights," we would then go ahead and ask if they were gay or straight. Perhaps this was too politically correct, but it was an effective way to classify respondents without giving offense.

We no longer ask it that way. We still ask if the respondent is male or female, but we follow up to ask if they are heterosexual, lesbian, gay, bisexual, transgender, etc.

We recently completed a study among 4-year college students where we posed this question.  Results were as follows:

  • Heterosexual = 81%
  • Bisexual = 8%
  • Lesbian = 3%
  • Gay = 2%
  • Transgender = 1%
  • Other = 2%
  • Refused to answer = 3%

First, it should be noted that 3% refused to answer is less than the 4% that refused to answer the race/ethnicity question on the same survey.  Conclusion:  asking today’s college students about sexual orientation is less sensitive than asking them about their race/ethnicity.

Second, it is more important than ever to ask this question. These data show that about 1 in 5 college students identify as NOT being heterosexual. Researchers need to start viewing these students as a segment, just as we do age or race. This is the reality of the Millennial market:  they are more likely to self-identify as not being heterosexual and more likely to be accepting of alternative lifestyles. Failure to understand this group results in a failure to truly understand the generation.

We have had three different clients ask us if we should start asking this question younger – to high school or middle school students. For now, we are advising against it unless the study has clear objectives that point to a need. Our reasoning for this is not that we feel the kids will find the question to be offensive, but that their parents and educators (whom we are often reliant on to be able to survey minors) might. We think that will change over time as well.

So, perhaps nothing is as simple as it seems.

Crux Research is Going to the Ogilvy’s!

Crux Research is excited to announce that our client, Truth Initiative, is a finalist for two David Ogilvy Awards. These awards are presented by the Advertising Research Foundation (ARF) annually to recognize excellence in advertising research. Ogilvy Awards honor the creative use of research in the advertising development process by research firms, advertising agencies and advertisers.

Truth Initiative is a longstanding client of Crux Research. Truth Initiative is America's largest non-profit public health organization dedicated to making tobacco use a thing of the past. Truth is a finalist in two Ogilvy categories.

For both of these campaigns, Crux Research worked closely with CommSight and Truth Initiative to test the effectiveness of the approaches and executions prior to launch and to track the efficacy of the campaigns once in market.

We are honored and proud to be a part of these campaigns, to have had the opportunity to work with Truth Initiative and CommSight, and most importantly, to have played a supporting role in Truth’s mission to make youth smoking a thing of the past.

The 2016 ARF David Ogilvy Awards Ceremony will be held March 15 in New York. More information can be found on the ARF's Ogilvy Awards page.

How can you predict an election by interviewing only 400 people?

This might be the most commonly asked question researchers get at cocktail parties (to the extent that researchers go to cocktail parties). It is also a commonly unasked question among researchers themselves: how can we predict an election by only talking to 400 people? 

The short answer is we can’t. We can never predict anything with 100% certainty from a research study or poll. The only way we could predict the election with 100% certainty would be to interview every person who will end up voting. Even then, since people might change their mind between the poll and the election we couldn’t say our prediction was 100% likely to come true.

To provide an example, if I flip a coin 100 times, my best estimate before I do it is that I will get "heads" 50 times. But it isn't 100% certain that the coin will land on heads exactly 50 times.

The reason it is hard to comprehend how we predict elections by talking to so few people is our brains aren’t trained to understand probability. If we interview 400 people and find that 53% will vote for Hillary Clinton and 47% for Donald Trump, as long as the poll was conducted well, this result becomes our best prediction for what the vote will be. It is similar to predicting we will get 50 heads out of 100 coin tosses.  53% is our best prediction given the information we have. But, it isn’t an infallible prediction.

Pollsters report a sampling error, which is +/-5% in this case. 400 is a bit of a magic number: it results in a maximum possible sampling error of +/-5%, which has long been an acceptable standard. (Actually, we need 384 interviews for that, but researchers use 400 instead because it sounds better.)
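For readers curious where the 384 figure and the +/-5% come from, here is a minimal sketch using the textbook margin-of-error formula for a simple random sample (the very assumption we question elsewhere on this blog), with the most conservative case of a 50/50 split:

```python
from math import sqrt

z = 1.96   # z-score for 95% confidence
p = 0.5    # a 50/50 split gives the widest (most conservative) margin

# Margin of error for 400 completed interviews
n = 400
moe = z * sqrt(p * (1 - p) / n)
print(f"Margin of error at n=400: +/-{moe:.1%}")       # +/-4.9%, i.e. ~5 points

# Interviews needed for a +/-5% margin of error
e = 0.05
n_needed = (z ** 2 * p * (1 - p)) / e ** 2
print(f"Interviews needed for +/-5%: {n_needed:.0f}")  # ~384
```

The same formula, with the same 50/50 worst case, is what produces the 45% to 55% range in the coin-flipping example further down.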

What that means is that if we repeated this poll over and over, we would expect to find Clinton receiving between 48% and 58% of the intended vote 95% of the time, and Trump receiving between 42% and 52% of the intended vote 95% of the time. If we kept doing poll after poll and averaged Clinton's results, our best guess for that average would be 53%.

In the coin flipping example, if we repeatedly flipped the coin 400 times, we should get between 45% and 55% heads 95% of the time. But, our average and most common result will be 50% heads.

Because the ranges of the election poll (48%-58% for Clinton and 42%-52% for Trump) overlap, you will often see reporters (and the candidate in second place) say that the poll is a "statistical dead heat." There is no such thing as a statistical dead heat in polling unless exactly the same number of respondents prefer each candidate, which may never have actually happened in the history of polling.

There is a much better way to report the findings of the poll. We can statistically determine the “odds” that the 53% for Clinton is actually higher than the 47% for Trump. If we repeated the poll many times, what is the probability that the percentage we found for Clinton would be higher than what we found for Trump? In other words, what is the probability that Clinton is going to win?

The answer in this case is 91%.  Based on our example poll, Clinton has a 91% chance of winning the election. Say that instead of 400 people we interviewed 1,000. The same finding would imply that Clinton has a 99% chance of winning. This is a much more powerful and interesting way to report polling results, and we are surprised we have never seen a news organization use polling data in this way.
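Below is a minimal sketch of one common way to compute this kind of "chance of being ahead," using a normal approximation to the sampling distribution of the poll estimate. It illustrates the approach rather than reproducing the exact figures above; the precise probability depends on the approximation and assumptions a pollster chooses:

```python
from math import sqrt, erf

def chance_of_leading(share: float, n: int) -> float:
    """Approximate probability that a candidate polling at `share` is truly
    ahead, assuming a simple random sample of size n in a two-way race."""
    se = sqrt(share * (1 - share) / n)       # standard error of the share
    z = (share - 0.5) / se                   # distance above the 50% mark
    return 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF

print(f"n=400:  {chance_of_leading(0.53, 400):.0%}")   # ~89% under this approximation
print(f"n=1000: {chance_of_leading(0.53, 1000):.0%}")  # ~97% under this approximation
```

The same approach applies to the coin-flipping example that follows.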

Returning to our coin flipping example, if we flip a coin 400 times and get heads 53% of the time, there is a 91% chance that we have a coin that is unfair, and biased towards heads. If we did it 1,000 times and got heads 53% of the time, there would be a 99% chance that the coin is unfair. Of course, a poll is a snapshot in time. The closer it is to the election, the more likely it is that the numbers will not change.  And, polling predictions assume many things that are rarely true:  that we have a perfect random sample, that all subgroups respond at the same rate, that questions are clear, that people won’t change their mind on Election Day, etc.

So, I guess the correct answer to “how can we predict the election from surveying 400 people” is “we can’t, but we can make a pretty good guess.”

How to Be a Good Research Client

We’ve been involved in hundreds of client relationships, some more satisfying than others. Client-supplier relationships have all of the makings of a stressful partnership:  a lot of money is at stake, projects can make or break careers, and there can be strong personalities on both sides. But, when the client-supplier relationship really works, it can be long-lasting and productive.

As a supplier, we are always looking for clients and projects that hit on three dimensions at the same time: 1) projects that study topics or business situations that are interesting to work on, 2) projects that are led by individuals that are a pleasure to work with, and 3) projects that work out financially. The projects we complete that are of the highest quality are the ones that hit all three of these dimensions at the same time.

So, if you are a client, how can you manage your project to the greatest success with your suppliers?  In short, you want to be sure your projects hit on these dimensions.

You should also view the client-supplier relationship as a partnership. You are paying the bills and are ultimately the boss, but your suppliers provide two important capabilities you don’t have: 1) they are set up to efficiently fulfill projects, and 2) professionally, they bring a broader perspective to your project than you likely have. You want to take advantage of this perspective. The best projects combine a supplier’s knowledge of research and business situations from other contexts with a client’s knowledge of their industry, brands, and internal situations.

There is a balance of control of a project that can swing too far one way or the other. On one extreme is the client who wants little involvement in the project. They seem to just want to write a check and get the project done, but not have to manage it. This is never a prescription for a quality project, but it happens commonly. I once had a client who wrote a check for a project, gave me a list of objectives, and then traveled to Asia for four months and couldn't be reached. While I appreciated and was flattered by his trust, the project would have been better served with more involvement from him.

Another scenario, which is more common, is the micro-managing client – one who wants to be involved in every research task. This can be debilitating for a supplier. Often, we try to push back, keeping you informed yet involved only in the most necessary elements of a study. When clients push back in turn and insist on too much involvement, the supplier will capitulate. But the supplier quickly devolves into an "order taker" and mentally checks out of the project. As a client, you can tell this is happening if your supplier stops volunteering advice and if your conversations get shorter and shorter as the project moves along. Odds are you've reached a point where your supplier is frustrated with you, isn't telling you, and just wants the project to be over.

The key is to keep yourself involved in all aspects where you bring more value to the project than the supplier possibly can. You will know your objectives best. You will know what has to happen when the project is over. But, you likely add little value to project execution.

We are blessed to have clients who largely strike the right balance. They are involved in key stages and always know the status of their study. But, they respect the advice we give along the way, understand the strengths we bring, and listen to our advice even if they choose not to take it. They come to us with questions that go beyond research to hear our perspective.

In short, we don’t like the micromanaging client or the absentee client. We do our best work with clients that are clearly in control of their project, but treat us as key partners along the way.