Archive for the 'Statistics and probability' Category

Oops, the polls did it again

Many people had trouble sleeping last night wondering if their candidate was going to be President. I couldn’t sleep because as the night wore on it was becoming clear that this wasn’t going to be a good night for the polls.

Four years ago on the day after the election I wrote about the “epic fail” of the 2016 polls. I couldn’t sleep last night because I realized I was going to have to write another post about another polling failure. While the final vote totals may not be in for some time, it is clear that the 2020 polls are going to be off on the national vote even more than the 2016 polls were.

Yesterday, on election day, I received an email from a fellow market researcher and business owner. We are involved in a project together, and he was lamenting how poor the data quality in his recent studies has been and wondering whether we were having the same problems.

In 2014 we wrote a blog post that cautioned our clients that we were detecting poor quality interviews that needed to be discarded about 10% of the time. We were having to throw away about 1 in 10 of the interviews we collected.

Six years later that percentage has climbed to between 33% and 45%, and we tend to be conservative about which interviews we toss. It is fair to say that for most market research studies today, between a third and a half of the interviews collected are, for lack of a better term, junk.

It has gotten so bad that new firms have sprung up that sit between sample providers and online questionnaires in order to protect against junk interviews. They protect against bots, survey farms, duplicate interviews, etc. Just the fact that these firms and terms like “survey farms” exist should give researchers pause regarding data quality.

When I started in market research in the late 1980s and early 1990s, we had a spreadsheet program that was used to help us cost out projects. One parameter in this spreadsheet was the “refusal rate” – the percentage of respondents who would outright refuse to take part in a study. While the refusal rate varied by study, the default assumption in this program was 40%, meaning that on average we expected respondents to cooperate 60% of the time.

According to Pew and AAPOR in 2018 the cooperation rate for telephone surveys was 6% and falling rapidly.

Cooperation rates in online surveys are much harder to calculate in a standardized way, but most estimates I have seen and my own experience suggest that typical cooperation rates are about 5%. That means for a 1,000-respondent study, at least 20,000 emails are sent, which is about four times the population of the town I live in.
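To make the arithmetic concrete, here is a minimal sketch in Python (the 5% cooperation rate and the junk percentages above are the rough figures from this post, treated here as assumptions) of how many invitations a study needs once low cooperation and discarded interviews are factored in:

    # Rough illustration of how low cooperation and junk interviews inflate outreach.
    # The rates used are the approximate figures from this post, not industry constants.
    def invitations_needed(usable_interviews, cooperation_rate, junk_rate):
        collected = usable_interviews / (1 - junk_rate)   # interviews to collect so enough survive the junk screen
        return collected / cooperation_rate               # invitations to send so enough people cooperate

    for junk in (0.10, 0.33, 0.45):
        n = invitations_needed(1000, 0.05, junk)
        print(f"junk rate {junk:.0%}: ~{n:,.0f} invitations for 1,000 usable interviews")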

This is all background to try to explain why the 2020 polls appear to be headed to a historic failure. Election polls are the public face of the market research industry. Relative to most research projects, they are very simple. The problems pollsters have faced in the last few cycles are emblematic of something those working in research know but rarely like to discuss: the quality of the data collected for research and polls has been declining, and that decline should alarm researchers.

I could go on about the causes of this. We’ve tortured our respondents for a long time. Despite claims to the contrary, we haven’t been able to generate anything close to a probability sample in years. Our methodologists have gotten cocky and feel like they can weight any sampling anomalies away. Clients are forcing us to conduct projects on timelines that make it impossible to guard against poor quality data. We focus on sampling error and ignore more consequential errors. The panels we use have become inbred and gather the same respondents across sources. Suppliers are happy to cash the check and move on to the next project.

This is the research conundrum of our times: in a world where we collect more data on people’s behavior and attitudes than ever before, the quality of the insights we glean from these data is in decline.

Post 2016 the polling industry brain trust rationalized and claimed that the polls actually did a good job, convened some conferences to discuss the polls, and made modest methodological changes. Almost all of these changes related to sampling and weighting. But, as it appears that the 2020 polling miss is going to be way beyond what can be explained by sampling (last night I remarked to my wife that “I bet the p-value of this being due to sampling is about 1 in 1,000”), I feel that pollsters have addressed the wrong problem.

None of the changes pollsters made addressed the long-term problems researchers face with data quality. When you have a response rate of 5% and up to half of those are interviews you need to throw away, errors that can arise are orders of magnitude greater than the errors that are generated by sampling and weighting mistakes.

I don’t want to sound like I have the answers. Just a few days ago I posted that I thought that, on balance, there were more reasons to conclude that the polls would do a good job this time than to conclude that they would fail. When I look through my list of potential reasons the polls might fail, nothing leaps out at me as an obvious cause, so perhaps the problem is multi-faceted.

What I do know is the market research industry has not done enough to address data quality issues. And every four years the polls seem to bring that into full view.

Will the polls be right this time?

The 2016 election was damaging to the market research industry. The popular perception has been that in 2016 the pollsters missed the mark and miscalled the winner. In reality, the 2016 polls were largely predictive of the national popular vote. But 2016 was widely seen by non-researchers as a disaster. Pollsters and market researchers have a lot riding on the perceived accuracy of the 2020 polls.

The 2016 polls did a good job of predicting the national vote total, but in a large majority of cases the final national polls were off in the direction of overpredicting the vote for Clinton and underpredicting the vote for Trump. That is pretty much a textbook definition of bias. Before the books are closed on the 2016 pollsters’ performance, it is important to note that the 2012 polls were off even further, mostly in the direction of overpredicting the vote for Romney and underpredicting the vote for Obama. The “bias,” although small, has swung back and forth between parties.

Election Day 2020 is in a few days and we may not know the final results for a while. It won’t be possible to truly know how the polls did for some weeks or months.

That said, there are reasons to believe that the 2020 polls will do an excellent job of predicting voter behavior and there are reasons to believe they may miss the mark.  

There are specific reasons why it is reasonable to expect that the 2020 polls will be accurate. So, what is different in 2020? 

  • There have been fewer undecided voters at all stages of the process. Most voters have had their minds made up well in advance of election Tuesday. This makes things simpler from a pollster’s perspective. A polarized and engaged electorate is one whose behavior is predictable. Figuring out how to partition undecided voters moves polling more in a direction of “art” than “science.”
  • Perhaps because of this, polls have been remarkably stable for months. In 2016, there was movement in the polls throughout and particularly over the last two weeks of the campaign. This time, the polls look about like they did weeks and even months ago.
  • Turnout will be very high. The art in polling is in predicting who will turn out and a high turnout election is much easier to forecast than a low turnout election.
  • There has been considerable early voting. There is always less error in asking about what someone has recently done than what they intend to do in the future. Later polls could ask many respondents how they voted instead of how they intended to vote.
  • There have been more polls this time. As our sample size of polls increases so does the accuracy. Of course, there are also more bad polls out there this cycle as well.
  • There have been more and better polls in the swing states this time. The true problem pollsters had in 2016 was with state-level polls. There was less attention paid to them, and because the national pollsters and media didn’t invest much in them, the state-level polling is where it all went wrong. This time, there has been more investment in swing-state polling.
  • The media invested more in polls this time. A hidden secret in polling is that election polls rarely make money for the pollster. This keeps many excellent research organizations from getting involved in them or dedicating resources to them. The ones that do tend to do so solely for reputational reasons. An increased investment this time has helped to get more researchers involved in election polling.
  • Response rates are up slightly. 2020 is the first year in which we have seen the long-term decline in survey response rates stabilize and even tick up a little. This is likely a minor factor in the success of the 2020 polls, but it is in the right direction.
  • The race isn’t as close as it was in 2016. This one might only be appreciated by statisticians. Since variability is maximized in a 50/50 distribution, the further a race is from even, the more accurate a poll of it will be. This is another small factor in the direction of the polls being accurate in 2020.
  • There has not been late breaking news that could influence voter behavior. In 2016, the FBI director’s decision to announce a probe into Clinton’s emails came late in the campaign. There haven’t been any similar bombshells this time.
  • Pollsters started setting quotas and weighting on education. In the past, pollsters would balance samples on characteristics known to correlate highly with voting behavior – characteristics like age, gender, political party affiliation, race/ethnicity, and past voting behavior. In 2016, pollsters learned the hard way that educational attainment had become an additional characteristic to consider when crafting samples because voter preferences vary by education level. The good polls fixed that this go round.
  • In a similar vein, there has been tighter scrutiny of polling methodology. While the media can still be cavalier about digging into methodology, this time they were more likely to insist that pollsters outline their methods. This is the first time I can remember seeing news stories where pollsters were asked questions about methodology.
  • The notion that there are Trump supporters who intentionally lie to pollsters has largely been disproven by studies from very credible sources, such as Yale and Pew. Much more relevant is the pollster’s ability to predict turnout from both sides.

There are a few things going on that give the polls some potential to lay an egg.

  • The election will be decided by a small number of swing states. Swing-state polls are not as accurate and are often funded by local media and universities that don’t have the budgets or the expertise to do them correctly. The polls are close and less stable in these states. There is some indication that swing-state polls have been tightening, and Biden’s lead in many of them isn’t much different than Clinton’s lead was in 2016.
  • Biden may be making the same mistake Clinton made. This is a political and not a research-related reason, but in 2016 Clinton failed to aggressively campaign in the key states late in the campaign while Trump went all in. History could be repeating itself. Field work for final polls is largely over now, so the polls will not reflect things that happen the last few days.
  • If there is a wild-card that will affect polling accuracy in 2020, it is likely to center around how people are voting. Pollsters have been predicting election day voting for decades. In this cycle votes have been coming in for weeks and the methods and rules around early voting vary widely by state. Pollsters just don’t have past experience with early voting.
  • There is really no way for pollsters to account for potential disqualifications for mail-in votes (improper signatures, late receipts, legal challenges, etc.) that may skew to one candidate or another.
  • Similarly, any systematic voter suppression would likely cause the polls to underpredict Trump. These voters are available to poll, but may not be able to cast a valid vote.
  • There has been little mention of third-party candidates in polling results. The Libertarian candidate is on the ballot in all 50 states. The Green Party candidate is on the ballot in 31 states. Other parties have candidates on the ballot in some states but not others. These candidates aren’t expected to garner a lot of votes, but in a close election even a few percentage points could matter to the results. I have seen national polls from reputable organizations where they weren’t included.
  • While there is little credible data supporting the idea that there are “shy” Trump voters who are intentionally lying to pollsters, there still might be a social desirability bias that would undercount Trump’s support. That social desirability bias could be larger than it was in 2016, and it is still likely in the direction of underpredicting Trump’s vote count.
  • Polls (and research surveys) tend to underrepresent rural areas. Folks in rural areas are less likely to be in online panels and to cooperate on surveys. Few pollsters take this into account. (I have never seen a corporate research client correcting for this, and it has been a pet peeve of mine for years.) This is a sample coverage issue that will likely undercount the Trump vote.
  • Sampling has continued to get harder. Cell phone penetration has continued to grow, online panel quality has fallen, and our best option (ABS sampling) is still far from random and so expensive it is beyond the reach of most polls.
  • “Herding” is a rarely discussed but very real polling problem. Herding happens when a pollster conducts a poll that doesn’t conform to what other polls are finding. These polls tend to get scrutinized and reweighted until they fit expectations, or, even worse, get buried and never released. Think about it: if you were a respected polling organization and a recent poll of yours showed Trump winning the popular vote, you would review that poll intensely before releasing it, and you might choose not to release it at all, because a poll that looks different from the others could put your firm’s reputation at risk. The only polls I have seen that appear to be out of range are from smaller organizations that are likely willing to run the risk of being viewed as predicting against the tide or that clearly have a political bias to them.

Once the dust settles, we will compose a post that analyzes how the 2020 polls did. For now, we feel there are more credible reasons to believe the polls will be seen as predictive than to believe we are on the edge of a polling mistake. From a researcher’s standpoint, the biggest worry is that the polls will indeed be accurate but won’t match the vote totals because of technicalities in vote counting and legal challenges. That would reflect unfairly on the polling and research industries.

Researchers should be mindful of “regression toward the mean”

There is a concept in statistics known as regression toward the mean that is important for researchers to consider as we look at how the COVID-19 pandemic might change future consumer behavior. This concept is as challenging to understand as it is interesting.

Regression toward the mean implies that an extreme example in a data set tends to be followed by an example that is less extreme and closer to the “average” value of the population. A common example: if two parents who are above average in height have a child, that child is demonstrably more likely to be close to average height than to the “extreme” height of the parents.

This is an important concept to keep in mind in the design of experiments and when analyzing market research data. I did a study once where we interviewed the “best” customers of a quick service restaurant, defined as those that had visited the restaurant 10 or more times in the past month. We gave each of them a coupon and interviewed them a month later to determine the effect of the coupon. We found that they actually went to the restaurant less often the month after receiving the coupon than the month before.

It would have been easy to conclude that the coupon caused customers to visit less frequently and that there was something wrong with it (which is what we initially thought). What really happened was a regression toward the mean. Surveying customers who had visited a large number of times in one month made it likely that these same customers would visit a more “average” amount in a following month whether they had a coupon or not. This was a poor research design because we couldn’t really assess the impact of the coupon which was our goal.

Personally, I’ve always had a hard time understanding and explaining regression toward the mean because the concept seems to be counter to another concept known as “independent trials”. You have a 50% chance of flipping a fair coin and having it come up heads regardless of what has happened in previous flips. You can’t guess whether the roulette wheel will come up red or black based on what has happened in previous spins. So, why would we expect a restaurant’s best customers to visit less in the future?

This happens when we start with an extreme slice of the population. The most frequent customers are not “average” and have room to regress toward the mean in the following month. Had we surveyed customers across the full range of patronage, the regression effect would largely have washed out and we could have done a better job of isolating the effect of the coupon.
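A quick simulation makes the point. This sketch (Python with numpy; the visit rates are invented purely for illustration) gives each customer a stable underlying visit rate, picks the “best” customers based on one month, and shows that those same customers average fewer visits the following month even though nothing was done to them:

    import numpy as np

    rng = np.random.default_rng(42)

    # Each customer has a stable underlying monthly visit rate (assumed here to be
    # gamma-distributed); observed visits in any given month are Poisson noise around it.
    n_customers = 100_000
    true_rate = rng.gamma(shape=2.0, scale=2.0, size=n_customers)   # population averages ~4 visits
    month1 = rng.poisson(true_rate)
    month2 = rng.poisson(true_rate)   # note: no coupon effect is simulated at all

    best = month1 >= 10   # the "best customers": 10+ visits in month 1
    print("Month 1 average, best customers:", round(month1[best].mean(), 1))
    print("Month 2 average, same customers:", round(month2[best].mean(), 1))
    print("Overall average, all customers:", round(month1.mean(), 1))

The “best” customers’ second-month average falls back toward their underlying rate with no coupon in the picture at all, which is exactly the trap the coupon study fell into.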

Here is another example of regression toward the mean. Suppose the Buffalo Bills quarterback, Josh Allen, has a monster game when they play the New England Patriots. Allen, who has been averaging about 220 yards passing per game in his career goes off and burns the Patriots for 450 yards. After we are done celebrating and breaking tables in western NY, what would be our best prediction for the yards Allen will throw for the second time the Bills play the Patriots?

Well, you could say the best prediction is 450 yards as that is what he did the first time. But, regression toward the mean would imply that he’s more likely to throw close to his historic average of 220 yards the second time around. So, when he throws for 220 yards the second game it is important to not give undue credit to Bill Belichick for figuring out how to stop Allen.

Here is another sports example. I have played (poorly) in a fantasy baseball league for almost 30 years. In 2004, Derek Jeter entered the season as a career .317 hitter. After the first 100 games or so he was hitting under .200. The person in my league who owned him was frustrated, so I traded for him. Jeter went on to hit well over .300 the rest of the season. This was predictable because there wasn’t any underlying reason (like an injury) for his slump. His underlying average was much better than his current performance, and because of regression toward the mean it was likely he would have a great second half of the season, which he did.

There are interesting HR examples of regression toward the mean. Say you have an employee that does a stellar job on an assignment – over and above what she normally does. You praise her and give her a bonus. Then, you notice that on the next assignment she doesn’t perform on the same level. It would be easy to conclude that the praise and bonus caused the poor performance when in reality her performance was just regressing back toward the mean. I know sales managers who have had this exact problem – they reward their highest performers with elaborate bonuses and trips and then notice that the following year they don’t perform as well. They then conclude that their incentives aren’t working.

The concept is hard at work in other settings. Mutual funds that outperform the market tend to fall back in line the next year. You tend to feel better the day after you go to the doctor. Companies profiled in “Good to Great” tend to have hard times later on.

Regression toward the mean is important to consider when designing sampling plans. If you are sampling an extreme portion of a population it can be a relevant consideration. Sample size is also important. When you have just a few cases of something, mathematically an extreme response can skew your mean.

The issue to be wary of is that when we fail to consider regression toward the mean, we tend to overstate the importance of correlation between two things. We think our mutual fund manager is a genius when he just got lucky, that our coupon isn’t working, or that Josh Allen is becoming the next Drew Brees. All of these could be true, but be careful in how you interpret data that result from extreme or small sample sizes.

How does this relate to COVID? Well, at the moment, I’d say we are still in an “inflated expectations” portion of a hype curve when we think of what permanent changes may take place resulting from the pandemic. There are a lot of examples. We hear that commercial real estate is dead because businesses will keep employees working from home. Higher education will move entirely online. In-person qualitative market research will never happen again. Business travel is gone forever. We will never again work in an office setting. Shaking hands is a thing of the past.

I’m not saying there won’t be a new normal resulting from COVID, but if we believe in regression toward the mean and the hype curve, we’d predict that the future will look more like the past than like the extreme version of the present currently being portrayed. We will naturally regress back toward the past rather than toward a more extreme version of current behaviors. The “mean” being regressed to has likely shifted, but not as much as the current, extreme situation implies.

“Margin of error” sort of explained (+/-5%)

It is now September of an election year. Get ready for a two-month deluge of polls and commentary on them. One thing you can count on is reporters and pundits misinterpreting the meaning behind “margin of error.” This post is meant to simplify the concept.

Margin of error refers to sampling error and is present on every poll or market research survey. It can be mathematically calculated. All polls seek to figure out what everybody thinks by asking a small sample of people. There is always some degree of error in this.

The formula for margin of error is fairly simple and depends mostly on two things: how many people are surveyed and how variable their responses are. The more people you interview, the lower (better) the margin of error. The more the people you interview give the same response (lower variability), the better the margin of error. If a poll interviews a lot of people and they all seem to be saying the same thing, the margin of error of the poll is low. If the poll interviews a small number of people and they disagree a lot, the margin of error is high.
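For readers who want to see the actual formula, here is a minimal sketch in Python (95% confidence and simple random sampling are assumed, and as discussed elsewhere on this blog the latter is a generous assumption) showing how sample size and variability drive the margin of error:

    import math

    def margin_of_error(p, n, z=1.96):
        # Margin of error for a single proportion p from a simple random sample of
        # size n, at 95% confidence (z = 1.96).
        return z * math.sqrt(p * (1 - p) / n)

    # More respondents -> lower margin of error
    print(round(margin_of_error(0.50, 500) * 100, 1))    # ~4.4 points
    print(round(margin_of_error(0.50, 1000) * 100, 1))   # ~3.1 points

    # Less variability (responses further from 50/50) -> lower margin of error
    print(round(margin_of_error(0.80, 1000) * 100, 1))   # ~2.5 points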

Most reporters understand that a poll with a lot of respondents is better than one with fewer respondents. But most don’t understand the variability component.

There is another assumption used in the calculation for sampling error as well: the confidence level desired. Almost every pollster will use a 95% confidence level, so for this explanation we don’t have to worry too much about that.

What does it mean for a difference to be outside the margin of error on a poll? It simply means that the two percentages being compared can be deemed different from one another with 95% confidence. Put another way, if the poll were repeated a zillion times, we’d expect that at least 19 times out of 20 the two numbers would be different.

If Biden is leading Trump in a poll by 8 points and the margin of error is 5 points, we can be confident he is really ahead because this lead is outside the margin of error. Not perfectly confident, but more than 95% confident.

Here is where reporters and pundits mess it up.  Say they are reporting on a poll with a 5-point margin of error and Biden is leading Trump by 4 points. Because this lead is within the margin of error, they will often call it a “statistical dead heat” or say something that implies that the race is tied.

Neither is true. The only way for a poll to show a true dead heat is for the exact same number of people to choose each candidate. In this example the race isn’t tied at all; we just have less than 95% confidence that Biden is leading. In this example, we might be roughly 90% sure that Biden is leading Trump. So, why would anyone call that a statistical dead heat? It would be far better to report the level of confidence that we have that Biden is winning, or the p-value of the result. I have never seen a reporter do that, but some of the election prediction websites do.
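To illustrate that kind of reporting, here is a rough sketch in Python that converts a lead into a confidence level. It treats the poll’s reported margin of error as if it applied to the lead itself and leans on a simple normal approximation, both simplifications, so the numbers are illustrative only:

    from statistics import NormalDist

    def confidence_numbers_differ(lead_points, moe_points, z95=1.96):
        # Back out the standard error implied by the reported 95% margin of error
        # (treating it as the margin of error of the lead itself -- a simplification),
        # then compute the confidence that the two candidates' numbers really differ.
        se = moe_points / z95
        return 2 * NormalDist().cdf(lead_points / se) - 1

    print(round(confidence_numbers_differ(4, 5) * 100))   # ~88%: short of 95%, but nowhere near a tie
    print(round(confidence_numbers_differ(8, 5) * 100))   # ~100%: comfortably past 95%

Reporting something like “we are roughly 90% confident Biden is ahead” is far more informative than declaring a statistical dead heat.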

Pollsters themselves will misinterpret the concept. They will deem their poll “accurate” as long as the election result is within the margin of error. In close elections this isn’t helpful, as what really matters is making a correct prediction of what will happen.

Most of the 2016 final polls were accurate if you define being accurate as coming within the margin of error. But, since almost all of them predicted the wrong winner, I don’t think we will see future textbooks holding 2016 out there as a zenith of polling accuracy.

Another mistake reporters (and researchers) make is not recognizing that the margin of error refers only to sampling error, which is just one of many errors that can occur in a poll. The poor performance of the 2016 presidential polls really had nothing to do with sampling error at all.

I’ve always questioned why there is so much emphasis on sampling error for a couple of reasons. First, the calculation of sampling error assumes you are working with a random sample which in today’s polling world is almost never the case. Second, there are many other types of errors in survey research that are likely more relevant to a poll’s accuracy than sampling error. The focus on sampling error is driven largely because it is the easiest error to mathematically calculate. Margin of error is useful to consider, but needs to be put in context of all the other types of errors that can happen in a poll.

The myth of the random sample

Sampling is at the heart of market research. We ask a few people questions and then assume everyone else would have answered the same way.

Sampling works in all types of contexts. Your doctor doesn’t need to test all of your blood to determine your cholesterol level – a few ounces will do. Chefs taste a spoonful of their creations and then assume the rest of the pot will taste the same. And, we can predict an election by interviewing a fairly small number of people.

The mathematical procedures applied to samples that enable us to project to a broader population all assume that we have a random sample. Or, as I tell research analysts: everything they taught you in statistics assumes you have a random sample. T-tests, hypothesis tests, regressions, etc. all have a random sample as a requirement.

Here is the problem: We almost never have a random sample in market research studies. I say “almost” because I suppose it is possible to do, but over 30 years and 3,500 projects I don’t think I have been involved in even one project that can honestly claim a random sample. A random sample is sort of a Holy Grail of market research.

A random sample might be possible if you have a captive audience. You can randomly sample some of the passengers on a flight, a few students in a classroom, or prisoners in a detention facility. As long as you are not trying to project beyond that flight, that classroom, or that jail, the math behind random sampling will apply.

Here is the bigger problem: Most researchers don’t recognize this, disclose this, or think through how to deal with it. Even worse, many purport that their samples are indeed random, when they are not.

For a bit of research history: once the market research industry really got going, the telephone random digit dial (RDD) sample became standard. Telephone researchers could randomly call landline phones. When landline telephone penetration and response rates were both high, this provided excellent data. However, RDD still wasn’t providing a true random, or probability, sample. Some households had more than one phone line (and few researchers corrected for this), many people lived in group situations (colleges, medical facilities) where they couldn’t be reached, some did not have a landline, and even at its peak, telephone response rates were only about 70%. Not bad. But also not random.

Once the Internet came of age, researchers were presented with new sampling opportunities and challenges. Telephone response rates plummeted (to 5-10%) making telephone research prohibitively expensive and of poor quality. Online, there was no national directory of email addresses or cell phone numbers and there were legal prohibitions against spamming, so researchers had to find new ways to contact people for surveys.

Initially, and this is still a dominant method today, research firms created opt-in panels of respondents. Potential research participants were asked to join a panel, filled out an extensive demographic survey, and were paid small incentives to take part in projects. These panels suffer from three response issues: 1) not everyone is online or online at the same frequency, 2) not everyone who is online wants to be in a panel, and 3) not everyone in the panel will take part in a study. The result is a convenience sample. Good researchers figured out sophisticated ways to handle the sampling challenges that result from panel-based samples, and they work well for most studies. But, in no way are they a random sample.
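As one illustration of the kind of adjustment panel researchers lean on, here is a minimal post-stratification (cell-weighting) sketch in Python. The age groups, population shares, and survey percentages are invented purely for illustration; real weighting schemes use many more variables and typically rake across them:

    # Minimal cell weighting: weight = population share / sample share for each group.
    # All figures below are invented for illustration only.
    population_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
    sample_share     = {"18-34": 0.18, "35-54": 0.32, "55+": 0.50}   # opt-in panels often skew older

    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)   # 18-34 respondents count ~1.67x, 55+ respondents ~0.74x

    # Apply the weights to an (invented) survey result that differs by age group
    pct_agree = {"18-34": 0.62, "35-54": 0.55, "55+": 0.41}
    weighted = sum(sample_share[g] * weights[g] * pct_agree[g] for g in weights)
    print(round(weighted, 3))   # 0.519 -- the estimate the population age mix implies

Weighting like this corrects for who is missing on characteristics we can observe, but it cannot fix the fact that people willing to join a panel may differ from everyone else in ways we cannot see, which is why no amount of weighting turns a convenience sample into a random one.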

River sampling is a term often used to describe respondents who are “intercepted” on the Internet and asked to fill out a survey. Potential respondents are invited via online ads and offers placed on a range of websites. If interested, they are typically pre-screened and sent along to the online questionnaire.

Because so much is known about what people are doing online these days, sampling firms have some excellent science behind how they obtain respondents efficiently with river sampling. It can work well, but response rates are low and the nature of the online world is changing fast, so it is hard to get a consistent river sample over time. Nobody being honest would ever use the term “random sampling” when describing river samples.

Panel-based samples and river samples represent how the lion’s share of primary market research is being conducted today. They are fast and inexpensive and when conducted intelligently can approximate the findings of a random sample. They are far from perfect, but I like that the companies providing them don’t promote them as being random samples. They involve some biases and we deal with these biases as best we can methodologically. But, too often we forget that they violate a key assumption that the statistical tests we run require: that the sample is random. For most studies, they are truly “close enough,” but the problem is we usually fail to state the obvious – that we are using statistical tests that are technically not appropriate for the data sets we have gathered.

Which brings us to a newer, shiny object in the research sampling world: ABS samples. ABS (address-based) samples are purer from a methodological standpoint. While ABS samples have been around for quite some time, they are just now being used extensively in market research.

ABS samples are based on US Postal Service lists. Because USPS has a list of all US households, this list is an excellent sampling frame. (The Census Bureau also has an excellent list, but it is not available for researchers to use.) The USPS list is the starting point for ABS samples.

Research firms will take the USPS list and recruit respondents from it, either to be in a panel or to take part in an individual study. This recruitment can be done by mail, phone, or even online. They often append publicly-known information onto the list.

As you might expect, an ABS approach suffers from some of the same issues as other approaches. Cooperation rates are low and incentives (sometimes large) are necessary. Most surveys are conducted online, and not everyone in the USPS list is online or has the same level of online access. There are some groups (undocumented immigrants, homeless) that may not be in the USPS list at all. Some (RVers, college students, frequent travelers) are hard to reach. There is evidence that ABS approaches do not cover rural areas as well as urban areas. Some households use post office boxes and not residential addresses for their mail. Some use more than one address. So, although ABS lists cover about 97% of US households, the 3% that they do not cover are not randomly distributed.

The good news is, if done correctly, the biases that result from an ABS sample are more “correctable” than those from other types of samples because they are measurable.

A recent Pew study indicates that survey bias and the number of bogus respondents is a bit smaller for ABS samples than opt-in panel samples.

But ABS samples are not random samples either. I have seen articles that suggest that of all those approached to take part in a study based on an ABS sample, less than 10% end up in the survey data set.

The problem is not necessarily with ABS samples, as most researchers would concur that they are the best option we have and come the closest to a random sample. The problem is that many firms providing ABS samples are selling them as “random samples,” and that is disingenuous at best. Just because the sampling frame used to recruit a survey panel can claim to be “random” does not mean that the respondents you end up with in a research database constitute a random sample.

Does this matter? In many ways, it likely does not. There are biases and errors in all market research surveys. These biases and errors vary not just by how the study was sampled, but also by the topic of a question, its tone, the length of the survey, and so on. Many times, survey errors are not even constant throughout an individual survey. Biases in surveys tend to be “known unknowns” – we know they are there, but we aren’t sure what they are.

There are many potential sources of errors in survey research. I am always reminded of a quote from Humphrey Taylor, the past Chairman of the Harris Poll who said “On almost every occasion when we release a new survey, someone in the media will ask, “What is the margin of error for this survey?” There is only one honest and accurate answer to this question — which I sometimes use to the great confusion of my audience — and that is, “The possible margin of error is infinite.”  A few years ago, I wrote a post on biases and errors in research, and I was able to quickly name 15 of them before I even had to do an Internet search to learn more about them.

The reality is, the improvement in bias achieved by an ABS sample over a panel-based sample is small and likely inconsequential when considered next to the other sources of error that can creep into a research project. Because of this, and the fact that ABS sampling is really expensive, we tend to recommend ABS panels in only two cases: 1) if the study will result in academic publication, as academics are more accepting of data that come from an ABS approach, and 2) if we are working in a small geography, where panel-based samples are not feasible.

Again, ABS samples are likely the best samples we have at this moment. But firms that provide them are often inappropriately portraying them as yielding random samples. For most projects, the small improvement in bias they provide is not worth the considerably larger budget and longer timeline they require, which is why ABS samples are currently used in only a small proportion of research studies. I consider ABS to be “state of the art” with the emphasis on “art,” as sampling is often less of a science than people think.

Should we get rid of statistical significance?

There has been recent debate among academics and statisticians surrounding the concept of statistical significance. Some high-profile medical studies have just narrowly missed meeting the traditional statistical significance cutoff of 0.05. This has resulted in potentially life changing drugs not being approved by regulators or pursued for further development by pharma companies. These cases have led to a much-needed review and re-education as to what statistical significance means and how it should be applied.

In a 2014 blog post (Is This Study Significant?) we discussed common misunderstandings market researchers have regarding statistical significance. The recent debate suggests this misunderstanding isn’t limited to market researchers – it appears that academics and regulators have the same difficulty.

Statistical significance is a simple concept. However, it seems that the human brain just isn’t wired well to understand probability and that lies at the root of the problem.

A measure is typically classified as statistically significant if its p-value is 0.05 or less. Loosely, this means that if there were really no underlying difference, a result at least this extreme would be expected to occur by chance less than 5% of the time. Two measures are deemed statistically different when the observed gap between them clears that bar.
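As a concrete example of the mechanics, here is a sketch of how a p-value is typically computed when comparing two survey percentages (a standard two-proportion z-test in Python; it assumes the simple random samples that, as noted below, real studies rarely have, and the percentages are invented):

    import math
    from statistics import NormalDist

    def two_proportion_p_value(p1, n1, p2, n2):
        # Two-sided p-value for the difference between two independent proportions.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # 46% of 500 respondents liked the concept in one cell vs. 40% of 500 in another
    print(round(two_proportion_p_value(0.46, 500, 0.40, 500), 3))   # ~0.055: just misses the 0.05 cutoff

A difference that “just misses” at p = 0.055 is not meaningfully different from one that “just passes” at p = 0.045, which is exactly why treating 0.05 as a bright line causes trouble.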

There are real problems with this approach. Foremost, it is unclear how this 5% probability cutoff was chosen. Somewhere along the line it became a standard among academics. This standard could have just as easily been 4% or 6% or some other number. This cutoff was chosen subjectively.

What are the chances that this 5% cutoff is optimal for all studies, regardless of the situation?

Regulators should look beyond statistical significance when they are reviewing a new medication. Let’s say a study was only significant at 6%, not quite meeting the 5% standard. That shouldn’t automatically disqualify a promising medication from consideration. Instead, regulators should look at the situation more holistically. What will the drug do? What are its side effects? How much pain does it alleviate? What is the risk of making mistakes in approval: in approving a drug that doesn’t work or in failing to approve a drug that does work? We could argue that the level of significance required in the study should depend on the answers to these questions and shouldn’t be the same in all cases.

The same is true in market research. Suppose you are researching a new product and the study is only significant at 10% and not the 5% that is standard. Whether you should greenlight the product for development depends on considerations beyond statistical significance. What is the market potential of the product? What is the cost of its development? What is the risk of failing to greenlight a winning idea or greenlighting a bad idea? Currently, too many product managers rely too much on a research project to give them answers when the study is just one of many inputs into these decisions.

There is another reason to rethink the concept of statistical significance in market research projects. Statistical significance assumes a random or a probability sample. We can’t stress this enough – there hasn’t been a market research study conducted in at least 20 years that can credibly claim to have used a true probability sample of respondents. Some (most notably ABS samples) make a valiant attempt to do so but they still violate the very basis for statistical significance.

Given that, why do research suppliers (Crux Research included) continue to do statistical testing on projects? Well, one reason is clients have come to expect it. A more important reason is that statistical significance still holds some meaning. On almost every study we need to draw a line and say that two data points are “different enough” to point out to clients and to draw conclusions from. Statistical significance is a useful tool for this. It just should no longer be viewed as a tool that lets us say precise things like “these two data points have a 95% chance of actually being different.”

We’d rather use a probability approach and report to clients the chance that two data points would be different if we had been lucky enough to use a random sample. That is a much more useful way to look at data, but it probably won’t be used much until colleges start teaching it and a new generation of researchers emerges.

The current debate over the usefulness of statistical significance is a healthy one to have. Hopefully, it will cause researchers of all types to think deeper about how precise a study needs to be and we’ll move away from the current one-size-fits-all thinking that has been pervasive for decades.

Let’s Make Research and Polling Great Again!


The day after the US Presidential election, we quickly wrote and posted about the market research industry’s failure to accurately predict the election.  Since this has been our widest-read post (by a factor of about 10!) we thought a follow-up was in order.

Some of what we predicted has come to pass. Pollsters are being defensive, claiming their polls really weren’t that far off, and are not reaching very deep to try to understand the core of why their predictions were poor. The industry has had a couple of confabs, where the major players have denied a problem exists.

We are at a watershed moment for our industry. Response rates continue to plummet, clients are losing confidence in the data we provide, and we are swimming in so much data our insights are often not able to find space to breathe. And the public has lost confidence in what we do.

Sometimes it is everyday conversations that illuminate a problem. Recently, I was staying at an AirBnB in Florida. The host (Dan) was an ardent Trump supporter, and at one point he asked me what I did for a living. When I told him I was a market researcher, the conversation quickly turned to why the polls failed to accurately predict the winner of the election. Talking with Dan, I quickly realized the implications of the 2016 election polling for our industry. He felt that we can now safely ignore all polls – on issues, approval ratings, voter preferences, etc.

I found myself getting defensive. After all, the polls weren’t off that much.  In fact, they were actually off by more in 2012 than in 2016 – the problem being that this time the polling errors resulted in an incorrect prediction. Surely we can still trust polls to give a good sense of what our citizenry thinks about the issues of the day, right?

Not according to Dan. He didn’t feel our political leaders should pay attention to the polls at all because they can’t be trusted.

I’ve even seen a new term for this bandied about:  poll denialism. It is a refusal to believe any poll results because of their past failures. Just the fact that this has been named should be scary enough for researchers.

This is unnerving not just to the market research industry, but to our democracy in general.  It is rarely stated overtly, but poll results are a key way political leaders keep in touch with the needs of the public, and they shape public policy a lot more than many think. Ignoring them is ignoring public opinion.

Market research remains closely associated with political polling. While I don’t think clients have become as mistrustful about their market research as the public has become about polling, clients likely have their doubts. Much of what we do as market researchers is much more complicated than election polling. If we can’t successfully predict who will be President, why would a client believe our market forecasts?

We are at a defining moment for our industry – a time when clients and suppliers will realize this is an industry that has gone adrift and needs a righting of the course. So what can we do to make research great again?  We have a few ideas.

  1. First and foremost, if you are a client, make greater demands for data quality. Nothing will stimulate the research industry more to fix itself than market forces – if clients stop paying for low quality data and information, suppliers will react.
  2. Slow down! There is a famous saying about all projects.  They have three elements that clients want:  a) fast, b) good, and c) cheap, and on any project you can choose two of these.  In my nearly three decades in this industry I have seen this dynamic change considerably. These days, “fast” is almost always trumping the other two factors.  “Good” has been pushed aside.  “Cheap” has always been important, but to be honest budget considerations don’t seem to be the main issue (MR spending continues to grow slowly). Clients are insisting that studies are conducted at a breakneck pace and data quality is suffering badly.
  3. Insist that suppliers defend their methodologies. I’ve worked for corporate clients, but also many academic researchers. I have found that a key difference between them becomes apparent during results presentations. Corporate clients are impatient and want us to go as quickly as possible over the methodology section and get right into the results.  Academics are the opposite. They dwell on the methodology and I have noticed if you can get an academic comfortable with your methods it is rare that they will doubt your findings. Corporate researchers need to understand the importance of a sound methodology and care more about it.
  4. Be honest about the limitations of your methodology. We often like to say that everything you were ever taught about statistics assumed a random sample and we haven’t seen a study in at least 20 years that can credibly claim to have one.  That doesn’t mean a study without a random sample isn’t valuable, it just means that we have to think through the biases and errors it could contain and how that can be relevant to the results we present. I think every research report should have a page after the methodology summary that lists off the study’s limitations and potential implications to the conclusions we draw.
  5. Stop treating respondents so poorly. I believe this is a direct consequence of the movement from telephone to online data collection. Back in the heyday of telephone research, if you fielded a survey that was too long or was challenging for respondents to answer, it wasn’t long until you heard from your interviewers just how bad your questionnaire was. In an online world, this feedback never gets back to the questionnaire author – and we subsequently beat up our respondents pretty badly.  I have been involved in at least 2,000 studies and about 1 million respondents.  If each study averages 15 minutes that implies that people have spent about 28 and a half years filling out my surveys.  It is easy to lose respect for that – but let’s not forget the tremendous amount of time people spend on our surveys. In the end, this is a large threat to the research industry, as if people won’t respond, we have nothing to sell.
  6. Stop using technology for technology’s sake. Technology has greatly changed our business. But, it doesn’t supplant the basics of what we do or allow us to ignore the laws of statistics.  We still need to reach a representative sample of people, ask them intelligent questions, and interpret what it means for our clients.  Tech has made this much easier and much harder at the same time.  We often seem to do things because we can and not because we should.

The ultimate way to combat “poll denialism” in a “post-truth” world is to do better work, make better predictions, and deliver insightful interpretations. That is what we all strive to do, and it is more important than ever.

 

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning.” Their predictions were never meant to be construed as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of bias – something systematic must have affected all the predictions.

Pollsters will claim that the outcome was within the margin of error. But a “margin of error” defense is statistically suspect, as margins of error only apply to random or probability samples, and none of these polls can claim to have a random sample. FiveThirtyEight also had Clinton with 302 electoral votes, way beyond any reasonable error rate.

Regardless, the end result is going to fall barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters at all. All it means is that rather than their estimate being accurate 95% of the time, it was predicted to be accurate a little less often: between 80% and 90% of the time for most of these polls, by my calculations.

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems unlikely that the failure had anything to do with the questions themselves. In reality, most pollsters use very similar questions to gather voter preferences, and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself seems to be an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred, and there are many reasons it could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing, and simply not being able to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. This is when respondents tell you what they think you want to hear or what they believe is socially acceptable. A good example: if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. In this case, let’s take non-response bias out of the picture and suppose pollsters reached every single voter who actually showed up. If we presume “Trump” was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers had a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson’s support is going to end up being about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before, and they often had widely divergent results. But if you look closely, you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were some accurate polls out there that predicted a Trump victory, but the pollsters didn’t release them.

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult – and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

