Archive for the 'Statistics and probability' Category

What is p-hacking, and why do most researchers do it?

What sets good researchers apart is their ability to find a compelling story in a data set. It is what we do – we review various data points, combine that with our knowledge of a client’s business, and craft a story that leads to market insight.

Unfortunately, researchers can be too good at this. We have a running joke in our firm that we could probably hand a random data set to an analyst, and they could come up with a story that was every bit as convincing as the story they would develop from actual data.

Market researchers need to be wary of something well-known among academic researchers: a phenomenon known as “p-hacking.” It is a tendency to run and re-run analyses until we discover a statistically significant result.

A “p-value” is one of the most important statistics in research. It can be tricky to define precisely: it is the probability of seeing a result at least as large as the one you observed if, in reality, there were no difference between your test and control. It is not the probability that your hypothesis is true. We say a result is statistically significant when the p-value is less than 5%, meaning that if there were truly no effect, a result this extreme would turn up less than 5% of the time by chance.

Researchers widely use p-values to determine if a result is worth mentioning. In academia, most papers will not be published in a peer-reviewed journal if their p-value is not below 5%. Most quant analysts will not highlight a finding in market research if the p-value isn’t under 5%.
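
To make this concrete, here is a minimal sketch of where a p-value comes from in a simple test-versus-control comparison. The counts are invented for illustration, and the two-proportion z-test shown is just one common way to compute it:

```python
# Hypothetical test vs. control comparison -- counts are made up for illustration.
from math import sqrt
from scipy.stats import norm

test_yes, test_n = 230, 1000         # e.g., 23% top-box in the test cell
control_yes, control_n = 200, 1000   # e.g., 20% top-box in the control cell

p1, p2 = test_yes / test_n, control_yes / control_n
p_pooled = (test_yes + control_yes) / (test_n + control_n)

se = sqrt(p_pooled * (1 - p_pooled) * (1 / test_n + 1 / control_n))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # chance of a gap at least this large if there is truly no difference

print(f"z = {z:.2f}, p-value = {p_value:.3f}")  # "significant" at the 5% level only if p_value < 0.05
```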

P-hacking is what happens when the initial analysis doesn’t hit this threshold. Researchers will do things such as:

  • Change the variable. Our result doesn’t hit the threshold, so we search for a new measure where it does.
  • Redefine our variables. Using the full range of the response didn’t work, so we look at the top box, the top 2 boxes, the mean, etc., until the result we want pans out.
  • Change the population. It didn’t work with all respondents, but is there something among a subgroup, such as males, young respondents, or customers?
  • Run a table that does statistical testing of all subgroups compared to each other. (Ensuring that, on average, about one in 20 of these comparisons will come up “significant” purely by chance.)
  • Relax the threshold. The findings didn’t work at 5%, so we go ahead and report them anyway and say they are “directional.”

These tactics are all inappropriate and common. If you are a market researcher and reading this, I’d be surprised if you haven’t done all of these at some point in your career. I have done them all.

P-hacking happens for understandable reasons. Other information outside the study points towards a result we should be getting. Our clients pressure us to do it. And, with today’s sample sizes being so large, p-hacking is easy to do. Give me a random data set with 2,000 respondents, and I will guarantee that I can find statistically significant results and create a story around them that will wow your marketing team.
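
To illustrate the point, here is a toy simulation of my own (random numbers, not client data): give an analyst 2,000 respondents, 25 survey ratings that are pure noise, and a male/female split, and a handful of the subgroup comparisons will come up “significant” anyway.

```python
# Pure noise plus enough subgroup comparisons reliably produces "significant" findings.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_respondents, n_questions = 2000, 25

# 25 random survey "ratings" and a random male/female flag -- no real effects exist
ratings = rng.normal(loc=5, scale=2, size=(n_respondents, n_questions))
is_male = rng.integers(0, 2, size=n_respondents).astype(bool)

false_positives = 0
for q in range(n_questions):
    _, p = ttest_ind(ratings[is_male, q], ratings[~is_male, q])
    if p < 0.05:
        false_positives += 1

# Roughly 1 in 20 comparisons will clear the 5% threshold by chance alone
print(f"{false_positives} of {n_questions} comparisons significant at p < .05")
```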

I learned about p-hacking the hard way. Early in my career, I gathered an extensive data set for a college professor who was well-known and well-published within his field. He asked me to run some statistical analyses for him. When the ones he specified didn’t pan out, I started running the data on subgroups, changing how some variables were defined, etc., until I could present him with significant statistical output.

Fortunately, rather than chastise me, he went into teaching mode. He told me that just fishing around in the data set until you find something that works statistically is not how data analysis should be done. With a big data set and enough hooks in the water, you will always find some insight ready to bite.

Instead, he taught me that you always start with a hypothesis. If that hypothesis doesn’t pan out, first recognize that there is some learning in that. And it is okay to use that learning to adjust your hypothesis and test again, but your analysis has to be driven by the theory instead of the theory being driven by the data.

Good analysis is not about tinkering with data through trial and error. Too many researchers do this until something works, and they fail to report the many unproductive rabbit holes they dug along the way. Yet by the very definition of the 5% threshold, about one in 20 of those attempts will come up statistically significant purely by chance.

This sounds obscure, but I would say that it is the most common mistake I see marketing analysts make. Clients will press us to redefine variables to make a regression work better. We’ll use “top box” measures rather than the full variable range, with no real reason except that it makes our models fit. We relax the level of statistical significance. We p-hack.

In general, market researchers “fish in the data” a lot. I sometimes wonder how many lousy marketing decisions have been made over time due to p-hacking.

I used to sit next to an incredible statistician. As good a data analyst as he was, he was one of the worst questionnaire writers I have ever met. He didn’t seem to care too much, as he felt he could wrangle almost any data into submission with his talent. He was a world-class p-hacker.

I was the opposite. I’ve never been a great statistician. So, I’ve learned to compensate by developing design talent, as I quickly noticed that a well-written questionnaire makes data analysis easy and often obviates the need for complex statistics. I learned over time that a good questionnaire is an antidote to p-hacking. 

Start with hypotheses and think about alternative hypotheses when you design the project. And develop these before you even compose a questionnaire. Never believe that the story will magically appear in your data – instead, start with a range of potential stories and then, in your design, allow for data to support or refute each of them. Be balanced in how you go about it, but be directed as well.

It is vital to push for the time upfront to accomplish this, as the collapsed time frames for today’s projects are a key cause of p-hacking.

Of course, nobody wants to conduct a project and be unable to conclude anything. If that happens, you likely went wrong at the project’s design stage – you didn’t lay out objectives and potential hypotheses well. Resist the tendency to p-hack, be mindful of this issue, and design your studies well so you won’t be tempted to do it.

Pre-Election Polling and Baseball Share a Lot in Common

The goal of a pre-election poll is to predict which candidate will win an election and by how much. Pollsters work towards this goal by 1) obtaining a representative sample of respondents, 2) determining which candidate a respondent will vote for, and 3) predicting the chances each respondent will take the time to vote.

All three of these steps involve error. It is the first one, obtaining a representative sample of respondents, which has changed the most in the past decade or so.

It is the third characteristic that separates pre-election polling from other forms of polling and survey research. Statisticians must predict how likely each person they interview will be to vote. This is called their “Likely Voter Model.”

As I state in POLL-ARIZED, this is perhaps the most subjective part of the polling process. The biggest irony in polling is that it becomes an art when we hand the data to the scientists (methodologists) to apply a Likely Voter Model.

It is challenging to understand what pollsters do in their Likely Voter Models and perhaps even more challenging to explain.  

An example from baseball might provide a sense of what pollsters are trying to do with these models.

Suppose Mike Trout (arguably the most underappreciated sports megastar in history) is stepping up to the plate. Your job is to predict Trout’s chances of getting a hit. What is your best guess?

You could take a random guess between 0 and 100%. But, since that would give you a 1% chance of being correct, there must be a better way.

A helpful approach comes from a branch of statistics called Bayesian statistics. The idea is to start with a baseline (a “prior”) estimate of Trout’s hit probability based on past data and then update it as new information arrives.

For instance, we might see that so far this year, the overall major league batting average is .242. So, we might guess that Trout’s probability of getting a hit is 24%.

This is better than a random guess. But, we can do better, as Mike Trout is no ordinary hitter.

We might notice there is even better information out there. Year-to-date, Trout is batting .291. So, our guess for his chances might be 29%. Even better.

Or, we might see that Trout’s lifetime average is .301 and that he hit .333 last year. Since we believe in a concept called regression to the mean, that would lead us to think that his batting average should be better for the rest of the season than it is currently. So, we revise our estimate upward to 31%.

There is still more information we can use. The opposing pitcher is Justin Verlander. Verlander is a rare pitcher who has owned Trout in the past – Trout’s average is just .116 against Verlander. This causes us to revise our estimate downward a bit. Perhaps we take it to about 25%.

We can find even more information. The bases are loaded. Trout is a clutch hitter, and his career average with men on base is about 10 points higher than when the bases are empty. So, we move our estimate back up to about 28%.

But it is August. Trout has a history of batting well early and late in the season, but he tends to cool off during the dog days of summer. So, we decide to end this exercise and settle on a probability of 25%.

This sort of analysis could go on forever. Every bit of information we gather about Trout can conceivably help make a better prediction for his chances. Is it raining? What is the score? What did he have for breakfast? Is he in his home ballpark? Did he shave this morning? How has Verlander pitched so far in this game? What is his pitch count?
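
For readers who like to see the mechanics, here is a rough sketch of this kind of Bayesian updating using a beta-binomial model. The at-bat counts are hypothetical, chosen only to mirror the flavor of the example, not Trout’s actual statistics:

```python
# Encode a prior belief about a hitter's ability as a Beta distribution,
# then update it with new (hypothetical) at-bat data.
from scipy.stats import beta

# Prior: roughly a .300 hitter, held with confidence worth about 100 at-bats of evidence
prior_hits, prior_outs = 30, 70

# New information: a hypothetical 3-for-20 stretch against this pitcher
new_hits, new_at_bats = 3, 20

posterior = beta(prior_hits + new_hits, prior_outs + (new_at_bats - new_hits))
print(f"Updated estimate of hit probability: {posterior.mean():.3f}")  # lands between the prior and the new data
```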

There are pre-election polling analogies in this baseball example, particularly if you follow the probabilistic election models created by organizations like FiveThirtyEight and The Economist.

Just as we might use Trout’s lifetime average as our “prior” probability, these models will start with macro variables for their election predictions. They will look at the past implications of things like incumbency, approval ratings, past turnout, and economic indicators like inflation, unemployment, etc. In theory, these can adjust our assumptions of who will win the election before we even include polling data.

Of course, using Trout’s lifetime average or these macro variables in polling will only be helpful to the extent that the future behaves like the past. And therein lies the rub – overreliance on past experience makes these models inaccurate during dynamic times.

Part of why pollsters missed badly in 2020 is that unique things were going on – a global pandemic, changed methods of voting, increased turnout, etc. In baseball, perhaps this is a year with a juiced baseball, or Trout is dealing with an injury.

The point is that while unprecedented things are unpredictable, they happen with predictable regularity. There is always something unique about an election cycle or a Mike Trout at bat.

The most common question I am getting from readers of POLL-ARIZED is, “Will the pollsters get it right in 2024?” My answer is that since pollsters are applying past assumptions in their models, they will get it right to the extent that the world in 2024 looks like the world did in 2020, and I would not put my own money on it.

I make a point in POLL-ARIZED that pollsters’ models have become too complex. Adding variables never makes a model fit past data worse, but in practice it has made these models uninterpretable (and can easily make them worse at prediction). Pollsters include so many variables in their likely voter models that many of their adjustments cancel each other out. They are left with a model with no discernible underlying theory.

If you look closely, we started with a probability of 24% for Trout. Even after looking at a lot of other information and making reasonable adjustments, we still ended up with a prediction of 25%. The election models are the same way. They include so many variables that they can cancel out each other’s effects and end up with a prediction that looks much like the raw data did before the methodologists applied their wizardry.

This effort would be better spent on getting better input for the models by investing in generating the trust needed to increase the response rates we get to our surveys and polls. Improving the quality of our data input will increase the predictive quality of the polls more than inventing more complicated ways to weight the data.

Of course, in the end, one candidate wins, and the other loses, and Mike Trout either gets a hit, or he doesn’t, so the actual probability moves to 0% or 100%. Trout cannot get 25% of a hit, and a candidate cannot win 79% of an election.

As I write this, I looked up the last time Trout faced Verlander. It turns out Verlander struck him out!

The Insight that Insights Technology is Missing

The market research insights industry has long been characterized by a resistance to change. This likely results from the academic nature of what we do. We don’t like to adopt new ways of doing things until they have been proven and studied.

I would posit that the insights industry has not seen much change since the transition from telephone to online research occurred in the early 2000s. And even that transition created discord within the industry, with many traditional firms resistant to moving on from telephone studies because online data collection had not been thoroughly studied and vetted.

In the past few years, the insights industry has seen an influx of capital, mostly from private equity and venture capital firms. The conditions for this cash infusion have been ripe: a strong and growing demand for insights, a conservative industry that is slow to adapt, and new technologies arising that automate many parts of a research project have all come together simultaneously.

Investing organizations see this enormous business opportunity. Research revenues are growing, and new technologies are lowering costs and shortening project timeframes. It is a combustible business situation that needs a capital accelerant.

Old school researchers, such as myself, are becoming nervous. We worry that automation will harm our businesses and that the trend toward DIY projects will result in poor-quality studies. Technology is threatening the business models under which we operate.

The trends toward investment in automation in the insights industry are clear. Insights professionals need to embrace this and not fight it.

However, although the movement toward automation will result in faster and cheaper studies, this investment ignores the threats that declining data quality creates. In the long run, this automation will accelerate the decline in data quality rather than improve it.

It is great that we are finding ways to automate time-consuming research tasks, such as questionnaire authoring, sampling, weighting, and reporting. This frees up researchers to concentrate on drawing insights out of the data. But, we can apply all the automation in the world to the process, yet if we do not do something about data quality, it will not increase the value clients receive.

I argue in POLL-ARIZED that the elephant in the research room is the fact that very few people want to take our surveys anymore. When I began in this industry, I routinely fielded telephone projects with 70-80% response rates. Currently, telephone and online response rates are between 3% and 4% for most projects.

Response rates are not everything. You can make a compelling argument that they do not matter at all. There is no problem as long as the 3-4% response we get is representative. I would rather have a representative 3% answer a study than a biased 50%.

But, the fundamental problem is that this 3-4% is not representative. Only about 10% of the US population is currently willing to take surveys. What is happening is that this same 10% is being surveyed repeatedly. In the most recent project Crux fielded, respondents had taken an average of 8 surveys in the past two weeks. So, we have about 10% of the population taking surveys every other day, and our challenge is to make them represent the rest of the population.

Automate all you want, but the data that are the backbone of the insights we are producing quickly and cheaply are of historically low quality.

The new investment flooding into research technology will contribute to this problem. More studies will be done that are poorly designed, with long, tortuous questionnaires. Many more surveys will be conducted, fewer people will be willing to take them, and response rates will continue to fall.

There are plenty of methodologists working on these problems. But, for the most part, they are working on new ways to weight the data we can obtain rather than on ways to compel more response. They are improving data quality, but only slightly, and the insights field continues to ignore the most fundamental problem we have: people do not want to take our surveys.

For the long-term health of our field, that is where the investment should go.

In POLL-ARIZED, I list ten potential solutions to this problem. I am not optimistic that any of them will be able to stem the trend toward poor data quality. But, I am continually frustrated that our industry has not come together to work towards expanding respondent trust and the base of people willing to take part in our projects.

The trend towards research technology and automation is inevitable. It will be profitable. But, unless we address data quality issues, it will ultimately hasten the decline of this field.

POLL-ARIZED available on May 10

I’m excited to announce that my book, POLL-ARIZED, will be available on May 10.
 
After the last two presidential elections, I was fearful my clients would ask a question I didn’t know how to answer: “If pollsters can’t predict something as simple as an election, why should I believe my market research surveys are accurate?”
 
POLL-ARIZED results from a year-long rabbit hole that question led me down! In the process, I learned a lot about why polls matter, how today’s pollsters are struggling, and what the insights industry should do to improve data quality.
 
I am looking for a few more people to read an advance copy of the book and write an Amazon review on May 10. If you are interested, please send me a message at poll-arized@cruxresearch.com.

Let’s Appreciate Statisticians Who Make Data Understandable

Statistical analyses are amazing, underrated tools. All scientific fields depend on discoveries in statistics to make inferences and draw conclusions. Without statistics, advances in engineering, medicine, and science that have greatly improved the quality of life would not have been possible. Statistics is the Rodney Dangerfield of academic subjects – it never gets the respect it deserves.

Statistics is central to market research and polling. We use statistics to describe our findings and understand the relationships between variables in our data sets. Statistics are the most important tools we have as researchers.

However, we often misuse these tools. I firmly believe that pollsters and market researchers overdo it with statistics. Basic statistical analyses are easy to understand; complicated ones are not. Researchers like to get into complex statistics because it lends an air of expertise to what we do.

Unfortunately, most sophisticated techniques are impossible to convey to “normal” people who may not have a statistical background, and this tends to describe the decision-makers we support.

I learned long ago that when working with a dataset, any result that will be meaningful will likely be uncovered by using simple descriptive statistics and cross-tabulations. Multivariate techniques can tease out more subtle relationships in the data. Still, the clients (primarily marketers) we work with are not looking for subtleties – they want some conclusions that leap off the page from the data.

If a result is so subtle that it needs complicated statistics to find, it is likely not a large enough result to be acted upon by a client.

Because of this, we tend to use multivariate techniques to confirm what we see with more straightforward methods. Not always – as there are certainly times when the client objectives call for sophisticated techniques. But, as researchers, our default should be to use the most straightforward designs possible.

I always admire researchers who make complicated things understandable. That should be the goal of statistical analyses. George Terhanian of Electric Insights has developed a way to use sophisticated statistical techniques to answer some of the most fundamental questions a marketer will ask.

In his article “Hit? Stand? Double? Master ‘likely effects’ to make the right call,” George describes his revolutionary process. It is sophisticated behind the scenes, but I like the simplicity of the questions it can address.

He has created a simulation technique that makes sense of complicated data sets. You may measure hundreds of things on a survey and have an excellent profile of the attitudes and behaviors of your customer base. But, where should you focus your investments? This technique demonstrates the likely effects of changes.

As marketers, we cannot directly increase sales. But we can establish and influence attitudes and behaviors that result in sales. Our problem is often to identify which of these attitudes and behaviors to address.

For instance, if I can convince my customer base that my product is environmentally responsible, how many of them can I count on to buy more of my product? The type of simulator described in this article can answer this question, and as a marketer, I can then weigh if the investment necessary is worth the probable payoff.

George created a simulator on some data from a recent Crux Poll. Our poll showed that 17% of Americans trust pollsters. George’s analysis shows that trust in pollsters is directly related to their performance in predicting elections.

Modeling the Crux Poll data showed that if all Americans “strongly agreed” that presidential election polls do a good job of predicting who will win, trust in pollsters/polling organizations would increase by 44 million adults. If Americans feel “extremely confident” that pollsters will accurately predict the 2024 election, trust in pollsters will increase by an additional 40 million adults.

If we are worried that pollsters are untrusted, this suggests that improving the quality of our predictions should address the issue.

Putting research findings in these sorts of terms is what gets our clients’ attention. 

Marketers need this type of quantification because it can plug right into financial plans. Researchers often hear that the reports we provide are not “actionable” enough. There is not much more actionable than showing how many customers would be expected to change their behavior if we successfully invest in a marketing campaign to change an attitude.

Successful marketing is all about putting the probabilities in your favor. Nothing is certain, but as a marketer, your job is to decide where best to place your resources (money and time). This type of modeling is a step in the right direction for market researchers.

Oops, the polls did it again

Many people had trouble sleeping last night wondering if their candidate was going to be President. I couldn’t sleep because as the night wore on it was becoming clear that this wasn’t going to be a good night for the polls.

Four years ago on the day after the election I wrote about the “epic fail” of the 2016 polls. I couldn’t sleep last night because I realized I was going to have to write another post about another polling failure. While the final vote totals may not be in for some time, it is clear that the 2020 polls are going to be off on the national vote even more than the 2016 polls were.

Yesterday, on election day I received an email from a fellow market researcher and business owner. We are involved in a project together and he was lamenting how poor the data quality has been in his studies recently and was wondering if we were having the same problems.

In 2014 we wrote a blog post that cautioned our clients that we were detecting poor quality interviews that needed to be discarded about 10% of the time. We were having to throw away about 1 in 10 of the interviews we collected.

Six years later, that percentage has moved to between 33% and 45%, and we tend to be conservative about the interviews we toss. It is fair to say that for most market research studies today, between a third and a half of the interviews being collected are, for lack of a better term, junk.

It has gotten so bad that new firms have sprung up to serve as a go-between connecting sample providers and online questionnaires in order to protect against junk interviews. They protect against bots, survey farms, duplicate interviews, etc. The fact that these firms, and terms like “survey farms,” even exist should give researchers pause regarding data quality.

When I started in market research in the late ’80s and early ’90s, we had a spreadsheet program that was used to help us cost out projects. One parameter in this spreadsheet was “refusal rate” – the percentage of respondents who would outright refuse to take part in a study. While the refusal rate varied by study, the beginning assumption in this program was 40%, meaning that on average we expected respondents to cooperate 60% of the time.

According to Pew and AAPOR in 2018 the cooperation rate for telephone surveys was 6% and falling rapidly.

Cooperation rates in online surveys are much harder to calculate in a standardized way, but most estimates I have seen and my own experience suggest that typical cooperation rates are about 5%. That means for a 1,000-respondent study, at least 20,000 emails are sent, which is about four times the population of the town I live in.

This is all background to try to explain why the 2020 polls appear to be headed to a historic failure. Election polls are the public face of the market research industry. Relative to most research projects, they are very simple. The problems pollsters have faced in the last few cycles are emblematic of something those working in research know but rarely like to discuss: the quality of data collected for research and polls has been declining, and that should be alarming to researchers.

I could go on about the causes of this. We’ve tortured our respondents for a long time. Despite claims to the contrary, we haven’t been able to generate anything close to a probability sample in years. Our methodologists have gotten cocky and feel like they can weight any sampling anomalies away. Clients are forcing us to conduct projects on timelines that make it impossible to guard against poor quality data. We focus on sampling error and ignore more consequential errors. The panels we use have become inbred and gather the same respondents across sources. Suppliers are happy to cash the check and move on to the next project.

This is the research conundrum of our times: in a world where we collect more data on people’s behavior and attitudes than ever before, the quality of the insights we glean from these data is in decline.

Post-2016, the polling industry brain trust rationalized and claimed that the polls actually did a good job, convened some conferences to discuss the polls, and made modest methodological changes. Almost all of these changes related to sampling and weighting. But, as it appears that the 2020 polling miss will be way beyond what can be explained by sampling (last night I remarked to my wife that “I bet the p-value of this being due to sampling is about 1 in 1,000”), I feel that pollsters have addressed the wrong problem.

None of the changes pollsters made addressed the long-term problems researchers face with data quality. When you have a response rate of 5% and up to half of those are interviews you need to throw away, errors that can arise are orders of magnitude greater than the errors that are generated by sampling and weighting mistakes.
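
A back-of-the-envelope comparison makes this concrete. The response rate and the gap between responders and non-responders below are hypothetical, but they show how nonresponse bias can exceed the quoted margin of error and, unlike sampling error, does not shrink as the sample grows:

```python
# Sampling error vs. nonresponse bias, with hypothetical numbers.
from math import sqrt

n = 1000
sampling_moe = 1.96 * sqrt(0.5 * 0.5 / n)  # the classic +/- ~3.1 point margin of error

response_rate = 0.05          # roughly today's cooperation rates
support_responders = 0.52     # hypothetical candidate support among people who take surveys
support_nonresponders = 0.46  # hypothetical support among the 95% who don't

true_support = (response_rate * support_responders
                + (1 - response_rate) * support_nonresponders)
nonresponse_bias = support_responders - true_support  # error before any sampling noise

print(f"Sampling margin of error: +/- {sampling_moe * 100:.1f} points")
print(f"Nonresponse bias:            {nonresponse_bias * 100:.1f} points")
```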

I don’t want to sound like I have the answers. Just a few days ago, I posted that I thought that, on balance, there were more reasons to believe the polls would do a good job this time than to believe they would fail. When I look through my list of potential reasons the polls might miss, nothing leaps out at me as an obvious cause, so perhaps the problem is multi-faceted.

What I do know is the market research industry has not done enough to address data quality issues. And every four years the polls seem to bring that into full view.

Will the polls be right this time?

The 2016 election was damaging to the market research industry. The popular perception has been that in 2016 the pollsters missed the mark and miscalled the winner. In reality, the 2016 polls were largely predictive of the national popular vote. But 2016 was widely seen by non-researchers as disastrous. Pollsters and market researchers have a lot riding on the perceived accuracy of the 2020 polls.

The 2016 polls did a good job of predicting the national vote total, but in a large majority of cases the final national polls were off in the direction of overpredicting the vote for Clinton and underpredicting the vote for Trump. That is pretty much a textbook definition of bias. Before the books are closed on the 2016 pollsters’ performance, it is important to note that the 2012 polls were off even further, mostly in the direction of overpredicting the vote for Romney and underpredicting the vote for Obama. The “bias,” although small, has swung back and forth between parties.

Election Day 2020 is in a few days and we may not know the final results for a while. It won’t be possible to truly know how the polls did for some weeks or months.

That said, there are reasons to believe that the 2020 polls will do an excellent job of predicting voter behavior and there are reasons to believe they may miss the mark.  

There are specific reasons why it is reasonable to expect that the 2020 polls will be accurate. So, what is different in 2020? 

  • There have been fewer undecided voters at all stages of the process. Most voters have had their minds made up well in advance of election Tuesday. This makes things simpler from a pollster’s perspective. A polarized and engaged electorate is one whose behavior is predictable. Figuring out how to partition undecided voters moves polling more in a direction of “art” than “science.”
  • Perhaps because of this, polls have been remarkably stable for months. In 2016, there was movement in the polls throughout and particularly over the last two weeks of the campaign. This time, the polls look about like they did weeks and even months ago.
  • Turnout will be very high. The art in polling is in predicting who will turn out and a high turnout election is much easier to forecast than a low turnout election.
  • There has been considerable early voting. There is always less error in asking about what someone has recently done than what they intend to do in the future. Later polls could ask many respondents how they voted instead of how they intended to vote.
  • There have been more polls this time. As our sample size of polls increases so does the accuracy. Of course, there are also more bad polls out there this cycle as well.
  • There have been more and better polls in the swing states this time. The true problem pollsters had in 2016 was with state-level polls. There was less attention paid to them, and because the national pollsters and media didn’t invest much in them, the state-level polling is where it all went wrong. This time, there has been more investment in swing-state polling.
  • The media invested more in polls this time. A hidden secret in polling is that election polls rarely make money for the pollster. This keeps many excellent research organizations from getting involved in them or dedicating resources to them. The ones that do tend to do so solely for reputational reasons. An increased investment this time has helped to get more researchers involved in election polling.
  • Response rates are up slightly. 2020 is the first year where we have seen the long-term decline in survey response rates stabilize and even kick up a little. This is likely a minor factor in the success of the 2020 polls, but it is in the right direction.
  • The race isn’t as close as it was in 2016. This one might only be appreciated by statisticians. Since the variability of a proportion is maximized in a 50/50 split, the further a race is from even, the more precise a poll of a given size will be (see the short calculation after this list). This is another small factor in the direction of the polls being accurate in 2020.
  • There has not been late breaking news that could influence voter behavior. In 2016, the FBI director’s decision to announce a probe into Clinton’s emails came late in the campaign. There haven’t been any similar bombshells this time.
  • Pollsters started setting quotas and weighting on education. In the past, pollsters would balance samples on characteristics known to correlate highly with voting behavior – characteristics like age, gender, political party affiliation, race/ethnicity, and past voting behavior. In 2016, pollsters learned the hard way that educational attainment had become an additional characteristic to consider when crafting samples because voter preferences vary by education level. The good polls fixed that this go round.
  • In a similar vein, there has been tighter scrutiny of polling methodology. While the media can still be cavalier about digging into methodology, this time they were more likely to insist that pollsters outline their methods. This is the first time I can remember seeing news stories where pollsters were asked questions about methodology.
  • The notion that there are Trump supporters who intentionally lie to pollsters has largely been disproven by studies from very credible sources, such as Yale and Pew. Much more relevant is the pollster’s ability to predict turnout from both sides.
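
As a quick aside on the 50/50 point above, here is a small calculation of my own (a generic 1,000-person poll, not any specific survey) showing that the standard error of a poll estimate is largest in an even race and shrinks as the race gets more lopsided:

```python
# The standard error of a proportion, sqrt(p*(1-p)/n), is largest at p = 0.50.
from math import sqrt

n = 1000  # poll sample size
for p in (0.50, 0.54, 0.60):
    se = sqrt(p * (1 - p) / n)
    print(f"p = {p:.2f}: standard error = {se:.4f} (~+/- {1.96 * se * 100:.1f} point margin)")
```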

There are a few things going on that give the polls some potential to lay an egg.

  • The election will be decided by a small number of swing states. Swing state polls are not as accurate and are often funded by local media and universities that don’t have the funding or the expertise to do them correctly. The polls are close and less stable in these states. There is some indication that swing state polls have been tightening, and Biden’s lead in many of them isn’t much different than Clinton’s lead was in 2016.
  • Biden may be making the same mistake Clinton made. This is a political and not a research-related reason, but in 2016 Clinton failed to aggressively campaign in the key states late in the campaign while Trump went all in. History could be repeating itself. Field work for final polls is largely over now, so the polls will not reflect things that happen the last few days.
  • If there is a wild-card that will affect polling accuracy in 2020, it is likely to center around how people are voting. Pollsters have been predicting election day voting for decades. In this cycle votes have been coming in for weeks and the methods and rules around early voting vary widely by state. Pollsters just don’t have past experience with early voting.
  • There is really no way for pollsters to account for potential disqualifications for mail-in votes (improper signatures, late receipts, legal challenges, etc.) that may skew to one candidate or another.
  • Similarly, any systematic voter suppression would likely cause the polls to underpredict Trump. These voters are available to poll, but may not be able to cast a valid vote.
  • There has been little mention of third-party candidates in polling results. The Libertarian candidate is on the ballot in all 50 states. The Green Party candidate is on the ballot in 31 states. Other parties have candidates on the ballot in some states but not others. These candidates aren’t expected to garner a lot of votes, but in a close election even a few percentage points could matter to the results. I have seen national polls from reputable organizations where they weren’t included.
  • While there is little credible data supporting the idea that there are “shy” Trump voters intentionally lying to pollsters, there still might be a social desirability bias that would undercount Trump’s support. That bias could be larger than it was in 2016, and it is still likely in the direction of underpredicting Trump’s vote count.
  • Polls (and research surveys) tend to underrepresent rural areas. Folks in rural areas are less likely to be in online panels and to cooperate on surveys. Few pollsters take this into account. (I have never seen a corporate research client correcting for this, and it has been a pet peeve of mine for years.) This is a sample coverage issue that will likely undercount the Trump vote.
  • Sampling has continued to get harder. Cell phone penetration has continued to grow, online panel quality has fallen, and our best option (ABS sampling) is still far from random and so expensive it is beyond the reach of most polls.
  • “Herding” is a rarely discussed, but very real polling problem. Herding refers to pollsters who conduct a poll that doesn’t conform to what other polls are finding. These polls tend to get scrutinized and reweighted until they fit to expectations, or even worse, buried and never released. Think about it – if you are a respected polling organization that conducted a recent poll that showed Trump would win the popular vote, you’d review this poll intensely before releasing it and you might choose not to release it at all because it might put your firm’s reputation at risk to release a poll that looks different than the others. The only polls I have seen that appear to be out of range are ones from smaller organizations who are likely willing to run the risk of being viewed as predicting against the tide or who clearly have a political bias to them.

Once the dust settles, we will compose a post that analyzes how the 2020 polls did. For now, we feel there are more credible reasons to believe the polls will be seen as predictive than to feel that we are on the edge of a polling mistake. From a researcher’s standpoint, the biggest worry is that the polls will indeed be accurate but won’t match the vote totals because of technicalities in vote counting and legal challenges. That would reflect unfairly on the polling and research industries.

Researchers should be mindful of “regression toward the mean”

There is a concept in statistics known as regression toward the mean that is important for researchers to consider as we look at how the COVID-19 pandemic might change future consumer behavior. This concept is as challenging to understand as it is interesting.

Regression toward the mean implies that an extreme example in a data set tends to be followed by an example that is less extreme and closer to the “average” value of the population. A common example: if two parents who are both above average in height have a child, that child is more likely to be closer to average height than to the “extreme” height of the parents.

This is an important concept to keep in mind in the design of experiments and when analyzing market research data. I did a study once where we interviewed the “best” customers of a quick service restaurant, defined as those that had visited the restaurant 10 or more times in the past month. We gave each of them a coupon and interviewed them a month later to determine the effect of the coupon. We found that they actually went to the restaurant less often the month after receiving the coupon than the month before.

It would have been easy to conclude that the coupon caused customers to visit less frequently and that there was something wrong with it (which is what we initially thought). What really happened was regression toward the mean. Surveying customers who had visited a large number of times in one month made it likely that these same customers would visit a more “average” amount the following month whether they had a coupon or not. This was a poor research design because we couldn’t really assess the impact of the coupon, which was our goal.

Personally, I’ve always had a hard time understanding and explaining regression toward the mean because the concept seems to be counter to another concept known as “independent trials”. You have a 50% chance of flipping a fair coin and having it come up heads regardless of what has happened in previous flips. You can’t guess whether the roulette wheel will come up red or black based on what has happened in previous spins. So, why would we expect a restaurant’s best customers to visit less in the future?

The answer is that we began with a selected, extreme subgroup rather than a random cross-section. A customer’s visits in any single month reflect both an underlying habit and some luck, and screening on an extreme month means we selected partly on luck that won’t repeat. The most frequent customers are not “average” and have room to regress toward the mean in the future. Had we surveyed customers across the full range of patronage, there would have been no selection effect to regress away, and we could have done a better job of isolating the effect of the coupon.
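
A quick simulation of my own (invented numbers, not the actual restaurant data) shows the mechanism. Each simulated customer has a stable underlying visit rate plus month-to-month noise; screening on the “best” customers in month one means their month-two average falls, even though nothing about their behavior changed:

```python
# Regression toward the mean with a "best customers" screen.
import numpy as np

rng = np.random.default_rng(0)
n_customers = 100_000

underlying_rate = rng.gamma(shape=2.0, scale=2.0, size=n_customers)  # each customer's true visits/month
month1 = rng.poisson(underlying_rate)
month2 = rng.poisson(underlying_rate)  # same habits, new month, no coupon effect simulated

best = month1 >= 10  # the "visited 10+ times last month" screen from the study
print(f"Best customers, month 1: {month1[best].mean():.1f} visits")
print(f"Best customers, month 2: {month2[best].mean():.1f} visits")  # lower, purely from regression to the mean
```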

Here is another example of regression toward the mean. Suppose the Buffalo Bills quarterback, Josh Allen, has a monster game when they play the New England Patriots. Allen, who has been averaging about 220 yards passing per game in his career goes off and burns the Patriots for 450 yards. After we are done celebrating and breaking tables in western NY, what would be our best prediction for the yards Allen will throw for the second time the Bills play the Patriots?

Well, you could say the best prediction is 450 yards as that is what he did the first time. But, regression toward the mean would imply that he’s more likely to throw close to his historic average of 220 yards the second time around. So, when he throws for 220 yards the second game it is important to not give undue credit to Bill Belichick for figuring out how to stop Allen.

Here is another sports example. I have played (poorly) in a fantasy baseball league for almost 30 years. In 2004, Derek Jeter entered the season as a career .317 hitter. After the first 100 games or so, he was hitting under .200. The person in my league who owned him was frustrated, so I traded for him. Jeter went on to hit well over .300 the rest of the season. This was predictable because there wasn’t any underlying reason (like injury) for his slump. His underlying average was much better than his current performance, and because of regression toward the mean it was likely he would have a great second half of the season, which he did.

There are interesting HR examples of regression toward the mean. Say you have an employee that does a stellar job on an assignment – over and above what she normally does. You praise her and give her a bonus. Then, you notice that on the next assignment she doesn’t perform on the same level. It would be easy to conclude that the praise and bonus caused the poor performance when in reality her performance was just regressing back toward the mean. I know sales managers who have had this exact problem – they reward their highest performers with elaborate bonuses and trips and then notice that the following year they don’t perform as well. They then conclude that their incentives aren’t working.

The concept is hard at work in other settings. Mutual funds that outperform the market tend to fall back in line the next year. You tend to feel better the day after you go to the doctor. Companies profiled in “Good to Great” tend to have hard times later on.

Regression toward the mean is important to consider when designing sampling plans. If you are sampling an extreme portion of a population it can be a relevant consideration. Sample size is also important. When you have just a few cases of something, mathematically an extreme response can skew your mean.

The issue to be wary of is that when we fail to consider regression toward the mean, we tend to overstate the importance of the correlation between two things. We think our mutual fund manager is a genius when he just got lucky, that our coupon isn’t working, or that Josh Allen is becoming the next Drew Brees. All of these could be true, but be careful in how you interpret data that come from extreme subgroups or small samples.

How does this relate to COVID? Well, at the moment, I’d say we are still in an “inflated expectations” portion of a hype curve when we think of what permanent changes may take place resulting from the pandemic. There are a lot of examples. We hear that commercial real estate is dead because businesses will keep employees working from home. Higher education will move entirely online. In-person qualitative market research will never happen again. Business travel is gone forever. We will never again work in an office setting. Shaking hands is a thing of the past.

I’m not saying there won’t be a new normal that results from COVID, but if we believe in regression toward the mean and the hype curve, we’d predict that the future will look more like the past than like the extreme version of the present currently being portrayed. The “mean” being regressed to has likely shifted, but not as much as the current, extreme situation implies.

