What are the best COVID-19 polls?

The COVID-19 crisis is affecting all types of organizations. Many, including some of our clients, are commissioning private polls to help predict the specific impact of the pandemic on their business. Fortunately, there are a number of well-regarded research and polling organizations conducting polls that are publicly released. Unfortunately, there are also disreputable polls out there, and it can be challenging to sort the good from the bad.

We’ve been closely watching the COVID-19 polls and have found some that stand out from the rest. We felt it would be a good idea to list them here to save you some time as you look for polling information.

  • First, although it is not a poll, a useful site to look at is the Institute for Health Metrics and Evaluation at the University of Washington. This site contains the results of a model projecting numbers of deaths from COVID-19, beds needed versus hospital capacity, etc. This is one of the most credible models out there, and the one that seems to be cited the most in the media and by the federal government.
  • Johns Hopkins University maintains a coronavirus tracking center which is the definitive place to go to track cases, hospitalizations, and deaths from COVID-19. 

Below is a list of opinion polls we’ve found most interesting. There are a lot of polls out there. The ones listed below are from trusted organizations and would be a good place to start your search. There are many polls available that concentrate on things like the public’s opinion of the government’s handling of the crisis. The polls below go a bit deeper and are far more interesting in our view. There are likely other good polls out there, but these are the best ones we have found thus far.

  • The Harris Poll COVID-19 tracker. This is perhaps the most comprehensive COVID-19 polling we have discovered, and it tracks back to early March. If you have time for just one polling site, this is probably the one to check out.
  • Pew. Pew is a widely respected organization that has conducted many polls on COVID-19 topics.
  • The COVID Impact Survey. This is an independent, non-governmental survey being conducted by NORC along with some respected foundations. 
  • Dynata. Dynata has an ongoing COVID-19 tracking poll that is interesting because it spans multiple countries. Dynata is also producing a “symptom map” based on its worldwide polling. This is interesting as it shows how symptoms are trending around the world, in the US by state, and even in NYC by neighborhood. However, we feel that a Google Trends search would provide better data than survey research on symptoms.
  • IPSOS. IPSOS is also conducting worldwide polls.
  • Simpson Scarborough. This poll is specific to higher education and the implications of COVID-19 on college students. If you work in higher education or have a college-aged child, you are likely to find this one interesting.
  • University of Massachusetts Amherst. This one is different and interesting. It shows the results of an ongoing survey of infectious disease experts, containing their predictions for the impacts of the disease. It is updated weekly. FiveThirtyEight is summarizing this work, and it is probably easier to read their summaries than to go to the original source. I must say, though, I have been watching this poll carefully, and the experts haven’t been all that accurate in their predictions, consistently missing on the high side.

There are undoubtedly many more good polls out there. Those mentioned above are from non-partisan, trusted organizations.

How COVID-19 may change Market Research

Business life is changing as COVID-19 spreads in the US and the world. In the market research and insights field there will be both short-term and long-term effects. It is important that clients and suppliers begin preparing for them.

This has been a challenging post to write. First, in the context of what many people are going through in their personal and business lives as a result of this disruption, writing about what might happen to one small sector of the business world can come across as uncaring and tone-deaf, which is not the intention. Second, this is a quickly changing situation and this post has been rewritten a number of times in the past week. I have a feeling it may not age well.

Nonetheless, market research will be highly impacted by this situation. Below are some things we think will likely happen to the market research industry.

  • An upcoming recession will hit the MR industry hard. Market research is not an investment that typically pays off quickly. Companies that are forced to pare back will cut their research spending and likely their staffs.
  • Cuts will affect clients more than suppliers. In previous recessions, clients have cut MR staff and outsourced work to suppliers. This is an opportunity for suppliers that know their clients’ businesses well and can step up to help.
  • Unlike in many other industries, it is the large suppliers that are most at risk of losing work. Publicly-held research suppliers will be under even more intense pressure from their investors than usual. There will most certainly be cost cutting at these firms, and if concerns over the virus persist, it will lead to layoffs.
  • The smallest suppliers could face an existential risk. Many independent contractors and small firms are dependent on one or two clients for the bulk of their revenue. If those clients are in highly affected sectors, these small suppliers will be at risk of going out of business.
  • Smallish to mid-sized suppliers may emerge stronger. Clients are going to be under cost pressures due to a receding economy, and smaller research suppliers tend to be less expensive. Smaller research firms did well post-9/11 and during the recession of 2008-09 because clients moved work to them from higher-priced larger firms. Smaller research firms would be wise to build tight relationships now so that when the storm over the virus abates, they will have won their clients’ trust for future projects.
  • New small firms will emerge as larger firms cut staff and create refugees who will launch new companies.

Those are all items that might pertain to any sort of sudden business downturn. There are also some things that we think will happen that are specific to the COVID-19 situation:

  • Market research conferences will never be the same. Conferences are going to have difficulty drawing speakers and attendees. Down the line, conferences will be smaller and more targeted and there will be more virtual conferences and training sessions scheduled. At a minimum, companies will send fewer people to research conferences.
  • This will greatly affect MR trade associations as these conferences are important revenue sources for them. They will rethink their missions and revenue models, and will become less dependent on their signature events. The associations will have more frequent, smaller, more targeted online events. The days of the large, comprehensive research conference may be over.
  • Business travel will not return to its previous level. There will be fewer in-person meetings between clients and suppliers and those that are held will have fewer participants. Video conferencing will become an even more important way to reach clients.
  • Clients and suppliers will allow much more “work from home.” It may become the norm that employees are only expected to be in the office for key meetings. The situation with COVID-19 will give companies that don’t have much experience with remote work the opportunity to see its value. When the virus is under control, they will embrace telecommuting, and this crisis will kick-start an already existing movement toward allowing more employees to work from home. The amount of office space needed will shrink.
  • Research companies will review and revise their sick-leave policies, and there will be pressure to make these policies more generous.
  • Companies that did the right thing during the crisis will be rewarded with employee loyalty. Employees will become more attached to, and appreciative of, employers that showed flexibility, did what they could to maintain payroll, and expressed genuine concern for their employees.

Probably the biggest change we will see in market research projects is to qualitative research.

  • While there will always be great value in traditional, in-person focus groups, the situation around COVID-19 is going to cause online qualitative to become the standard approach. We are at a time where the technologies available for online qualitative are well-developed, yet clients and suppliers have clung to traditional methods. To date, the technology has been ahead of the demand. Companies will be forced by travel restrictions to embrace online methods and this will be at the expense of traditional groups. This is an excellent time to be in the online qualitative technology business. It is not such a great time to be in the focus group facility management business.
  • Independent moderators who work exclusively with traditional groups are going to be in trouble, and not just in the short term. Many of these individuals will retire or leave research; others will necessarily adapt to online methods. Of course, there will continue to be independent moderators, but we are predicting the demand for in-person groups will be permanently affected, and this portion of the industry will significantly shrink.
  • There is a risk that by not commissioning as much in-person qualitative, marketers may become further removed from direct human interaction with their customer base. This is a very real concern. We wouldn’t be in market research if we didn’t have an affinity for data and algorithms, but qualitative research is what keeps all of our efforts grounded. I’d caution clients to think carefully before removing all in-person interaction from their research plans.

What will happen to quantitative research? In the short-run, most studies will continue. Respondents are home, have free time, and thus far have shown they are willing to take part in studies. Some projects, typically in highly affected industries like travel and entertainment, are being postponed or canceled. All current data sets need to be viewed with a careful eye as the tumult around the virus can affect results. For instance, we conduct a lot of research with young respondents, and we know that their parents are likely nearby when they are taking our surveys, which can influence our findings on some subjects.

Particular care needs to be taken with ongoing tracking studies. It makes sense for many trackers to add questions to gauge how the situation has affected the brand being tracked.

But, in the longer term, we do not expect much change in quantitative research methods to result directly from this situation. If anything, there will be a greater need to understand consumers.

Tough times for sure. It has been heartening to see how our industry has reacted. Research panel and technology providers have reached out to help keep projects afloat. We’ve had subcontractors tell us we can delay payments if we need to. Calls with clients have become more “human” as we hear their kids and pets in the background and see the stresses they are facing. Respondents have continued to fill out our surveys.

There is a lot of uncertainty right now. At its core, market research is a way to reduce uncertainty for decision makers by making the future more predictable, so we are needed now more than ever. Research will adapt as it always does, and I believe in the long-run it may become even more valued as a result of this crisis.

The myth of the random sample

Sampling is at the heart of market research. We ask a few people questions and then assume everyone else would have answered the same way.

Sampling works in all types of contexts. Your doctor doesn’t need to test all of your blood to determine your cholesterol level – a few ounces will do. Chefs taste a spoonful of their creations and then assume the rest of the pot will taste the same. And, we can predict an election by interviewing a fairly small number of people.

The mathematical procedures that are applied to samples that enable us to project to a broader population all assume that we have a random sample. Or, as I tell research analysts: everything they taught you in statistics assumes you have a random sample. T-tests, hypotheses tests, regressions, etc. all have a random sample as a requirement.
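As a sketch of why this assumption matters, here is a small simulation (hypothetical data, Python standard library only; the population and the self-selection rule are invented for illustration). A nominal 95% confidence interval earns its 95% only when the sample is random; when respondents self-select, the interval can miss the true value almost every time.

```python
import random
import statistics

random.seed(1)

# Hypothetical population of 100,000 "satisfaction scores" with a
# known true mean, so interval coverage can be checked directly.
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def ci_covers(sample):
    """Does a nominal 95% confidence interval contain the true mean?"""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return abs(m - true_mean) < 1.96 * se

n, trials = 200, 500

# Random sampling: coverage lands near the promised 95%.
random_cov = sum(
    ci_covers(random.sample(population, n)) for _ in range(trials)
) / trials

# Self-selected sampling: suppose happier people are likelier to
# respond, so the bottom third of the population never answers
# (a crude stand-in for opt-in panel self-selection).
responders = sorted(population)[len(population) // 3:]
biased_cov = sum(
    ci_covers(random.sample(responders, n)) for _ in range(trials)
) / trials

print(f"coverage with a random sample:      {random_cov:.0%}")
print(f"coverage with a self-selected pool: {biased_cov:.0%}")
```

The t-tests and regressions mentioned above fail in the same way: the formulas still produce numbers, but the error rates they promise no longer hold.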

Here is the problem: We almost never have a random sample in market research studies. I say “almost” because I suppose it is possible to do, but over 30 years and 3,500 projects I don’t think I have been involved in even one project that can honestly claim a random sample. A random sample is sort of a Holy Grail of market research.

A random sample might be possible if you have a captive audience. You can randomly sample the passengers on a flight, the students in a classroom, or the prisoners in a detention facility. As long as you are not trying to project beyond that flight, that classroom, or that jail, the math behind random sampling will apply.

Here is the bigger problem: Most researchers don’t recognize this, disclose this, or think through how to deal with it. Even worse, many purport that their samples are indeed random, when they are not.

For a bit of research history, once the market research industry really got going the telephone random digit dial (RDD) sample became standard. Telephone researchers could randomly call land line phones. When land line telephone penetration and response rates were both high, this provided excellent data. However, RDD still wasn’t providing a true random, or probability sample. Some households had more than one phone line (and few researchers corrected for this), many people lived in group situations (colleges, medical facilities) where they couldn’t be reached, some did not have a land line, and even at its peak, telephone response rates were only about 70%. Not bad. But, also, not random.

Once the Internet came of age, researchers were presented with new sampling opportunities and challenges. Telephone response rates plummeted (to 5-10%) making telephone research prohibitively expensive and of poor quality. Online, there was no national directory of email addresses or cell phone numbers and there were legal prohibitions against spamming, so researchers had to find new ways to contact people for surveys.

Initially, and this is still a dominant method today, research firms created opt-in panels of respondents. Potential research participants were asked to join a panel, filled out an extensive demographic survey, and were paid small incentives to take part in projects. These panels suffer from three response issues: 1) not everyone is online or online at the same frequency, 2) not everyone who is online wants to be in a panel, and 3) not everyone in the panel will take part in a study. The result is a convenience sample. Good researchers figured out sophisticated ways to handle the sampling challenges that result from panel-based samples, and they work well for most studies. But, in no way are they a random sample.

River sampling is a term often used to describe respondents who are “intercepted” on the Internet and asked to fill out a survey. Potential respondents are invited via online ads and offers placed on a range of websites. If interested, they are typically pre-screened and sent along to the online questionnaire.

Because so much is known about what people are doing online these days, sampling firms have some excellent science behind how they obtain respondents efficiently with river sampling. It can work well, but response rates are low and the nature of the online world is changing fast, so it is hard to get a consistent river sample over time. Nobody being honest would ever use the term “random sampling” when describing river samples.

Panel-based samples and river samples represent how the lion’s share of primary market research is being conducted today. They are fast and inexpensive and when conducted intelligently can approximate the findings of a random sample. They are far from perfect, but I like that the companies providing them don’t promote them as being random samples. They involve some biases and we deal with these biases as best we can methodologically. But, too often we forget that they violate a key assumption that the statistical tests we run require: that the sample is random. For most studies, they are truly “close enough,” but the problem is we usually fail to state the obvious – that we are using statistical tests that are technically not appropriate for the data sets we have gathered.

Which brings us to a newer, shiny object in the research sampling world: ABS samples. ABS (address-based samples) are purer from a methodological standpoint. While ABS samples have been around for quite some time, they are just now being used extensively in market research.

ABS samples are based on US Postal Service lists. Because USPS has a list of all US households, this list is an excellent sampling frame. (The Census Bureau also has an excellent list, but it is not available for researchers to use.) The USPS list is the starting point for ABS samples.

Research firms will take the USPS list and recruit respondents from it, either to be in a panel or to take part in an individual study. This recruitment can be done by mail, phone, or even online. They often append publicly-known information onto the list.

As you might expect, an ABS approach suffers from some of the same issues as other approaches. Cooperation rates are low and incentives (sometimes large) are necessary. Most surveys are conducted online, and not everyone in the USPS list is online or has the same level of online access. There are some groups (undocumented immigrants, homeless) that may not be in the USPS list at all. Some (RVers, college students, frequent travelers) are hard to reach. There is evidence that ABS approaches do not cover rural areas as well as urban areas. Some households use post office boxes and not residential addresses for their mail. Some use more than one address. So, although ABS lists cover about 97% of US households, the 3% that they do not cover are not randomly distributed.

The good news is, if done correctly, the biases that result from an ABS sample are more “correctable” than those from other types of samples because they are measurable.
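As a sketch of what “measurable and correctable” means in practice, here is a minimal post-stratification example. All of the numbers are invented for illustration: suppose rural households are known from the sampling frame to be 20% of the population but only 10% of completed interviews. Because the shortfall is measurable, each stratum can be weighted back to its population share.

```python
# Known population shares (assumed benchmark from the sampling frame)
# versus the mix that actually completed the survey.
population_share = {"urban": 0.80, "rural": 0.20}
sample_share = {"urban": 0.90, "rural": 0.10}

# Post-stratification weight: population share / sample share.
weights = {
    stratum: population_share[stratum] / sample_share[stratum]
    for stratum in population_share
}

# Hypothetical stratum-level results for a yes/no question.
pct_yes = {"urban": 0.40, "rural": 0.70}

unweighted = sum(sample_share[s] * pct_yes[s] for s in pct_yes)
weighted = sum(sample_share[s] * weights[s] * pct_yes[s] for s in pct_yes)

print(f"unweighted estimate: {unweighted:.0%}")  # skewed toward urban
print(f"weighted estimate:   {weighted:.0%}")    # matches the population mix
```

This correction only repairs the imbalance that was measured; it cannot fix whatever differences between responders and non-responders go unmeasured, which is why even a weighted ABS sample is not a random sample.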

A recent Pew study indicates that survey bias and the number of bogus respondents are a bit smaller for ABS samples than for opt-in panel samples.

But ABS samples are not random samples either. I have seen articles that suggest that of all those approached to take part in a study based on an ABS sample, less than 10% end up in the survey data set.

The problem is not necessarily with ABS samples, as most researchers would concur that they are the best option we have and come the closest to a random sample. The problem is that many firms that are providing ABS samples are selling them as “random samples” and that is disingenuous at best. Just because the sampling frame used to recruit a survey panel can claim to be “random” does not imply that the respondents you end up with in a research database constitute a random sample.

Does this matter? In many ways, it likely does not. There are biases and errors in all market research surveys. These biases and errors vary not just by how the study was sampled, but also by the topic of the question, its tone, the length of the survey, etc. Many times, survey errors are not the same throughout an individual survey. Biases in surveys tend to be “known unknowns” – we know they are there, but aren’t sure what they are.

There are many potential sources of error in survey research. I am always reminded of a quote from Humphrey Taylor, the past Chairman of the Harris Poll, who said: “On almost every occasion when we release a new survey, someone in the media will ask, ‘What is the margin of error for this survey?’ There is only one honest and accurate answer to this question — which I sometimes use to the great confusion of my audience — and that is, ‘The possible margin of error is infinite.’” A few years ago, I wrote a post on biases and errors in research and was able to quickly name 15 of them before even doing an Internet search.
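Taylor’s point is visible in the arithmetic. The margin of error reporters ask about is only the sampling error under a simple random sample; none of the other biases discussed here appear anywhere in the formula. A quick sketch using the classic textbook formula (not any particular firm’s method):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random
    sample of size n. Everything here assumes random sampling; the
    formula is silent about every other source of survey error."""
    return z * math.sqrt(p * (1 - p) / n)

# The familiar "plus or minus 3 points" of a 1,000-person poll:
print(f"{margin_of_error(0.5, 1000):.1%}")  # → 3.1%
```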

The reality is, the improvement in bias that is achieved by an ABS sample over a panel-based sample is small and likely inconsequential when considered next to the other sources of error that can creep into a research project. Because of this, and because ABS sampling is expensive, we tend to recommend ABS panels in only two cases: 1) if the study will result in academic publication, as academics are more accepting of data that comes from an ABS approach, and 2) if we are working in a small geography, where panel-based samples are not feasible.

Again, ABS samples are likely the best samples we have at this moment. But firms that provide them often inappropriately portray them as yielding random samples. For most projects, the small improvement in bias they provide is not worth the considerably larger budget and longer study time frame, which is why ABS samples are currently used in a small proportion of research studies. I consider ABS to be “state of the art,” with the emphasis on “art,” as sampling is often less of a science than people think.

Common Misperceptions About Millennials

We’ve been researching Millennials literally since they have been old enough to fill out surveys. Over time, we have found that clients cling to common misperceptions of this generation and that the nature of these misperceptions hasn’t evolved as Millennials have come of age.

Millennials are the most studied generation in history, likely because they are such a large group (there are now more Millennials in the US than Boomers) and because they are poised to soon become a dominant force in the economy, in politics, and in our culture.

There are enduring misconceptions about Millennials. Many stem from our inability to grasp that Millennials are distinctly different from their Gen X predecessors. Perhaps the worst mistake we can make is to assume that Millennials will behave in an “X” fashion rather than view them as a separate group.

Below are some common misconceptions we see that relate to Millennials.

  • Today’s kids and teens are Millennials. This is false, as Millennials have now largely grown up. If you use the Howe/Strauss Millennial birth years, Millennials currently range from about 16 to 38 years old. If you prefer Pew’s breaks, Millennials are currently aged 23 to 38. Either way, Millennials are better thought of as being in a young adult/early career life stage than as teenagers.
  • Millennials are “digital natives” who know more about technology than other generations. This is, at best, partially true. The oldest Millennials, born in 1982, hardly grew up with today’s interactive technology. The iPhone came out in 2007, when the first Millennial was 25 years old. Millennials discovered these technologies along with the rest of us. A recent Pew study on technological ownership showed that Millennials do own more technology than Boomers and Xers, but that the gap isn’t all that large. For years we have counseled clients that parents and teachers are more technologically advanced than commonly thought. Don’t forget that the entrepreneurial creators of this technology are mainly Boomers and Xers, not Millennials.
  • Millennials are all saddled with college debt. We want to tread lightly here, as we would not want to minimize the issue of college debt, which affects many young people and constrains their lives in many ways. But we do want to put college debt in the proper perspective. The average Millennial has significant debt, but the reality is the bulk of the debt they hold is credit card debt and not college debt. College debt is just 16% of the total debt held by Millennials. According to the College Board, 29% of bachelor’s degree graduates have no college debt at all, 24% have under $20,000 in debt, 30% have between $20,000 and $30,000 in debt, and 31% have over $30,000 in college debt. The College Board also reports that a 4-year college graduate can expect to make about $25,000 per year more than a non-graduate. It is natural for people of all generations to have debt in their young adult/early professional life stage and this isn’t unique to Millennials. What is unique is their debt levels are high and multi-faceted. Our view is that college debt per se is not the core issue for Millennials, as most have manageable levels of college debt and college is a financially worthwhile investment for most of them. But college debt levels continue to grow, have a cascading effect, and lead to other types of debt. College debt is a problem, but mostly because it is a catalyst for other problems facing Millennials. So, this statement is true but more nuanced than is commonly perceived.
  • Millennials are fickle and not loyal to brands. This myth has held sway since before the generation was named. I cannot tell you how many market research projects I have conducted that have shown that Millennials are more brand loyal than other generations. They express positive views of products online at a rate many times greater than the level of complaints they express. Of course, they have typical young person behaviors of variety-seeking and exploration, but they live in a crazy world of information, misinformation, and choice. Brand loyalty is a defense mechanism for them.
  • Millennials are fickle and not loyal to employers. On the employer side, surveys show that Millennials seek stability in employment. They want to be continuously challenged and stay on a learning curve. We feel that issues with employer loyalty for Millennials go both ways: employers have become less paternalistic and value young employees less than in past times. That is the primary driver of Millennials switching employers. There are studies that suggest that Millennials are staying with employers longer than Gen X employees did.
  • Millennials are entrepreneurial. In reality, we expect Millennials to be perhaps the least entrepreneurial of all the modern generations. (We wrote an entire blog post on this issue.)
  • Millennials seek constant praise. This is the generation that grew up with participation trophies and gold stars on everything (provided by their Boomer parents). However, praise is not really what Millennials seek. Feedback is. They come from a world of online reviews, constant educational testing, and close supervision. The result is Millennials have a constant need to know where they stand. This is not the same as praise.
  • Millennials were poorly parented. The generation that was poorly parented was Gen X. These were the latch-key kids who were lightly supervised. Millennials have been close with their parents from birth. At college, the “typical” Millennial has contact with their parents more than 10 times per week. Upon graduation, many of them choose to live with, or near, their parents even when there is no financial need to do so. Their family ties are strong.
  • Millennials are all the same. Whenever we look at segments, we run a risk of typecasting people and assuming all segment members are alike.  The “art” of segmentation in a market research study is to balance the variability between segments with the variability within them in a way that informs marketers. Millennials are diverse. They are the most racially diverse generation in American history, they span a wide age range, they cover a range of economic backgrounds, and are represented across the political spectrum. The result is while there is value in understanding Millennials as a segment, there is no typical Millennial.
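The between-versus-within balance described in the last bullet can be made concrete. One common way to quantify it, borrowed from analysis of variance, is the ratio of between-segment to within-segment variance: a useful segmentation has segments that differ from each other more than their members differ internally. A small sketch (the scores and segment assignments are invented):

```python
import statistics

def between_within_ratio(segments):
    """Ratio of between-segment to within-segment variance for a
    numeric measure (the quantity behind ANOVA's F statistic).
    Higher means segments differ more than their members do."""
    grand_mean = statistics.mean(x for seg in segments for x in seg)
    n = sum(len(seg) for seg in segments)
    k = len(segments)
    between = sum(
        len(seg) * (statistics.mean(seg) - grand_mean) ** 2 for seg in segments
    ) / (k - 1)
    within = sum(
        sum((x - statistics.mean(seg)) ** 2 for x in seg) for seg in segments
    ) / (n - k)
    return between / within

# Hypothetical "brand affinity" scores under two segmentations:
crisp = [[1, 2, 2], [7, 8, 8], [14, 15, 15]]   # distinct segments
muddy = [[1, 8, 15], [2, 7, 14], [2, 8, 15]]   # segments overlap heavily
print(between_within_ratio(crisp) > between_within_ratio(muddy))  # → True
```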

When composing this post, I typed “Millennials are …” into a Google search box. The first thing that came up to complete my query was “Millennials are lazy entitled narcissists.” When I typed “Boomers are …” the first result was “Boomers are thriving.”  When I typed “Gen X is …” the first result was “Gen X is tired.” This alone should convince you that there are serious misconceptions of all generations.

Millennials are the most educated, most connected generation ever. I believe that history will show that Millennials effectively corrected for the excesses of Boomers and set the country and the world on a better course.

Should you use DIY market research tools?

A market research innovation has occurred over the past decade that is talked about in hushed tones among research suppliers: the rise of DIY market research tools. Researchers and clients need to become more educated on what these DIY tools are and when it is appropriate to use them.

DIY tools come in a number of flavors. At their core, they allow anybody to log into a system, author a survey, select sample parameters, and hit “go.” Many also provide the ability to tabulate data and graph results. These tools reduce the complexity of fielding studies. For the most part, these tools are created by outside panel and research technology companies but some end clients have invested in their own tools.

Many research suppliers view DIY tools as an existential threat. After all, if clients can do all this themselves what do they need us for? Will our fielding and programming departments become obsolete? Will we have a large portion of what we do automated?

Maybe. But more likely our fielding and programming departments will become smaller and have to adapt to a changing technological world.

There is a clear analogy here to DIY household projects. The tools and materials needed for most home improvement projects are available at big box retailers. Some homeowners are well-equipped to take on projects themselves, others are not, and the key to a successful project is often understanding when it is important to call for professional help. The same is true for market research projects.

Where the analogy fails is when you take on a project you aren’t equipped to handle. If it is a home project, you will probably discover along the way that you got in over your head. In market research, however, you can complete an entire project that has serious errors in it and never notice; the result is sub-optimal decision making that goes undetected.

In days gone by, the decision of whether to use a research supplier or not was straightforward. If the project was meaningful or complex, clients used suppliers. For many projects, the choice used to be between using a supplier or not doing the project at all. The rise of DIY tools has changed that.

Here are some instances where DIY research makes sense:

  • If the project is relatively simple. By simple, we mean from both a questionnaire design and a sampling perspective.
  • If the risk of making a suboptimal decision based on the information is low. Perhaps the best aspect of DIY tools is they permit clients to research issues that otherwise may have gone unresearched because of time and budget considerations.
  • When getting it done quickly is important. For many projects, there is something to be said for getting it 90% right and getting it done today rather than taking months to get it perfect.
  • If you have someone with supplier-side experience on staff. People with supplier-side experience are likely to be a bit more attuned to the nuances of study design and may notice mistakes others would miss.
  • If you have thought through the potential limitations of the DIY approach and have communicated this to your internal client.
  • When you are using the DIY project to pre-test or pilot a study. This is an excellent use of DIY tools: to be sure your questioning and scales are going to work before committing significant resources to a project. A DIY project can make the subsequent project more efficient.

Here are cases when we would caution against using DIY tools:

  • If a consequential decision will be made based on the results. Having the backing of a third-party supplier is important in this case and the investment is likely worth it.
  • When research results need to motivate people internally. Internal decision makers will typically listen more to research results if the study was conducted by a third-party.
  • When a broader perspective is needed. As a client, you know your firm and industry better than most suppliers will. But there are many times when having a broader perspective on a project provides substantial value to it.
  • If the sampling is complicated. If your target audience is obscure and hard to define in a few words, suppliers can be very helpful in getting your sampling right. In a previous post we mention that it is the sampling aspects of projects that most clients don’t think through enough. We have found that the most serious mistakes made in market research deal with sampling, and often these mistakes are hard to notice.
  • If you are conducting a business-to-business study. DIY sampling resources aren’t yet of the same quality for b-to-b research as they are for consumer studies.

DIY studies clearly have their place. They will augment current studies in some cases and replace them in others. I don’t see them as a threat to the highly-customized types of studies Crux Research tends to conduct. Market research spending will continue to grow slowly, but less will be spent on data collection and more on higher value-added aspects of projects.

In the 30 years I have worked in research, the cost of data collection has dropped considerably – I’d say it is about one-third what it used to be. But, during this time the price of research projects has increased. The implication is that clients have come to value the consultative aspects of studies more and have become more reliant on their suppliers to do things that previously clients did for themselves.

That presents a bit of a conundrum: clients are outsourcing more to suppliers at a time when tools are being developed that allow them to do many projects without a supplier. For many clients, money and time would be saved by hiring someone on staff who knows how to use these tools and recognizes when a third-party supplier is necessary.

Should we get rid of statistical significance?

There has been recent debate among academics and statisticians surrounding the concept of statistical significance. Some high-profile medical studies have just narrowly missed meeting the traditional statistical significance cutoff of 0.05. This has resulted in potentially life changing drugs not being approved by regulators or pursued for further development by pharma companies. These cases have led to a much-needed review and re-education as to what statistical significance means and how it should be applied.

In a 2014 blog post (Is This Study Significant?) we discussed common misunderstandings market researchers have regarding statistical significance. The recent debate suggests this misunderstanding isn’t limited to market researchers – it appears that academics and regulators have the same difficulty.

Statistical significance is a simple concept. However, it seems that the human brain just isn’t wired well to understand probability and that lies at the root of the problem.

A measure is typically classified as statistically significant if its p-value is 0.05 or less. This means that if there were truly no underlying difference, chance or random fluctuation would produce a result this extreme less than 5% of the time. Researchers often shorthand this by saying there is a 19 out of 20 chance or greater that two measures are truly different, although that is not quite what a p-value says.
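As a concrete sketch of the mechanics, here is how the conventional two-sided test of a difference between two proportions produces a p-value. The function name and the figures in the example are illustrative, not drawn from any particular study:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal distribution: erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# Illustrative: 30% of 400 respondents vs. 22.5% of 400 respondents
p = two_proportion_p_value(120, 400, 90, 400)
# p is about 0.016, under the conventional 0.05 cutoff
```

Note that nothing in the arithmetic privileges 0.05; the cutoff enters only when someone compares the resulting p-value against it.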

There are real problems with this approach. Foremost, it is unclear how this 5% probability cutoff was chosen. Somewhere along the line it became a standard among academics. This standard could have just as easily been 4% or 6% or some other number. This cutoff was chosen subjectively.

What are the chances that this 5% cutoff is optimal for all studies, regardless of the situation?

Regulators should look beyond statistical significance when they are reviewing a new medication. Let’s say a study was only significant at 6%, not quite meeting the 5% standard. That shouldn’t automatically disqualify a promising medication from consideration. Instead, regulators should look at the situation more holistically. What will the drug do? What are its side effects? How much pain does it alleviate? What is the risk of making mistakes in approval: in approving a drug that doesn’t work or in failing to approve a drug that does work? We could argue that the level of significance required in the study should depend on the answers to these questions and shouldn’t be the same in all cases.

The same is true in market research. Suppose you are researching a new product and the study is only significant at 10% and not the standard 5%. Whether you should greenlight the product for development depends on considerations beyond statistical significance. What is the market potential of the product? What is the cost of its development? What is the risk of failing to greenlight a winning idea or greenlighting a bad idea? Too often, product managers rely on a research project to give them answers when the study is just one of many inputs into these decisions.

There is another reason to rethink the concept of statistical significance in market research projects. Statistical significance assumes a random or a probability sample. We can’t stress this enough – there hasn’t been a market research study conducted in at least 20 years that can credibly claim to have used a true probability sample of respondents. Some (most notably ABS samples) make a valiant attempt to do so but they still violate the very basis for statistical significance.

Given that, why do research suppliers (Crux Research included) continue to do statistical testing on projects? Well, one reason is clients have come to expect it. A more important reason is that statistical significance holds some meaning. On almost every study we need to draw a line and say that two data points are “different enough” to point out to clients and to draw conclusions from. Statistical significance is a useful tool for this. It just should no longer be treated as a tool that lets us say precise things like “these two data points have a 95% chance of actually being different”.

We’d rather use a probability approach and report to clients the chance that two data points would be different if we had been lucky enough to use a random sample. That is a much more useful way to look at data, but it probably won’t be used much until colleges start teaching it and a new generation of researchers emerges.

The current debate over the usefulness of statistical significance is a healthy one to have. Hopefully, it will cause researchers of all types to think deeper about how precise a study needs to be and we’ll move away from the current one-size-fits-all thinking that has been pervasive for decades.

Did Apple just kill telephone market research?

A recent issue of The Economist contained an article that describes a potential threat to the accuracy of opinion polling. The latest iPhones have a software feature that doesn’t just block robocalls but sends all calls from unknown callers automatically to voice mail. This feature combats unwanted calls on mobile phones.

Matching sampling frames to populations of interest is increasingly difficult to accomplish in survey research, particularly telephone studies. I will always remember my first day on the job in 1989 when my supervisor was teaching me how to bid projects. The spreadsheet we used assumed our telephone polls would have a 60% cooperation rate. So, at that time about 6 in 10 phone calls we made resulted in a willing respondent. Currently, telephone studies rarely achieve a cooperation rate above 5%. That is 1 in 20 calls. If you are lucky.

The Do Not Call Registry took effect in 2003. At this time, most survey research was still being conducted by telephone (online research was growing but still represented only about 20% of the market research industry’s revenues). Researchers were initially relieved that market research and polls were exempt from the law but in the end that didn’t matter. People stopped cooperating with telephone studies because they thought they had opted out of research calls when they signed up for the Registry. Response rates plummeted.

The rise of mobile phones caused even more headaches for telephone researchers. There was initially no great way to generate random cell phone numbers the way it could be done for land lines, and publicly available directories of cell phone numbers did not exist. For quite some time, telephone studies were underrepresenting mobile phone users and had no great solution for how to interview respondents who did not even have a land line. Eventually, the industry figured this out and methods for including mobile phones became standard.

This new development of automatically routing mobile calls to voice mail could well signify the end of telephone-based research. If consumers like this feature on iPhones it won’t be long until Android-based phones do the same thing. It will preclude pollsters from effectively reaching mobile-only households. Believe it or not, about 45% of US households still have a land line, but the 55% who do not skew young, urban, and liberal.

Pollsters will figure this out and will oversample mobile-only households and weight them up in samples. But that won’t really fix the problem. Samples will miss those that have the latest phones and will eventually miss everybody once all current phones are replaced. Oversampling and weighting can help balance under-represented groups, but can’t fix a problem when a group is not represented at all. Weighting can actually magnify biases in samples.

Implications? Here are a few:

  1. More polls and market research projects will be conducted online. This is a good thing as there is evidence that in the 2016 election the online polls were more accurate than the telephone polls. It is hard to believe, but we are at a stage where telephone polls are almost always slower, more expensive, and less accurate than their online counterparts.
  2. Researchers will use more mixed samples, using both telephone and online. In our view this tends to be needlessly complicated and introduces mode effects into these samples. We tend to only recommend mixed-mode data collection in business-to-business projects, where we use the phone to screen to a qualified respondent and then send the questionnaire electronically.
  3. Costs of telephone polls will go up. They are already almost criminally expensive and this will get even worse. For those not in the know, the cost per interview for a telephone poll is often 20 to 30 times the cost of an online interview.
  4. Address Based Samples (ABS) will gain in popularity. As telephone response rates decline, systematic biases in telephone samples increase. ABS, when properly operationalized, is a good alternative (although ABS has its limitations as well). ABS still isn’t really probability sampling, but it is the closest thing we have.
  5. The increased cost of telephone polls will spur even more investment in online panels. The quality of online research will be better off because of it. If there is a silver lining for researchers, this is probably it.

Technology has always tended to move faster than the market research industry has been able to adapt to it, probably because researchers have an academic mindset (thorough, but slow). Research methodologists always seem to eventually come up with a solution, but not always quickly. For now, we’d recommend against trusting any opinion poll that is based on a telephone sample, unless the researchers behind it have specifically made a case for how they are going to address this new issue of software blocking their calls to mobile phones. The good news is push polls and robo polls will soon become almost impossible to conduct.

Among college students, Bernie Sanders is the overwhelming choice for the Democratic nomination

Crux Research poll of college students shows Sanders at 23%, Biden at 16%, and all other candidates under 10%

ROCHESTER, NY – October 10, 2019 – Polling results released today by Crux Research show that if it were up to college students, Bernie Sanders would win the Democratic nomination for the US Presidency. Sanders is the favored candidate for the nomination among 23% of college students compared to 16% for Joe Biden. Elizabeth Warren is favored by 8% of college students, followed by 7% support for Andrew Yang.

  • Bernie Sanders: 23%
  • Joe Biden: 16%
  • Elizabeth Warren: 8%
  • Andrew Yang: 7%
  • Kamala Harris: 6%
  • Beto O’Rourke: 5%
  • Pete Buttigieg: 4%
  • Tom Steyer: 3%
  • Cory Booker: 3%
  • Michael Bennet: 2%
  • Tulsi Gabbard: 2%
  • Amy Klobuchar: 2%
  • Julian Castro: 1%
  • None of these: 5%
  • Unsure: 10%
  • I won’t vote: 4%

The poll also presented five head-to-head match-ups. Each match-up suggests that the Democratic candidate currently has a strong edge over President Trump, with Sanders having the largest edge.

  • Sanders versus Trump: 61% Sanders; 17% Trump; 12% Someone Else; 7% Not Sure; 3% would not vote
  • Warren versus Trump: 53% Warren; 18% Trump; 15% Someone Else; 9% Not Sure; 5% would not vote
  • Biden versus Trump: 51% Biden; 18% Trump; 19% Someone Else; 8% Not Sure; 4% would not vote
  • Harris versus Trump: 48% Harris; 18% Trump; 20% Someone Else; 10% Not Sure; 4% would not vote
  • Buttigieg versus Trump: 44% Buttigieg; 18% Trump; 22% Someone Else; 11% Not Sure; 5% would not vote

The 2020 election could very well be determined on the voter turnout among young people, which has traditionally been much lower than among older age groups.

###

Methodology
This poll was conducted online between October 1 and October 8, 2019. The sample size was 555 US college students (aged 18 to 29). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the US college student population.

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on the probability samples that margin-of-error calculations require. If this study had used probability sampling, the margin of error would be +/-4%.
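For reference, the +/-4% figure matches the conventional worst-case margin of error at 95% confidence for a sample of 555, computed under the probability-sampling assumption that, as noted above, does not actually hold here:

```python
import math

def max_margin_of_error(n, z=1.96):
    """Worst-case 95% margin of error for a proportion (assumes p = 0.5)."""
    return z * math.sqrt(0.25 / n)

moe = max_margin_of_error(555)
# moe is about 0.0416, i.e. roughly +/-4 percentage points
```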

About Crux Research Inc.
Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.
Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit http://www.cruxresearch.com.

