Among college students, Bernie Sanders is the overwhelming choice for the Democratic nomination

Crux Research poll of college students shows Sanders at 23%, Biden at 16%, and all other candidates under 10%

ROCHESTER, NY – October 10, 2019 – Polling results released today by Crux Research show that if it were up to college students, Bernie Sanders would win the Democratic nomination and the US Presidency. Sanders is the favored candidate for the nomination among 23% of college students, compared to 16% for Joe Biden. Elizabeth Warren is favored by 8% of college students, followed by 7% support for Andrew Yang.

  • Bernie Sanders: 23%
  • Joe Biden: 16%
  • Elizabeth Warren: 8%
  • Andrew Yang: 7%
  • Kamala Harris: 6%
  • Beto O’Rourke: 5%
  • Pete Buttigieg: 4%
  • Tom Steyer: 3%
  • Cory Booker: 3%
  • Michael Bennet: 2%
  • Tulsi Gabbard: 2%
  • Amy Klobuchar: 2%
  • Julian Castro: 1%
  • None of these: 5%
  • Unsure: 10%
  • I won’t vote: 4%

The poll also presented five head-to-head match-ups. Each match-up suggests that the Democratic candidate currently has a strong edge over President Trump, with Sanders having the largest edge.

  • Sanders versus Trump: 61% Sanders; 17% Trump; 12% Someone Else; 7% Not Sure; 3% would not vote
  • Warren versus Trump: 53% Warren; 18% Trump; 15% Someone Else; 9% Not Sure; 5% would not vote
  • Biden versus Trump: 51% Biden; 18% Trump; 19% Someone Else; 8% Not Sure; 4% would not vote
  • Harris versus Trump: 48% Harris; 18% Trump; 20% Someone Else; 10% Not Sure; 4% would not vote
  • Buttigieg versus Trump: 44% Buttigieg; 18% Trump; 22% Someone Else; 11% Not Sure; 5% would not vote

The 2020 election could very well be determined by voter turnout among young people, which has traditionally been much lower than turnout among older age groups.

###

Methodology
This poll was conducted online between October 1 and October 8, 2019. The sample size was 555 US college students (aged 18 to 29). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the US college student population.

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on a probability sample, a requirement for margin-of-error calculations. If this study had used probability sampling, the margin of error would be +/-4%.
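
For context, the +/-4% figure reflects the standard margin-of-error formula for a simple random sample of this size. A quick sketch of the calculation, assuming the worst-case proportion of 50% and a 95% confidence level (illustrative only; as noted above, this poll did not use a probability sample):

    # Margin of error for n = 555 under simple random sampling (illustrative only).
    import math

    n = 555          # sample size
    p = 0.5          # worst-case proportion (maximizes the margin of error)
    z = 1.96         # z-score for a 95% confidence level

    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"+/-{moe * 100:.1f} percentage points")   # roughly +/-4.2 points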

About Crux Research Inc.
Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.
Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior-level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit http://www.cruxresearch.com.

Truth Initiative wins Ogilvy for Opioid Campaign

Truth Initiative has won two 2019 Ogilvy awards for its campaign against opioid misuse.

The ARF David Ogilvy Awards are the only awards that honor the research and analytics insights behind the most successful advertising campaigns. Crux Research, along with our research partners at CommSight, provided the research services for this campaign.

A case study of the campaign can be found here.

You can view spots from the campaign here and here.

We are very proud to have provided Truth Initiative with research support for this important campaign.

How to be an intelligent consumer of political polls

As the days get shorter and the air gets cooler, we are on the edge of a cool, colorful season. We are not talking about autumn — instead, “polling season” is upon us! As the US Presidential race heats up, one thing we can count on is being inundated with polls and pundits spinning polling results.

Most market researchers are interested in polls. Political polling pre-dates the modern market research industry and most market research techniques used today have antecedents from the polling world. And, as we have stated in a previous post, polls can be as important as the election itself.

The polls themselves influence voting behavior, which should place polling organizations in an ethical quandary. Our view is that polls, when properly done, are an important facet of modern democracy. Polls can inform our leaders as to what the electorate cares about and keep them accountable. This season, polls are determining which candidates get on the debate stage and are driving which issues candidates are discussing most prominently.

The sheer number of polls that we are about to see will be overwhelming. Some will be well-conducted, some will be shams, and many will be in between. To help, we thought we’d write this post on how to be an intelligent consumer of polls and what to look out for when reading them or hearing about them in the media.

  • First, and this is harder than it sounds, you have to put your own biases aside. Maybe you are a staunch conservative or liberal or maybe you are in the middle. Whatever your leaning, your political views are likely going to get in the way of you becoming a good reader of the polls. It is hard to not have a confirmation bias when viewing polls, where you tend to accept a polling result that confirms what you believe or hope will happen and question a result that doesn’t fit with your map of the world. I have found the best way to do this is to first try to view the poll from the other side. Say you are a conservative. Start by thinking about how you would view the poll if you leaned left instead.
  • Next, always, and I mean ALWAYS, discover who paid for the poll. If it is an entity that has a vested interest in the results, such as a campaign, a PAC, an industry group, or a lobbyist, go no further. Don’t even look at the poll. In fact, if the sponsor of the poll isn’t clearly identified, move on and spend your time elsewhere. Good polls always disclose who paid for them.
  • Don’t just look at who released the poll; review which organization executed it. For the most part, polls executed by major polling organizations (Gallup, Harris, ORC, Pew, etc.) will be worth reviewing, as will polls done by colleges with polling centers (Marist, Quinnipiac, Siena, etc.). But there are some excellent polling firms out there you likely have never heard of. When in doubt, remember that FiveThirtyEight gives pollsters grades based on their past performance. Despite what you may hear, polls done by major media organizations are sound. They have polling editors who understand all the nuances and have standards for how the polls are conducted. These organizations tend to partner with major polling organizations that likewise have the methodological muscle that is necessary.
  • Never, and I mean NEVER, trust a poll that comes from a campaign itself. At their best, campaigns will cherry pick results from well executed polls to make their candidate look better. At their worst, they will implement a biased poll intentionally. Why? Because much of the media, even established mainstream media, will cover these polls. (As an aside, if you are a researcher don’t trust the campaigns either. From my experience, you have about a 1 in 3 chance of being paid by a campaign for conducting their poll.)
  • Ignore any talk about the margin of error. The margin of error on a poll has become a meaningless statistic that is almost always misinterpreted by the media. A margin of error really only makes sense when a random or probability sample is being used. Without going into detail, there isn’t a single polling methodology in use today that can credibly claim to be using a probability sample. Regardless, being within the margin of error does not mean a race is too close to call anyway. It really just means it is too close to call with 95% certainty.
  • When reading stories on polls in the media, read beyond the headline. Remember, headlines are not written by reporters or pollsters. They are written by editors who in many ways have had their journalistic integrity questioned and have become “click hunters.” Their job is to get you to click on the story, not necessarily to accurately summarize the poll. Headlines are bound to be more sensational than the polling results merit.

All is not lost though. There are plenty of good polls out there worth looking at. Here is the routine I use when I have a few minutes and want to discover what the polls are saying.

  • First, I start at the Polling Report. This is an independent site that compiles credible polls. It has a long history; I remember reading it in the 1990s when it was a monthly mailed newsletter. I start here because it is nothing more than raw poll results with no spin whatsoever. Their Twitter feed shows the most recently submitted polls.
  • I sometimes look at Real Clear Politics as well. They also curate polls, but they provide analysis too. I tend to just stay on their poll page and ignore the analysis.
  • FiveThirtyEight doesn’t provide polling results in great detail, but usually draws longitudinal graphs on the probability of each candidate winning the nomination and the election. Their predictions have valid science behind them and the site is non-partisan. This is usually the first site I look at to discover how others are viewing the polls.
  • For fun, I take a peek at BetFair, a UK online betting site that allows wagers on elections. It takes a little training to understand what the current prices mean, but in essence this site tells you which candidates people are putting their actual money on. Prediction markets fascinate me; using this site to predict who might win is fun and geeky.
  • I will often check out Pew’s politics site. Pew tends to poll more on issues than “horse race” matchups on who is winning. Pew is perhaps the most highly respected source within the research field.
  • Finally, I go to the media. I tend to start with major media sites that seem to be somewhat neutral (the BBC, NPR, USA TODAY). After reviewing these sites, I then look at the Fox News and MSNBC websites because it is interesting to see how their biases cause them to say very different things about the same polls. I stay away from the cable channels (CNN, Fox, MSNBC) just because I can’t stand hearing boomers argue back and forth for hours on end.

This is, admittedly, way harder than it used to be. We used to just be able to let Peter Jennings or Walter Cronkite tell us what the polls said. Now, there is so much out there that to truly get an objective handle on what is going on takes serious work. I truly think that if you can become an intelligent, unbiased consumer of polls it will make you a better market researcher. Reading polls objectively takes a skill that applies well to data analysis and insight generation, which is what market research is all about.

Why Your Child Hates Sports

It surprises many to learn that on most measures of well-being today’s youth are the healthiest generation in history. Violent crime against and by young people is historically low. Teen pregnancy and birth rates continue to decline. Most measures of drug and alcohol use among teens and young adults show significant declines from a generation ago. Tobacco use is at a low point. In short, most problems that are a result of choices young people make have shown marked improvement since information on Millennials entered the data sets.

But an important measure of well-being has tracked significantly worse during the Millennial and post-Millennial era:  childhood obesity. According to the CDC, the prevalence of obesity has roughly tripled in the past 40 years. This is a frightful statistic.

This is not new news as many books, documentaries, and scholars have presented possible reasons for the spike in youth obesity. Beyond genetics, there are two likely determinants of obesity: 1) nutrition and 2) physical activity. Discussions of obesity’s “nutritional” causes are fraught with controversy. The food industry involves a lot of interests and money, nutritional science is rarely definitive, and seemingly everyone has their own opinions on what is healthy or unhealthy to eat. The nutritional roots of obesity (while likely very significant) are far from settled.

However, the “physical activity” side of the discussion tends to not be so heated. Nearly everyone agrees that today’s youth aren’t as physically active as they should be. There are likely many causes for this as well, but I believe the way youth sports operate merit some discussion.

When I was young, sports were every bit as important to my life as they became to my Millennial children. The difference is my sports experiences as a child were mostly kid-directed. Almost daily, we gathered in the largest yard in the neighborhood and played whichever sport was in season. It took up an hour or two on most days and sometimes the entire weekend. The biggest difference to today’s youth sports environment is there wasn’t an adult in sight. There were arguments, injuries, and conflicts, all of which got resolved without adult mediation.

Contrast this to today’s youth sports environment. Today’s kids specialize in one sport year-round and from a very young age join travel and elite leagues organized by adults. There is a general dearth of unstructured play time. Correlation and causation are never the same thing but the rise in youth obesity has correlated closely with the rise in youth sports leagues organized by adults. Once adults started making the decisions about sports, our kids got fatter.

As a matter of personal perspective, I have two adult children and I can count six sports (baseball, soccer, ice hockey, track, skiing, cross country) that they played in an adult-organized fashion while growing up. We encountered situations where I had a child who was one of the least talented kids on a team, others where I had a child that was the star of the team, and many others where my child was somewhere in the middle. Between them, my kids were on teams that dominated their leagues and went undefeated, they were on some that lost almost every game, and they were on some teams that both won and lost. I coached for a while and my wife was “team mom” for most teams they were on.

Along the way I noticed that kids seemed to have the most fun when they won just a few more games than they lost. The kids didn’t seem to think it was as fun to dominate the competition and it was even less fun to be constantly on the losing end. 

I remember once when in the car after a hockey game I asked my son what he wanted to happen when he had the puck. He said, “I want to score.” I asked him “suppose you scored every single time you touched the puck. Would that be any fun?” At 10 years old, he didn’t have to think long to say that wouldn’t be very fun at all. But, that is what most hockey dads are hoping will happen.

There seems to be a natural force kids apply to sports equality when adults get out of the way. Left to their own devices, the first thing kids will do when choosing up teams is try to get the teams evenly matched. Then, if the game starts to get too one-sided, the next thing they will do is swap some players around to balance it out. This seems to be ingrained – nobody teaches kids to do this, but left on their own this is what they tend to do. They will also change the rules of the game to make it more fun.

I’ve encountered many parents who are delusional when it comes to the athletic capabilities of their children. I don’t think I have ever met a dad (including myself) who didn’t think their child was better than he/she really was. We want our kids to succeed of course. But we have to have the right definition of success. Are they having fun? Are they improving? Learning how to work as a team and treat competition with respect? Making friendships? That is what is going to matter down the line.

Far too many parents look to the future too much and don’t let their kids enjoy the moment. They will spend thousands and sacrifice nearly every weekend to send their kid to a camp that might get them noticed by college recruiters. The reality is, their child probably won’t get an athletic scholarship, and if he/she does it probably won’t come close to offsetting the money spent getting him/her to all of the camps and travel league games. Parents also don’t realize that most kids don’t find participating in college sports to be as fun as participating in them was in high school.

When I coached Little League baseball, I used to tell the kids to play catch with their mom or dad every day. I remember a mom once asking me why I was pushing them to do this so much. I told her that playing catch with a baseball in the backyard with your kid is one of the great moments in parenthood. It forces you to talk and listen to your kid. I told her that her son would remember that time with his mom or dad far more than playing on our team.

There are debates over rewards for participation in sports. In my day, you had to win to get the trophy and sometimes you didn’t even get that. Now, kids get trophies for showing up. That is not necessarily a bad thing. As Woody Allen says, “80% of success is showing up.” So, why not reward it?

My youngest son was fortunate to run cross country for a coach that most would classify as a local legend. He has coached the team for 30+ years, has had many state championship teams and individuals, and is widely respected. My favorite memory of him was something I observed when he didn’t know I was looking, and it had nothing to do with championships and developing elite athletes. For the first race of a new season, he took the varsity teams to an out-of-state invitational. The girls’ team was quite good, and for his 7th (and slowest) runner he brought a freshman girl who was inexperienced and running her very first race. She didn’t do very well and came in about 120th place in the race. I saw the coach come up to her right afterward with a beaming smile on his face. The first thing he said to her was “was that FUN or what?” as he gave her a hug. She smiled, hugged him back, and ended up staying on the team for all four years of high school. Last weekend (8 years later) I saw her jogging in a local park. She didn’t excel at running in high school, but the coach sparked a lifelong interest in fitness in her.

To me, that signified not just what sports should all be about, but what adults’ role in sports should be all about. We have a real problem with childhood obesity. The cure is to make sports and physical activity more fun, and many times that means getting the adults out of the way.

Demand Curves Always Slope Downward

Last month marked 30 years since I received my MBA. This anniversary has made me think critically about what I learned in business school and to judge what proved helpful and what did not. I will be the first to admit I have a good, yet sometimes selective memory for these things.

I had many outstanding business professors. They were far superior to the teachers I encountered as an undergraduate. There was one professor in particular I will always remember. I took an economics course from him and later took a business law course from him. I had little interest in business law and took the course solely because he was such a great lecturer.

He devoted the final lecture of his business law course to a topic that had nothing to do with law. He stated that there was a simple tenet we should always keep in mind. If we learned nothing else from our time in the program it should be this: demand curves always slope downward.

He then predicted that during our careers we would encounter many situations where people would try to convince us otherwise. We would see things in the business and popular press that ignore this basic concept. But, unlike the others, we would never fall for it, because it is the one mistake he was on a crusade to ensure that none of his students would ever get wrong.

Demand curves always slope downward.

What does this mean? It is a simple concept most kindergartners can explain: If the cost of something goes up, fewer people will want it and less of it will be sold. Simple, huh?

My professor was prescient. In the past 30 years, I have encountered dozens, perhaps hundreds of cases where somebody was convinced that a cost change won’t have an effect on volume.

I’ve seen it a lot in business planning. I worked for a consumer goods firm for a short time. One year, a product manager decided to take a price increase. The business plan she created showed that in the current year we had sold 1 million units at $3 for revenue of $3 million. Her planning assumed a 10 percent price increase would increase revenue by $300K. Not! If the price goes to $3.30, the only guarantee I can think of is that we would sell fewer than 1 million units. Nevertheless, her plan got through.
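
As a minimal sketch of why that plan was flawed (the price elasticity value below is an assumption for illustration, not a figure from the actual plan), here is what happens to revenue once volume is allowed to respond to price:

    # Illustrative only: the elasticity figure is an assumption, not from the plan.
    def projected_revenue(units, price, price_increase, elasticity):
        """Project revenue after a price change, letting volume respond to price.
        elasticity: % change in units per 1% change in price (negative when
        demand curves slope downward)."""
        new_price = price * (1 + price_increase)
        new_units = units * (1 + elasticity * price_increase)
        return new_price * new_units

    naive_plan = 1_000_000 * 3.30                                    # assumes flat volume: $3.3M
    adjusted = projected_revenue(1_000_000, 3.00, 0.10, elasticity=-1.2)

    print(f"Naive plan:           ${naive_plan:,.0f}")               # $3,300,000
    print(f"With demand response: ${adjusted:,.0f}")                 # about $2,904,000

Under that assumed elasticity, the “extra” $300K never materializes. The exact number depends on the elasticity, but the direction of the error is the point.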

I’ve seen public policy makers forget this simple concept as well. They will propose tax changes and then assume that the change won’t affect consumer behavior.

The error is most commonly made when people only consider price and not “cost” in a broader sense. Cost involves price but also includes other things, such as the value of your time, the inconvenience of traveling to make a purchase, etc. As Adam Smith said, the real price of something is the “toil and trouble of acquiring it.”

A good example of this happened earlier this year in the county I live in. Our county legislature decided against raising the legal smoking age from 18 to 21. A quote from my representative indicated that because surrounding counties sell tobacco to those 18 and older, raising the age to 21 in our county would not change smoking behavior because young smokers would simply drive elsewhere.

Wrong! Demand curves slope downward. Raising the age to 21 in our county most certainly would decrease tobacco use, because we would have raised the cost of obtaining cigarettes by making it more inconvenient. 18- to 21-year-olds would now have to drive further to get tobacco. They might have to bug someone of legal age to buy them for them. They might need to risk buying while underage. This all increases the cost to them, and they will buy less. It is okay to be against raising the legal age, but it is not okay to use flawed logic to get there.

It is fine to argue that raising the age won’t have a large effect, but arguing that it won’t have any effect at all ignores a basic economic tenet. Demand curves slope downward. Thinking otherwise would sort of be like trying to convince a physicist that gravity only exists in some cases.

To illustrate this point, look to CVS. In 2014, CVS decided to stop selling tobacco products. This increased the cost of buying cigarettes because it became a bit less convenient to find them. Although many felt that this wouldn’t do anything to overall smoking behavior (thus ignoring that demand curves slope downward), a recent study by CVS concluded that 95 million fewer packs of cigarettes were bought by smokers in the 8-month period studied. Annualizing that figure over the 5 years since CVS stopped selling cigarettes, the implication is that about 3 billion fewer cigarettes have been smoked per year as a result of CVS’s decision.
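
A quick back-of-the-envelope check of that annualized figure, assuming the standard 20 cigarettes per pack:

    # Back-of-the-envelope check of the CVS figure (20 cigarettes per pack assumed).
    packs_avoided_8_months = 95_000_000
    packs_per_year = packs_avoided_8_months * 12 / 8     # annualized: 142.5 million packs
    cigarettes_per_year = packs_per_year * 20            # about 2.85 billion cigarettes

    print(f"{cigarettes_per_year / 1e9:.2f} billion fewer cigarettes per year")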

Without wading too deeply into one of the hottest of hot-button political issues, I do hear things from the pro-gun lobby that clearly show they don’t recognize that demand curves slope downward. I think it is legitimate to be against gun restrictions from a philosophical viewpoint (e.g., gun ownership is a citizen right, guaranteed in the Constitution, etc.). But the pro-gun lobby often claims that restricting which weapons can be sold, taxing them, making it more onerous to register them, etc. will have no effect on “bad guys” getting guns.

Of course it will. Raise the cost of something and people will do it less. I have no idea how to resolve the gun debate in the US, but I am 100% confident that if we make guns harder to obtain fewer guns will be obtained. By good guys and bad guys. Whether that is a good or bad thing depends on which side of the debate you are on.

This concept can influence behavior in unexpected ways. There are studies showing that how frequently employees interact varies inversely with the distance between their desks. I noticed this first-hand. At one point, my office was moved about 30 feet down a hallway, further away from where the bulk of the people on my team sat. I noticed right away that I conversed with them about half as often as a result. Why? Because the cost for us to interact increased and our behavior changed. It became a bit more inconvenient to interact with them.

“Demand curves slope downward” has a converse effect: lower the cost of something and people will buy more. The rise of online retail demonstrates this. Shopping online is so convenient and easy that consumers have moved to it quickly – because their cost of shopping has come down. However, in my experience you don’t see people making mistakes on this side of the argument. People seem to know that lowering cost increases volume. It is more that they often fail to see that increasing cost lowers volume.

So, 30 years later, I am going to send a link to this post to my former professor. He will be pleased that at least one of his students remembered his advice.

NOTE: Economics geeks will note that there are some cases where demand curves don’t slope downward. There are “Giffen goods” – items consumers buy more of when the price goes up. In reality, it is rare to see a discussion of this outside of an economics class.

Jeff Bezos is right about market research

In an annual shareholder letter, Amazon’s Jeff Bezos recently stated that market research isn’t helpful. That created some backlash among researchers, who reacted defensively to the comment.

For context, below is the text of Bezos’ comment:

No customer was asking for Echo. This was definitely us wandering. Market research doesn’t help. If you had gone to a customer in 2013 and said “Would you like a black, always-on cylinder in your kitchen about the size of a Pringles can that you can talk to and ask questions, that also turns on your lights and plays music?” I guarantee you they’d have looked at you strangely and said “No, thank you.”

This comment is reflective of someone who understands the role market research can play for new products as well as its limitations.

We have been saying for years that market research does a poor job of predicting the success of truly breakthrough products. What was the demand for television sets in the 1920’s and 1930’s before there was even content to broadcast or a way to broadcast it? Just a decade ago, did consumers know they wanted a smartphone they would carry around with them all day and constantly monitor? Henry Ford once said that if he had asked customers what they wanted they would have wanted faster horses and not cars.

In 2014, we wrote a post (Writing a Good Questionnaire is Just Like Brain Surgery) that touched on this issue. In short, consumer research works best when the consumer has a clear frame-of-reference from which to draw. New product studies on line extensions or easily understandable and relatable new ideas tend to be accurate. When the new product idea is harder to understand or is outside the consumer’s frame-of-reference, the research isn’t as predictive.

Research can sometimes provide the necessary frame-of-reference. We put a lot of effort into making sure that concept descriptions are understandable. We often go beyond words to do this and produce short videos instead of traditional concept statements. But even then, if the new product being tested is truly revolutionary, the research will probably predict demand inaccurately. The good news is that few new product ideas are actually breakthroughs – they are usually refinements on existing ideas.

Failure to provide a frame-of-reference or realize that one doesn’t exist leads to costly research errors. Because this error is not quantifiable (like a sample error) it gets little attention.

The mistake people are making when reacting to Bezos’ comment is they are viewing it as an indictment of market research in general. It is not. Research still works quite well for most new product forecasting studies. For new products, companies are often investing millions or tens of millions in development, production, and marketing. It usually makes sense to invest in market research to be confident these investments will pay off and to optimize the product.

It is just important to recognize that there are cases where respondents don’t have a good frame-of-reference and the research won’t accurately predict demand. Truly innovative ideas are where this is most likely to happen.

I’ve learned recently that this anti-research mentality pervades the companies in Silicon Valley. Rather than use a traditional marketing approach of identifying a need and then developing a product to fulfill the need, tech firms often concern themselves first with the technology. They develop a technology and then look for a market for it. This is a risky strategy and likely fails more than it succeeds, but the successes, like the Amazon Echo, can be massive.

I own an Amazon Echo. I bought it shortly after it was launched having little idea what it was or what it could do. Even now I am still not quite sure what it is capable of doing. It probably has a lot of potential that I can’t even conceive of. I think it is still the type of product that might not be improved much by market research, even today, when it has been on the market for years.

Will adding a citizenship question to the Census harm the Market Research Industry?

The US Supreme Court appears likely to allow the Department of Commerce to reinstate a citizenship question on the 2020 Census. This is largely viewed as a political controversy at the moment. The inclusion of a citizenship question has proven to dampen response rates among non-citizens, who tend to be people of color. The result will be gains in representation for Republicans at the expense of Democrats (political district lines are redrawn every 10 years as a result of the Census). Federal funding will likely decrease for states with large immigrant populations.

It should be noted that the Census Bureau itself has come out against this change, arguing that it will result in an undercount of about 6.5 million people. Yet the administration has pressed forward and has not committed the funds needed by the Census Bureau to fully research the implications. The concern isn’t just about non-response from non-citizens. In tests done by the Census Bureau, non-citizens are also more likely than citizens to answer this question inaccurately, meaning the resulting data will be flawed.

Clearly this is a hot-button political issue. However, there is not much talk of how this change may affect research. Census data are used to calibrate most research studies in the US, including academic research, social surveys, and consumer market research. Changes to the Census may have profound effects on data quality.

The Census serves as a hidden backbone for most research studies whether researchers or clients realize it or not. Census information helps us make our data representative. In a business climate that is becoming more and more data-driven the implications of an inaccurate Census are potentially dire.

We should be primarily concerned that the Census is accurate, regardless of the political implications. Adding questions that temper response will not help accuracy. Errors in the Census have a tendency to become magnified in research. For example, in new product research it is common to project study data from about a thousand respondents to a universe of millions of potential consumers. Even a small error in the Census numbers can lead businesses to make erroneous investments. These errors create inefficiencies that reverberate throughout the economy. Political concerns aside, US businesses undoubtedly suffer from a flawed Census. Marketing becomes less efficient.
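
To make that magnification concrete, here is a hypothetical sketch (every number below is invented for illustration) of how an undercount in the population figure used for projection flows straight through to a volume forecast:

    # Hypothetical illustration: all figures invented to show how a census
    # undercount propagates into a market projection.
    survey_n = 1_000
    would_buy = 180                        # respondents who say they would buy
    incidence = would_buy / survey_n       # 18% of the target universe

    accurate_universe = 10_000_000         # true number of target consumers
    undercounted_universe = 9_500_000      # universe implied by a flawed census

    print(f"Forecast with accurate count: {incidence * accurate_universe:,.0f} buyers")
    print(f"Forecast with undercount:     {incidence * undercounted_universe:,.0f} buyers")
    # A 5% undercount becomes a 90,000-buyer gap in the forecast.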

All is not lost though. We can make a strong case that there are better, less costly ways to conduct the Census. Methodologists have long suggested that a sampling approach would be more accurate than the current attempt at enumeration. This may never happen for the decennial Census because the Census methodology is encoded in the US Constitution and it might take an amendment to change it.

So, what will happen if this change is made? I suspect that market research firms will switch to using data that come from the Census’ survey programs, such as the American Community Survey (ACS). Researchers will rely less on the actual decennial census. In fact, many research firms already use the ACS rather than the decennial census (and the ACS currently contains the citizenship question).

The Census Bureau will find ways to correct for the resulting error, and to be honest, this may not be too difficult from a methodological standpoint. Business will adjust because there will be economic benefits to learning how to deal with a flawed Census, but in the end, this change will take some time for the research industry to address. Figuring things like this out is what good researchers do. While it is unfortunate that this change looks likely to be made, its implications are likely more consequential politically than they will be for the research field.

Long Live the Focus Group!

Market research has changed over the past two decades. Telephone research has faded away, mail studies are rarely considered, and younger researchers have likely never conducted a central location test in a mall. However, there is an old-school type of research that has largely survived this upheaval:  the traditional, in-person focus group.

There has been extensive technological progress in qualitative research. We can now conduct groups entirely online, in real-time, with participants around the globe. We can conduct bulletin board style online groups that take place over days. Respondents can respond via text or live video, can upload assignments we give them, and can take part in their own homes or workplaces. We can intercept them when they enter a store and gather insights “in the moment.” We even use technology to help make sense of the results, as text analytics has come a long way and is starting to prove its use in market research.

These new, online qualitative approaches are very useful. They save on travel costs, can be done quickly, and are often less expensive than traditional focus groups. But we have found that they are not a substitute for traditional focus groups, at least not in the way that online surveys have substituted for telephone surveys. Instead, online qualitative techniques are new tools that can do new things, but traditional focus groups are still the preferred method for many projects.

There is just no real substitute for the traditional focus group that allows clients to see actual customers interact around their product or issue. In some ways, as our world has become more digital traditional focus groups provide a rare opportunity to see and hear from customers. They are often the closest clients get to actually seeing their customers in a live setting.

I’ve attended hundreds of focus groups. I used to think that the key to a successful focus group was the skill of the moderator followed by a cleverly designed question guide. Clients spend a lot of time on the question guide. But they spend very little time on something that is critical to every group’s success: the proper screening of participants.

Seating the right participants is every bit as important as constructing a good question guide. Yet, screening is given passing attention by researchers and clients. Typically, once we decide to conduct groups a screener is turned around within a day because we need to get moving on the recruitment. In contrast, a discussion guide is usually developed over a full week or two.

Developing an outstanding screener starts by having a clear sense of objectives. What decisions are being made as a result of the project? Who is making them? What is already known? How will the decision path differ based on what we find? I am always surprised that in probably half of our qualitative projects our clients don’t have answers to these questions.

Next, it is important to remind clients that focus groups are qualitative research and we shouldn’t be attempting to gather a “representative” sample. Focus groups happen with a limited number of participants in a handful of cities, and we shouldn’t be trying to project findings to a larger audience. If that is needed, a follow-up quantitative phase is required. Instead, in groups we are trying to delve deeply into motivations, explore ideas, and develop new hypotheses we can test later.

It is a common mistake to try to involve enough participants to make findings “valid.” This matters because we are looking for thoughtful participants, not necessarily “typical” customers. We want folks who will expand our knowledge of a subject and of customers, and who will help us explore topics deeply and develop new lines of inquiry we haven’t considered.

“Representative” participants can be quiet and reserved and not necessarily useful at this phase of research. For this reason, we always use articulation screening questions, which raise the odds that we will get talkative participants who enjoy sharing their opinions.

An important part of the screening process is determining how to segment the groups. It is almost never a good idea to hold all of your sessions with the same audience. We tend to segment on age, potentially gender, and often by the participants’ experience level with the product or issue. Contrasting findings from these groups is often where the key qualitative insights lie.

It is also necessary to over-recruit. Most researchers over-recruit to protect against participants who fail to show up to the sessions. We do it for another reason: we like to have a couple of extra participants in the waiting area. Before the groups start, the moderator spends some time with them. This accomplishes two things. First, the groups are off and running the moment participants enter the focus group room because a rapport with the moderator has been established. Second, spending a few minutes with participants before groups begin allows the moderator to determine in advance which participants are going to be quiet or difficult, and allows us to pay them the incentive and send them home.

Clients tend to insist on group sizes that are too large. I have viewed groups with as many as 12 respondents. Even in a two-hour session, the average participant will be talking for just 10 minutes in this case, and that is only if there are no silences and the moderator never talks! In reality, with 12 participants you will get maybe five minutes out of each one. How is that useful?

Group dynamics are different in smaller groups. We like to target having about six participants. This group size is small enough that all must participate and engage, but large enough to get a diversity of views.  We also prefer to have groups run for 90 minutes or less.

We like to schedule some downtime in between groups. The moderator needs this to recharge (and eat!), but it also gives time for a short debrief and to adjust the discussion guide on the fly. I have observed groups where the moderator is doing back-to-back sessions for six hours, and it isn’t productive. Similarly, it is ideal to have a rest day in between cities to regroup and develop new questions. (Although this is rarely done in practice.)

Clients also need to learn to leave the moderator alone for at least 30 minutes before the first group begins. Moderating is stressful, even for moderators who have led thousands of groups. They need time to review the guide and converse with the participants. Too many times, clients are peppering the moderator with last second changes to the guide and in general are stressing the moderator right before the first session. These discussions need to be held before focus group day.

We’d also caution against conducting too many groups. I remember working on a proposal many years ago when our qualitative director was suggesting we conduct 24 focus groups. She was genuinely angry at me when I asked her “what are we going to learn in that 24th group that we didn’t learn in the first 23?”.

In all candor, in my experience you learn about 80% of what you will learn in the first evening of groups. It is useful to conduct another evening or two to confirm what you have heard. But it is uncommon for a new insight to arise after the first few groups. It is a rare project that needs more than about two cities’ worth of groups.

It is also critical to have the right people from the clients attending the sessions. With the right people present discussions behind the mirror become insightful and can be the most important part of the project. Too often, clients send just one or two people from the research team and the internal decision makers stay home. I have attended groups where the client hasn’t shown up at all and it is just the research supplier who is there. If the session isn’t important enough to send decision makers to attend, it probably isn’t important enough to be doing in the first place.

I have mixed feelings about live streaming sessions. This can be really expensive and watching the groups at home is not the same as being behind the mirror with your colleagues. Live streaming is definitely better than not watching them at all. But I would say about half the time our clients pay for live streaming nobody actually logs in to watch them.

Focus groups are often a lead-in to a quantitative study. We typically enter into the groups with an outline of the quantitative questionnaire at the ready. We listen purposefully at the sessions to determine how we need to refine our questionnaire. This is more effective than waiting for the qualitative to be over before starting the quantitative design. We can usually have the quant questionnaire ready for review before the report for the groups is available because we take this approach.

Finally, it is critical to debrief at the end of each evening. This is often skipped. Everyone is tired, has been sitting in the dark for hours, and has to get back to a hotel and get up early for a flight. But a quick discussion to agree on the key takeaways while they are fresh in mind is very helpful. We try to get clients to agree to these debriefings before the groups are held.

Traditional groups provide more amazing moments and unexpected insights than any other research method. I think this may be why, despite all the new options for qualitative, clients are conducting just as many focus groups as ever.


Visit the Crux Research Website www.cruxresearch.com
