Posts Tagged 'Market Research'

Less useful research questions

Questionnaire “real estate” is limited and valuable. Most surveys fielded today are too long and this causes problems with respondent fatigue and trust. Researchers tend to start the questionnaire design process with good intent and aim to keep survey experiences short and compelling for respondents. However, it is rare to see a questionnaire get shorter as it undergoes revision and review, and many times the result is impossibly long surveys.

One way to guard against this is to be mindful. All questions included should have a clear purpose and tie back to study objectives. Many times, researchers include some questions and options simply out of habit, and not because these questions will add value to the project.

Below are examples of question types that, more often than not, add little to most questionnaires. These questions are common mainly out of habit. There are certainly exceptions when it makes sense to include them, but for the most part we advise against using them unless there is a specific reason to do so.

Marital status

Somewhere along the way, asking a respondent’s marital status became standard on most consumer questionnaires. Across thousands of studies, I can only recall a few times when I have actually used it for anything. It is appropriate to ask when it is relevant – perhaps your client is a jewelry company or in the bridal industry, or maybe you are studying relationships. However, I would nominate marital status as the least used question in survey research history.

Other (specify)

Many multiple-response questions ask a respondent to select all that apply from a list, and then as a final option will have “other.” Clients constantly pressure researchers to leave a space for respondents to type out what this “other” option is. We rarely look at what they type in. I tell clients that if we expect a lot of respondents to select the other option, it probably means we have not done a good job of developing the list. It may also mean we should be asking the question in an open-ended fashion. Even when the space is included, most of the respondents who select other will not type anything into the little box anyway.

Don’t Know Options

We recently composed an entire post about when to include a Don’t Know option on a question. To sum it up, the incoming assumption should be that you will not use a Don’t Know option unless you have an explicit reason to do so. Including Don’t Know as an option can make a data set hard to analyze. However, there are exceptions to this rule, as Don’t Know can be an appropriate choice. That said, it is overused on surveys currently.

Open-Ends

The transition from telephone to online research has completely changed how researchers can ask open-ended questions. In the telephone days, we could pose questions that were very open-ended because we had trained interviewers who could probe for meaningful answers. With online surveys, open-ended questions that are too loose rarely produce useful information. Open-ends need to be specific and targeted. We favor including just a handful of open-ends in each survey and making them a bit less “open-ended” than what has traditionally been asked.

Grid questions with long lists

We have all seen these. These are long lists of items that require a scaled response, perhaps a 5-point agree/disagree scale. The most common abandon point on a survey is the first time a respondent encounters a grid question with a long list. Ideally, these lists are about 4 to 6 items and there are no more than two or three of them on a questionnaire.

We are currently fielding a study that has a list like this with 28 items. There is no way we are getting good information from this question, and we are fatiguing the respondent for the remainder of the survey.

Specifying time frames

Survey research often seeks to find out about a behavior across a specified time frame. For instance, we might want to know if a consumer has used a product in the past day, past week, past month, etc. The issue here is not so much the time frame as it is treating the responses literally. I have seen clients take past-day usage, multiply it by 365, and assume that equates to past-year usage. Technically and mathematically, that might be true, but it isn’t how respondents answer questions.

In reality, respondents can likely answer accurately whether they have done something in the past day. But once the time frames get longer, we are really asking about “ever” usage. It depends a bit on the purchase cycle of the product and its cost, but for most products, asking whether someone has used it in the past month, six months, year, etc. will yield similar responses.

Some researchers work around this by just asking “ever used” and “recently used.” There are times when that works, but we tend to set a reasonable time frame for recent use and go with that, typically within the past week.
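
To see why scaling up a short window rarely works for incidence (the share of people who have used a product), here is a minimal sketch with hypothetical numbers, assuming independent daily behavior. Multiplying can make sense for counting total usage occasions, but the share of people saturates toward “ever” usage:

```python
# Hedged illustration with made-up numbers: model each respondent's
# daily usage as an independent coin flip with probability p.
p_daily = 0.01  # assume 1% of people use the product on any given day

# Naive scaling treats past-day incidence as additive across 365 days.
naive_past_year = p_daily * 365            # 3.65 -> over 100%, impossible

# The share of people who use the product at least once in a year
# saturates instead of growing linearly.
true_past_year = 1 - (1 - p_daily) ** 365  # roughly 0.97

print(naive_past_year)
print(round(true_past_year, 2))
```

Under these toy assumptions, scaled past-day incidence exceeds 100% while the true past-year share simply approaches “ever” usage – one reason responses for long time frames tend to converge.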

Household income

Researchers have asked household income for as long as the survey research field has been around. There are at least three serious problems with it. First, many respondents simply do not know what their household income is. Most households have a “family CFO” who takes the lead on financial issues, and even this person often will not know what the family income is.

Second, the categories chosen affect the response to the income question, indicating just how unstable the measure is. Asking household income in, say, ten categories versus five categories will not produce comparable data. Respondents tend to assume the middle of the range presented is normal and respond using that as a reference point.

Third, and most importantly, household income is a lousy measure of socio-economic status (SES). Many young people have low annual incomes but a wealthy lifestyle because they are still being supported by their parents. Many older people are retired and may have almost non-existent incomes, yet live a wealthy lifestyle off their savings. Household income tends to be a reasonable measure of SES only for respondents aged about 30 to 60.

There are better measures of SES. Education level can work, and a particularly good question is to ask the respondent about their mother’s level of education, which has been shown to correlate strongly with SES. We also ask about their attitudes towards their income – whether they have all the money they need, just enough, or if they struggle to meet basic expenses.

Attention spans are getting shorter, and as more and more surveys are completed on mobile devices, there are plenty of distractions as respondents answer questionnaires. Engage them, get their attention, and keep the questionnaire short. There may be no such thing as a dumb question, but there are certainly survey questions that do not yield useful information.

Should you include a “Don’t Know” option on your survey question?

Questionnaire writers construct a bridge between client objectives and a line of questioning that a respondent can understand. This is an underappreciated skill.

The best questionnaire writers empathize with respondents and think deeply about tasks respondents are asked to perform. We want to strike a balance between the level of cognitive effort required and a need to efficiently gather large amounts of data. If the cognitive effort required is too low, the data captured is not of high quality. If it is too high, respondents get fatigued and stop attending to our questions.

One of the most common decisions researchers have to make is whether or not to allow for a Don’t Know (DK) option on a question. This is often a difficult choice, and the correct answer on whether to include a DK option might be the worst possible answer: “It depends.”

Researchers have genuine disagreements about the value of a DK option. I lean strongly towards not using DKs unless there is a clear and considered reason for doing so.

Clients pay us to get answers from respondents and to find out what they know, not what they don’t know. Pragmatically, whenever you are considering adding a DK option, your first inclination should be that you perhaps have not designed the question well. If a large proportion of your respondent base will potentially choose “don’t know,” odds are high that you are not asking a good question to begin with. But there are exceptions.

If you get in a situation where you are not sure if you should include a DK option, the right thing to do is to think broadly and reconsider your goal: why are you asking the question in the first place? Here is an example which shows how the DK decision can actually be more complicated than it first appears.

We recently had a client that wanted us to ask a question similar to this: “Think about the last soft drink you consumed. Did this soft drink have any artificial ingredients?”

Our quandary was whether we should just ask this as a Yes/No question or to also give the respondent a DK option. There was some discussion back and forth, as we initially favored not including DK, but our client wanted it.

Then it dawned on us that whether or not to include DK depended on what the client wanted to get out of the question. On one hand, the client might want to truly understand if the last soft drink consumed had any artificial ingredients in it, which is ostensibly what the question asks. If this was the goal, we felt it was necessary to better educate the respondent on what an “artificial ingredient” was so they could provide an informed answer and so all respondents would be working from a common definition. Or, alternatively, we could ask for the exact brand and type of soft drink they consumed and then on the back-end code which ones have artificial ingredients and which do not, and thus get a good estimate for the client.

The other option was to realize that respondents might have their own definitions of “artificial ingredients” that may or may not match our client’s definition. Or, they may have no clue what is artificial and what is not.

In the end, we decided to use the DK option in this case because understanding how many people are ignorant of artificial ingredients fit well with our objectives. When we pressed the client, we learned that they wanted to document this ambiguity. If a third of consumers don’t know whether or not their soft drinks have artificial ingredients, this would be useful information for our client to have.

This is a good example of how a seemingly simple question can have a lot of thinking behind it and how important it is to contextualize this reasoning when reporting results. In this case, we are not really measuring whether people are drinking soft drinks with artificial ingredients. We are measuring what they think they are doing, which is not the same thing and likely more relevant from a marketing point of view.

There are other times when a DK option makes sense to include. For instance, some researchers will conflate the lack of an opinion (a DK response) with a neutral opinion, and these are not the same thing. For example, we could ask “how would you rate the job Joe Biden is doing as President?” Someone who answers in the middle of the response scale likely has a considered, neutral opinion of Joe Biden. Someone answering DK has not considered the issue and should not be assumed to have a neutral opinion of the president. In cases like this, it can make sense to include DK.

However, there are probably more times when including a DK option is a result of lazy questionnaire design than any deep thought regarding objectives. In practice, I have found that it tends to be clients who are inexperienced in market research that press hardest to include DK options.

There are at least a couple of serious problems with including DK options on questionnaires. The first is “satisficing” – the tendency of respondents to put little effort into responding and instead choose the option that requires the least cognitive effort. The DK option encourages satisficing. A DK option also allows respondents to disengage from the survey and can lead to inattention on subsequent items.

DK responses also create difficulties when analyzing data. We like to look at questions on a common base of respondents, and that becomes difficult when respondents choose DK on some questions but not others. Including DK makes it harder to compare results across questions. DK options also limit the ability to use multivariate statistics, as a DK response does not fit neatly on a scale.
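
To make the common-base problem concrete, here is a minimal sketch in plain Python with made-up responses (hypothetical data, not from any real study), showing how scattered DK answers shrink the set of respondents who answered every question:

```python
# Hypothetical responses to two 5-point scale questions; "DK" marks
# a Don't Know response (illustrative data only).
responses = [
    {"Q1": 4,    "Q2": 5},
    {"Q1": "DK", "Q2": 3},
    {"Q1": 2,    "Q2": "DK"},
    {"Q1": 5,    "Q2": 4},
    {"Q1": 3,    "Q2": 2},
]

def usable(rows, question):
    """Respondents who gave a scaled (non-DK) answer to one question."""
    return [r for r in rows if r[question] != "DK"]

def common_base(rows, questions):
    """Respondents who gave a scaled answer to every question."""
    return [r for r in rows if all(r[q] != "DK" for q in questions)]

# Each question individually loses only one respondent...
print(len(usable(responses, "Q1")), len(usable(responses, "Q2")))  # 4 4

# ...but the base for comparing Q1 against Q2 shrinks further,
# because different people chose DK on different questions.
print(len(common_base(responses, ("Q1", "Q2"))))  # 3
```

With more questions and more realistic DK rates, the common base erodes quickly, which is exactly why analysis and cross-question comparisons become harder.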

Critics would say that researchers should not force respondents to express an opinion they do not have and therefore should provide DK options. I would counter that if you expect a substantial number of people to not have an opinion, odds are high you should reframe the question and ask them about something they do know about. It is usually (but not always) the case that we want to find out more about what people know than what they don’t know.

“Don’t know” can be a plausible response. But more often than not, even when it is plausible, if we feel a lot of people will choose it, we should reconsider why we are asking the question. Yes, we don’t want to force people to express an opinion they don’t have. But rather than include DK, it is better to rewrite the question to be more inclusive of everybody.

As an extreme example, here is a scenario that shows how a DK can be designed out of a question:

We might start with a question the client provides us: “How many minutes does your child spend doing homework on a typical night?” For this question, it wouldn’t take much pretesting to realize that many parents don’t really know the answer to this, so our initial reaction might be to include a DK option. If we don’t, parents may give an uninformed answer.

However, upon further thought, we should realize that we may not really care about how many minutes the child spends on homework and we don’t really need to know whether the parent knows this precisely or not. Thinking even deeper, some kids are much more efficient in their homework time than others, so measuring quantity isn’t really what we want at all. What we really want to know is, is the child’s homework level appropriate and effective from the parent’s perspective?

This probing may lead us down a road to consider better questions, such as “in your opinion, does your child have too much, too little, or about the right amount of homework?” or “does the time your child spends on homework help enhance his/her understanding of the material?” This is another case when thinking more about why we are asking the question tends to result in better questions being posed.

This sort of scenario happens a lot when we start out thinking we want to ask about a behavior, when what we really want to do is ask about an attitude.

The academic research on this topic is fairly inconclusive and sometimes contradictory. I think that is because academic researchers don’t consider the most basic question, which is whether or not including DK will better serve the client’s needs. There are times that understanding that respondents don’t know is useful. But, in my experience, more often than not if a lot of respondents choose DK it means that the question wasn’t designed well. 

The two (or three) types of research projects every organization needs

Every once in a while I’ll get a call from a former client or colleague who has started a new market research job. They will be in their first role as a research director or VP with a client-side organization. As they are now in a position to set their organization’s research agenda, they ask for my thoughts on how to structure their research spending. I have received calls like this about a dozen times over the years.

I advise these researchers that two types of research stand above all others, and that their initial focus should be getting them set up correctly. The first is tracking their product volume. Most organizations know how many products they are producing and shipping, but it is surprising how many lose track of where their products go from there. To do a good job, marketers must know how their products move through the distribution system all the way to the end consumer. So, that becomes my first recommendation: know precisely who is buying and using your products at every step along the way, in as much detail as possible.

The second type of research I suggest is customer satisfaction research. Understanding how customers use products and measuring their satisfaction is critical. Better yet, the customer satisfaction measuring system should be prescriptive and indicate what is driving satisfaction and what is detracting from it.

Most marketing decisions can be made if these two research systems are well designed. If marketers have a handle on precisely who is using their products and what is enhancing and detracting from their satisfaction, most of them are smart enough to make solid decisions.

When pressed for what the third type of research should be, I usually say that qualitative research is important. I’d put in place a regular program of in-person focus groups or usability projects and compel key decision makers to attend them. I once consulted for a consumer packaged goods client and discovered that not a single person in their marketing department had spoken directly with a consumer of their products in the past year. There is sometimes too much of a gulf between the corporate office and the real world, and qualitative research can help close that gap.

Only when these three things are in place and being well-utilized would I recommend that we move forward with other types of research projects. Competitive studies, new product forecasting, advertising testing, etc. probably take up the lion’s share of most research budgets currently. They are important, but in my view should only be pursued after these first three types of research are fully implemented.

Many research departments get distracted by conducting too many projects of too many types. A focus is important. When decision makers have the basic numbers they need and are in tune with their customer base, they are in a good position to succeed, and it is market research’s role to provide this framework.

Wow! Market research presentations have changed.

I recently led an end-of-project presentation over Zoom. During it, I couldn’t help but think how market research presentations have changed over the years. There was no single event or time period that changed the nature of research presentations, but if you teleported a researcher from the 1990s to a modern presentation they would feel a bit uncomfortable.

I have been in hundreds of market research presentations — some led by me, some led by others, and I’ve racked up quite a few air miles getting to them. In many ways, today’s presentations are more effective than those in the past. In some other ways, quality has been lost. Below is a summary of some key differences.

Today’s presentations are:

  • Far more likely to be conducted remotely over video or audio. COVID-19 disruptions accelerated this trend, which was under way well before 2020. This has made presentations easier to schedule because not everyone has to be available in the office. It allows clients and suppliers to take part from their homes, hotels, and even their vehicles. It seems clear that a lasting effect of the pandemic will be that research presentations are conducted via Zoom by default. There are pluses and minuses to this. For the first time in 30 years, I find myself working with clients whom I have never met in person.
  • Much more likely to be bringing in data and perspectives from outside the immediate project. Research projects and presentations tended to be standalone events in the past, concentrating solely on the area of inquiry the study addressed. Today’s presentations are often integrated into a wider reaching strategic discussion that goes beyond the questions the research addresses.
  • More interactive. In yesteryear, the presentation typically consisted of the supplier running through the project results and implications for 45 minutes, followed by a period of Q&A. It was rare to be interrupted before the Q&A portion of the meeting. Today’s presentations are often not presentations at all. As suppliers, we feel more like emcees leading a discussion than experts presenting findings.
  • More inclusive of upper management. We used to present almost exclusively to researchers and mid-level marketers. Now we tend to see a lot more marketing VPs and CMOs, strategy officers, and even the CEO on occasion. It used to be rare for our reports to make it to the CEO’s desk; now, I’d say most of the time they do. This is indicative of the increasing role data and research play in business today.
  • Far more likely to integrate the client’s perspective. In the past, internal research staff rarely tried to change or influence our reports and presentations, preferring to keep some distance and then separately add their perspective. Clients have become much more active in reviewing and revising supplier reports and presentations.

Presentations from the 1990s were:

  • A more thorough presentation of the findings of the study. They told a richer, more nuanced story. They focused a lot more on storytelling and building a case for the recommendations. Today’s presentations often feel like a race to get to the conclusions before you get interrupted.
  • More confrontational. Being challenged on the study method, data quality, and interpretations was more commonplace a few decades ago. I felt a much greater need to prepare and rehearse than I do today because I am not as in control of the flow of the meetings as I was previously. In the past I felt like I had to know the data in great detail, and it was difficult for me to present a project if I wasn’t the lead analyst on it. Today, that is much less of a concern.
  • More strategic. This refers more to the content of the studies than the presentation itself. Since far fewer studies were being done, the ones that were tended to be informing high consequence decisions. While plenty of strategic studies are still conducted, there are so many studies being done today that many of them are informing smaller, low-consequence, tactical decisions.
  • More relaxed. Timelines were more relaxed and as a result research projects were planned well in advance and the projects fed into a wider strategic process. That still happens, but a lot of today’s projects are completed quickly (often too quickly) because information is needed to make a decision that wasn’t even on the radar a few weeks prior.
  • More of a “show.” In the past we rehearsed more, were concerned about the graphical design of the slides, and worried about the layout of the room. Today, there is rarely time for that.
  • More social. Traveling in for a presentation meant spending time beforehand with clients, touring offices, and almost always going to lunch or dinner afterward. Even before the COVID/Zoom era, more recent presentations tended to be “in and out” affairs – suppliers greet the clients, give the presentation, and leave. While there are many pluses to this, some (I’d actually say most) of the best researchers I know are introverts who were never comfortable with this forced socialization. Those types of people are going to thrive in the new presentation environment.

Client-side researchers planned much further ahead in the past. Annually, they would go through a planning phase where all the projects for the year were budgeted and placed on a timeline. The research department would then execute against that plan. More recently, our clients seem not to know what projects they will be working on in a few weeks’ time – because many of today’s projects take just days from conception to execution.

I have also noticed that while clients are commissioning more projects they seem to be using fewer suppliers than in the past. I think this is because studies are being done so quickly they don’t have time to manage more than a few supplier relationships. Bids aren’t as competitive and are more likely to be sole-sourced.

Clients are thus developing closer professional relationships with their suppliers. Suppliers are closer partners with clients than ever before, but with this comes a caution. It becomes easy to lose third-party objectivity when we get too close to the people and issues at hand and when clients have too heavy a hand in the report process. In this sense, I prefer the old days, when we provided a perspective and our clients would then add their POV. Now, we often meld the two into one presentation, and at times we lose the value that comes from a back-and-forth disagreement over what the findings mean for a business.

If I teleported my 1990s self to today, I would be amazed at how quickly projects go from conception to final presentation – in about one-third the time it used to take. There are many downsides to going too fast, but clients rarely focus on them; they seem to prefer getting something 90% right tomorrow over waiting for a perfect project.

There is even a new category of market research called “agile research” that seeks to provide real-time data. I am sure it is a category that will grow, but those employing it need to keep in mind that providing data faster than managers can act on it can actually be a disservice to the client. It is an irony of our field that more data and continuous data can actually slow down decision making.  

Today’s presentations are less stressful, more inclusive, and more strategic. The downside is there are probably too many of them – clients are conducting too many projects on minor issues, they don’t always learn thoroughly from one study before moving onto the next, and researchers are sometimes being rewarded more for getting things done than for providing insight into the business.

I have more LinkedIn contacts named “Steve” than contacts who are Black

There have been increasing calls for inclusiveness and fairness across America and the world. The issues presented by the MeToo and Black Lives Matter movements affect all sectors of society and the business world. Market research is no exception. Recent events have spurred me to reflect on my experiences and to think about whether the market research field is diverse enough and ready to make meaningful changes. Does market research have structural, systemic barriers preventing women and minorities from succeeding?

My recollections are anecdotal – just one person’s experiences when working in market research for more than 30 years. What follows isn’t based on an industry study or necessarily representative of all researchers’ experiences.

Women in Market Research

When it comes to gender equity in the market research field, my gut reaction is to think that research is a good field for women and one that I would recommend. I reviewed Crux Research’s client base and client contacts. In 15 years, we have worked with about 150 individual research clients across 70 organizations. 110 (73%) of those 150 clients are female. This dovetails with my recollection of my time at a major research supplier. Most of my direct clients there were women.

Crux’s client base is largely mid-career professionals – I’d say our typical client is a research manager or director in his/her 30’s or 40’s. I’d conclude that in my experience, women are well represented at this level.

But, when I look through our list of 70 clients and catalog who the “top” research manager is at each organization, I find that 42 (60%) of the 70 research VPs and directors are male. And, when I catalog who these research VPs report to, typically a CMO, I find that 60 (86%) of the 70 individuals are male. To recap: among our client base, 73% of the research managers are female, 40% of the research VPs are female, and 14% of the CMOs are female.

This meshes with my experience working at a large supplier. While I was there, women were well represented in our research director and VP roles, but there were almost no women in the C-suite or among those who report to them. There seems to be a firm glass ceiling in place at market research suppliers and clients alike.

Minorities in Market Research

My experience paints a bleaker picture when I think of ethnic minority representation in market research. Of our 150 individual research clients, just 25 (17%) have been non-white and just 3 (2%) have been Black. Moving up the corporate ladder, in only 5 (7%) of our 70 client organizations is the top researcher non-white, and in only 4 (6%) of the 70 companies is the CMO non-white; none of the CMOs are Black. Undoubtedly, we have a long way to go.

A lack of staff diversity in research suppliers and market research corporate staffs is a problem worth resolving for a very important reason: market researchers and pollsters are the folks providing the information to the rest of the world on diversity issues. Our field can’t possibly provide an appropriate perspective to decision makers if we aren’t more diverse. Our lack of diversity affects the conversation because we provide the data the conversation is based upon.  

Non-profits seem to be a notable exception when it comes to ethnic diversity. I have had large non-profit clients that have wonderfully diverse employee bases, to the point where it is not uncommon to attend meetings and Zoom calls where I am the only white male in the session. These non-profits make an effort to recruit and train diverse staffs and their work benefits greatly from the diversity of perspectives this brings. There is a palpable openness of ideas in these organizations. Research clients and suppliers would do well to learn from their example.  

I can’t think of explicit structural barriers that limit the progression of minorities through the market research ranks, but that just illustrates the problem: the barriers aren’t explicit; they are subtle and implicit. That is what makes them so intractable.

We have to make a commitment to developing more diverse employee bases. I worked directly for the CEO of a major supplier for a number of years. One thing I respected about him was that he was confident enough to hire people who were smarter than he was, who didn’t think like him, or who came from an entirely different background. That made him unique. In my experience, most hiring managers unintentionally hire “mini-me’s” – younger variants of themselves whom they naturally like in a job interview. If the hiring managers are mostly white males and they are predisposed to hire “mini-me’s,” over time this perpetuates privilege and is an example of an unintentional, but nonetheless structural, bias that limits the progress of women and minorities.

If you don’t think managers tend to hire in their own image, consider a recent Economist article that states, “In 2018 there were more men called Steve than there were women among the chief executives of FTSE 100 companies.” I wouldn’t be surprised if there are more market researchers in the US named Steve than there are Black market researchers.

To further illustrate that we naturally seek people like ourselves, I reviewed my own LinkedIn contact list. This list is made up of former colleagues, clients, people I have met along the way, etc. It is a good representation of the professional circle I exist within. It turns out that my LinkedIn contact list is 60% female and 25% non-white. But just 3% of my LinkedIn contacts are Black. And, yes, I have more LinkedIn contacts named Steve than I have contacts who are Black.

This is a problem because as researchers we need to do our best to cast aside our biases and provide an objective analysis of the data we collect. We cannot do that well if we do not have a diverse array of people working on our projects.

Many managers will tell you that they would like to hire a minority for a position but they just don’t get quality candidates applying. This is not taking ownership of the issue. What are you doing to generate candidates in the first place?

It is all too easy to point the finger backwards at colleges and universities and say that we aren’t getting enough qualified candidates of color. And that might be true. MBA programs continue to enroll many more men than women and many more whites than non-whites. They should be taken to task for this. As employers we also need to be making more demands on them to recruit women and minorities to their programs in the first place.

I like that many research firms have come out with supportive statements and financial contributions to relevant causes recently. This is just a first step and needs to be the catalyst to more long-lasting cultural changes in organizations.

We need to share best practices, and our industry associations need to step up and lead this process. Let’s establish relationships with HBCUs and other institutions to train the next generation of black researchers.

The need to be diverse is also important in the studies we conduct. We need to call more attention to similarities and differences in our analyses – and sample enough minorities in the first place so that we can do this. Most researchers do this already when we have a reason to believe before we launch the study that there might be important differences by race/ethnicity. However, we need to do this more as a matter of course, and become more attuned to highlighting the nuances in our data sets that are driven by race.

Our sample suppliers need to do a better job of recruiting minorities to our studies, and of ensuring that the minorities we sample are representative of a wider population. As their clients, we need to make more demands about the quality of the minority samples we seek.

We need an advocacy group for minorities in market research. There is an excellent group, Women in Research (https://www.womeninresearch.org/), advocating for women. We need an analogous organization for minorities.

Since I am in research, I naturally think that measurement is key to the solution. I’ve long thought that organizations only change what they can measure. Does your organization’s management team have a formal reporting process that informs them of the diversity of their staff, of their new hires, of the candidates they bring in for interviews? If they do not, your organization is not poised to fix the problem. If your head of HR cannot readily tell you what proportion of your staff is made up of minorities, your firm is likely not paying enough attention.

Researchers will need to realize that their organizations will become better and more profitable when they recruit and develop a more diverse employee base. Even though it is the right thing to do, we need to view resolving these issues not solely as altruism. It is in our own self-interest to work on this problem. It is truly the case that if we aren’t part of the solution, we are likely part of the problem. And again, because we are the ones who inform everyone else about public opinion on these issues, we need to lead the way.

My belief is that this issue will be resolved by Millennials once they reach an age when they are more senior in organizations. Millennials are a generation that is intolerant of unfairness of this sort and notices the subtle biases that add up. They are the most diverse generation in US history. The oldest Millennials are currently in their mid-30s. In 10-20 years’ time they will be in powerful positions in business, non-profits, education, and government.

Optimistically, I believe Millennials will make a big difference. Pessimistically, I wonder if real change will happen before they are the ones managing suppliers and clients, as thus far the older generations have not shown that they are up to the task.

How COVID-19 may change Market Research

Business life is changing as COVID-19 spreads in the US and the world. In the market research and insights field there will be both short-term and long-term effects. It is important that clients and suppliers begin preparing for them.

This has been a challenging post to write. First, in the context of what many people are going through in their personal and business lives as a result of this disruption, writing about what might happen to one small sector of the business world can come across as uncaring and tone-deaf, which is not the intention. Second, this is a quickly changing situation and this post has been rewritten a number of times in the past week. I have a feeling it may not age well.

Nonetheless, market research will be highly impacted by this situation. Below are some things we think will likely happen to the market research industry.

  • An upcoming recession will hit the MR industry hard. Market research is not an investment that typically pays off quickly. Companies that are forced to pare back will cut their research spending and likely their staffs.
  • Cuts will affect clients more than suppliers. In previous recessions, clients have cut MR staff and outsourced work to suppliers. This is an opportunity for suppliers that know their clients’ businesses well and can step up to help.
  • Unlike in many other industries, it is the large suppliers that are most at risk of losing work. Publicly held research suppliers will be under even more intense pressure from their investors than usual. There will most certainly be cost cutting at these firms, and if the concerns over the virus persist, it will lead to layoffs.
  • The smallest suppliers could face an existential risk. Many independent contractors and small firms are dependent on one or two clients for the bulk of their revenue. If those clients are in highly affected sectors, these small suppliers will be at risk of going out of business.
  • Smallish to mid-sized suppliers may emerge stronger. Clients are going to be under cost pressures due to a receding economy, and smaller research suppliers tend to be less expensive. Smaller research firms did well post-9/11 and during the recession of 2008-09 because clients moved work to them from higher-priced larger firms. Smaller research firms would be wise to build tight relationships so that when the storm over the virus abates, they will have won their clients’ trust for future projects.
  • New small firms will emerge as larger firms cut staff and create refugees who will launch new companies.

Those are all items that might pertain to any sort of sudden business downturn. There are also some things that we think will happen that are specific to the COVID-19 situation:

  • Market research conferences will never be the same. Conferences are going to have difficulty drawing speakers and attendees. Down the line, conferences will be smaller and more targeted and there will be more virtual conferences and training sessions scheduled. At a minimum, companies will send fewer people to research conferences.
  • This will greatly affect MR trade associations as these conferences are important revenue sources for them. They will rethink their missions and revenue models, and will become less dependent on their signature events. The associations will have more frequent, smaller, more targeted online events. The days of the large, comprehensive research conference may be over.
  • Business travel will not return to its previous level. There will be fewer in-person meetings between clients and suppliers and those that are held will have fewer participants. Video conferencing will become an even more important way to reach clients.
  • Clients and suppliers will allow much more “work from home.” It may become the norm that employees are only expected to be in the office for key meetings. The situation with COVID-19 will give companies that don’t have much experience allowing employees to work from home the opportunity to see the value in it. When the virus is under control, they will embrace telecommuting. We will see this crisis kick-start an already existing movement towards allowing more employees to work from home. The amount of office space needed will shrink.
  • Research companies will review and revise their sick-leave policies and will face pressure to make them more generous.
  • Companies that did the right thing during the crisis will be rewarded with employee loyalty. Employees will become more attached and appreciative of suppliers that showed flexibility, did what they could to maintain payroll, and expressed genuine concerns for their employees.

Probably the biggest change we will see in market research projects is to qualitative research.

  • While there will always be great value in traditional, in-person focus groups, the situation around COVID-19 is going to cause online qualitative to become the standard approach. We are at a time when the technologies available for online qualitative are well-developed, yet clients and suppliers have clung to traditional methods. To date, the technology has been ahead of the demand. Companies will be forced by travel restrictions to embrace online methods, and this will come at the expense of traditional groups. This is an excellent time to be in the online qualitative technology business. It is not such a great time to be in the focus group facility management business.
  • Independent moderators who work exclusively with traditional groups are going to be in trouble, and not just in the short term. Many of these individuals will retire or leave research for other work. Others will necessarily adapt to online methods. Of course, there will continue to be independent moderators, but we are predicting the demand for in-person groups will be permanently affected, and this portion of the industry will significantly shrink.
  • There is a risk that by not commissioning as much in-person qualitative, marketers may become further removed from direct human interaction with their customer base. This is a very real concern. We wouldn’t be in market research if we didn’t have an affinity for data and algorithms, but qualitative research is what keeps all of our efforts grounded. I’d caution clients to think carefully before removing all in-person interaction from your research plans.

What will happen to quantitative research? In the short run, most studies will continue. Respondents are home, have free time, and thus far have shown they are willing to take part in studies. Some projects, typically in highly affected industries like travel and entertainment, are being postponed or canceled. All current data sets need to be viewed with a careful eye, as the tumult around the virus can affect results. For instance, we conduct a lot of research with young respondents, and their parents are now likely nearby when they are taking our surveys, which can influence our findings on some subjects.

Particular care needs to be taken with ongoing tracking studies. It makes sense for many trackers to add questions to gauge how the situation has affected the brand in question.

But in the longer term, we don’t expect much change in quantitative research methods as a direct result of this situation. If anything, there will be a greater need to understand consumers.

Tough times for sure. It has been heartening to see how our industry has reacted. Research panel and technology providers have reached out to help keep projects afloat. We’ve had subcontractors tell us we can delay payments if we need to. Calls with clients have become more “human” as we hear their kids and pets in the background and see the stresses they are facing. Respondents have continued to fill out our surveys.

There is a lot of uncertainty right now. At its core, market research is a way to reduce uncertainty for decision makers by making the future more predictable, so we are needed now more than ever. Research will adapt as it always does, and I believe in the long-run it may become even more valued as a result of this crisis.

Truth Initiative wins Ogilvy for Opioid Campaign

Truth Initiative has won two 2019 Ogilvy awards for its campaign against opioid misuse.

The ARF David Ogilvy Awards are the only awards that honor the research and analytics insights behind the most successful advertising campaigns. Crux Research, along with our research partners at CommSight, provided the research services for this campaign.

A case study of the campaign can be found here.

You can view spots from the campaign here and here.

We are very proud to have provided Truth Initiative with research support for this important campaign.

Jeff Bezos is right about market research

In his annual shareholder letter, Amazon’s Jeff Bezos recently stated that market research isn’t helpful. That created some backlash among researchers, who reacted defensively to the comment.

For context, below is the text of Bezos’ comment:

No customer was asking for Echo. This was definitely us wandering. Market research doesn’t help. If you had gone to a customer in 2013 and said “Would you like a black, always-on cylinder in your kitchen about the size of a Pringles can that you can talk to and ask questions, that also turns on your lights and plays music?” I guarantee you they’d have looked at you strangely and said “No, thank you.”

This comment is reflective of someone who understands the role market research can play for new products as well as its limitations.

We have been saying for years that market research does a poor job of predicting the success of truly breakthrough products. What was the demand for television sets in the 1920s and 1930s, before there was even content to broadcast or a way to broadcast it? Just a decade ago, did consumers know they wanted a smartphone they would carry around with them all day and constantly monitor? Henry Ford once said that if he had asked customers what they wanted, they would have asked for faster horses, not cars.

In 2014, we wrote a post (Writing a Good Questionnaire is Just Like Brain Surgery) that touched on this issue. In short, consumer research works best when the consumer has a clear frame-of-reference from which to draw. New product studies on line extensions or easily understandable and relatable new ideas tend to be accurate. When the new product idea is harder to understand or is outside the consumer’s frame-of-reference, the research isn’t as predictive.

Research can sometimes provide the necessary frame-of-reference. We put a lot of effort into making sure that concept descriptions are understandable. We often go beyond words to do this, producing short videos instead of traditional concept statements. But even then, if the new product being tested is truly revolutionary, the research will probably predict demand inaccurately. The good news is that few new product ideas are actually breakthroughs – they are usually refinements of existing ideas.

Failure to provide a frame-of-reference, or to realize that one doesn’t exist, leads to costly research errors. Because this error is not quantifiable in the way sampling error is, it gets little attention.
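The contrast is worth making concrete: sampling error is the one error researchers can always put a number on, which may be exactly why it gets the attention. A minimal sketch of the standard margin-of-error calculation for a sample proportion (the percentages and sample size below are illustrative, not from any real study):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p with sample size n.

    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: 40% purchase intent among 400 respondents
moe = margin_of_error(0.40, 400)
print(f"+/- {moe:.1%}")  # roughly +/- 4.8 percentage points
```

No comparable formula exists for the error introduced when respondents lack a frame-of-reference for the concept they are rating, which is the author's point: the unquantifiable error can dwarf the quantifiable one.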

The mistake people are making when reacting to Bezos’ comment is they are viewing it as an indictment of market research in general. It is not. Research still works quite well for most new product forecasting studies. For new products, companies are often investing millions or tens of millions in development, production, and marketing. It usually makes sense to invest in market research to be confident these investments will pay off and to optimize the product.

It is just important to recognize that there are cases where respondents don’t have a good frame-of-reference and the research won’t accurately predict demand. Truly innovative ideas are where this is most likely to happen.

I’ve learned recently that this anti-research mentality pervades Silicon Valley companies. Rather than use a traditional marketing approach of identifying a need and then developing a product to fulfill it, tech firms often concern themselves first with the technology. They develop a technology and then look for a market for it. This is a risky strategy and likely fails more often than it succeeds, but the successes, like the Amazon Echo, can be massive.

I own an Amazon Echo. I bought it shortly after it was launched having little idea what it was or what it could do. Even now I am still not quite sure what it is capable of doing. It probably has a lot of potential that I can’t even conceive of. I think it is still the type of product that might not be improved much by market research, even today, when it has been on the market for years.


Visit the Crux Research Website www.cruxresearch.com
