Archive for the 'Marketing' Category

Which quality control questions should you use in your surveys?

While it is no secret that the quality of market research data has declined, how to address poor data quality is rarely discussed among clients and suppliers. When I started in market research more than 30 years ago, telephone response rates were about 60%. Six in 10 people contacted for a market research study would choose to cooperate and take our polls. Currently, telephone response rates are under 5%. If we are lucky, 1 in 20 people will take part. Online research is no better, as even from verified customer lists response rates are commonly under 10% and even the best research panels can have response rates under 5%.

Even worse, once someone does respond, a researcher has to guard against “bogus” interviews that come from scripts and bots, as well as individuals who are cheating on the survey to claim the incentives offered. Poor-quality data is clearly on the rise and is an existential threat to the market research industry that is not being taken seriously enough.

Maximizing response requires a broad approach, with tactics deployed throughout the process. One important step is to cleanse each project of poor-quality respondents. Another hidden secret in market research is that researchers routinely have to remove anywhere from 10% to 50% of respondents from their database due to poor quality.

Unfortunately, there is no industry-standard way of identifying poor-quality respondents; every supplier sets its own policies. This is likely because there is considerable variability in how respondents are sourced for studies, because a one-size-fits-all approach may not be possible, and because some quality checks depend on the specific topic of the study. Researchers are largely left to fend for themselves when devising a process for removing poor-quality respondents from their data.

One of the most important ways to guard against poor quality respondents is to design a compelling questionnaire to begin with. Respondents will attend to a short, relevant survey. Unfortunately, we rarely provide them with this experience.

We have been researching this issue recently in an effort to come up with a workable process for our projects. Below, we share our thoughts. The market research industry needs to work together on this issue, because when one of us removes a bad respondent from a database it helps the next firm with its future studies.

There is a practical concern for most studies: we rarely have room for more than a handful of quality control questions. In addition to speeder and straight-line checks, studies tend to have room for about four or five of them. With the exception of “severe speeders,” described below, respondents are automatically removed only if they fail three or more of the checks. We call this the “three strikes and you’re out” rule. If anything, it is probably too conservative, but we would rather err on the side of retaining some poor-quality respondents than inadvertently removing good-quality ones.
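For concreteness, here is a minimal sketch of how this aggregation rule might be implemented, assuming each check has already been scored as a per-respondent flag. The field names are illustrative, not from any actual system.

```python
# Illustrative sketch of the "three strikes and you're out" rule described above.
# Assumes each quality check has already produced a True/False flag per respondent.

def should_remove(respondent: dict, max_flags: int = 3) -> bool:
    """Return True if a respondent should be dropped from the data set."""
    # Severe speeders are removed automatically, regardless of other flags.
    if respondent.get("severe_speeder", False):
        return True
    # Otherwise, remove only when three or more checks were failed.
    flag_names = ["speeder", "straight_liner", "inconsistent_age",
                  "inconsistent_attitude", "low_incidence_fail", "open_end_fail"]
    strikes = sum(1 for name in flag_names if respondent.get(name, False))
    return strikes >= max_flags

# Example: two strikes is not enough to remove this respondent
print(should_remove({"speeder": True, "straight_liner": True}))  # False
```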

When possible, we favor checks that can be done programmatically, without human intervention, as that keeps fielding and quota management more efficient. To the degree possible, all quality check questions should have a base of “all respondents” and not be asked of subgroups.

Speeder Checks

We aim to set up two criteria: “severe” speeders are those who complete the survey in less than one-third of the median time; these respondents are automatically tossed. “Speeders” are those who take between one-third and one-half of the median time; these respondents are flagged.

We also consider setting up timers within the survey – for example, we may place timers on a particularly long grid question or a question that requires substantial reading on the part of the respondent. Note that when establishing speeder checks it is important to use the median length as a benchmark and not the mean. In online surveys, some respondents will start a survey and then get distracted for a few hours and come back to it, and this really skews the average survey length. Using the median gets around that.
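A minimal sketch of how these speeder rules might be computed, assuming we have each respondent’s completion time in seconds (names and data are made up; the thresholds follow the description above):

```python
# Illustrative speeder check: classify respondents against the median completion time.
# The median is used instead of the mean so that respondents who pause for hours
# do not distort the benchmark.
from statistics import median

def speeder_flags(durations_sec: dict) -> dict:
    """Classify each respondent as 'severe', 'speeder', or 'ok'."""
    benchmark = median(durations_sec.values())
    flags = {}
    for rid, secs in durations_sec.items():
        if secs < benchmark / 3:      # under one-third of the median: auto-remove
            flags[rid] = "severe"
        elif secs < benchmark / 2:    # between one-third and one-half: flag
            flags[rid] = "speeder"
        else:
            flags[rid] = "ok"
    return flags

# Example with made-up completion times (seconds)
print(speeder_flags({"r1": 900, "r2": 250, "r3": 400, "r4": 1000, "r5": 950}))
# {'r1': 'ok', 'r2': 'severe', 'r3': 'speeder', 'r4': 'ok', 'r5': 'ok'}
```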

Straight Line Checks

Hopefully, we have designed our study well and do not have long grid-type questions. However, more often than not these types of questions find their way into questionnaires. For grids with more than about six items, we place a straight-lining check – if a respondent chooses the same response for all items in the grid, they are flagged.
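A minimal sketch of a straight-lining flag, assuming a grid’s answers arrive as a list of responses, one per item:

```python
# Illustrative straight-lining check for grid questions with more than about six items.

def is_straight_liner(grid_answers: list, min_items: int = 7) -> bool:
    """Flag respondents who give the identical answer to every item in a long grid."""
    if len(grid_answers) < min_items:
        return False  # shorter grids are not checked
    return len(set(grid_answers)) == 1

# Example: a 7-item agreement grid answered with all 3s gets flagged
print(is_straight_liner([3, 3, 3, 3, 3, 3, 3]))  # True
print(is_straight_liner([3, 4, 3, 2, 5, 3, 4]))  # False
```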

Inconsistent Answers

We consider adding two questions that check for inconsistent answers. First, we re-ask a demographic question from the screener near the end of the survey. We typically use “age” as this question. If the respondent doesn’t choose the same age in both questions, they are flagged.

In addition, we try to find an attitudinal question that is asked that we can re-ask in the exact opposite way. For instance, if earlier we asked “I like to go to the mall” on a 5-point agreement scale, we will also ask the opposite: “I do not like to go to the mall” on the same scale. Those that answer the same for both are flagged. We try to place these two questions a few minutes apart in the questionnaire.
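A minimal sketch of these two consistency flags, assuming the survey stores the screener age, the re-asked age, and the paired attitude ratings under the illustrative field names below:

```python
# Illustrative consistency checks: a re-asked age question and a reversed attitude item.

def inconsistency_flags(resp: dict) -> dict:
    flags = {}
    # Flag if the age given in the screener differs from the age given near the end.
    flags["inconsistent_age"] = resp["age_screener"] != resp["age_recheck"]
    # Flag if "I like to go to the mall" and "I do not like to go to the mall"
    # receive the same rating on the 5-point agreement scale.
    flags["inconsistent_attitude"] = resp["mall_like"] == resp["mall_dislike"]
    return flags

# Example: consistent age, but identical ratings on the reversed attitude pair
print(inconsistency_flags(
    {"age_screener": 34, "age_recheck": 34, "mall_like": 4, "mall_dislike": 4}
))  # {'inconsistent_age': False, 'inconsistent_attitude': True}
```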

Low Incidence items

This is a low attentiveness flag. It is meant to catch people who say they do really unlikely things and also catch people who say they don’t do likely things because they are not really paying attention to the questions we pose. We design this question specific to each survey and tend to ask what respondents have done over the past weekend. We like to have two high incidence items (such as “watched TV,” or “rode in a car”), 4 to 5 low incidence items (such as “flew in an airplane,” “read an entire book,” “played poker”) and one incredibly low incidence item (such as “visited Argentina”).  Respondents are flagged if they didn’t do at least one of our high incidence items, if they said they did more than two of our low incidence items, or if they say they did our incredibly low incidence item.
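A minimal sketch of this flag, using the example activities from the text (a real study would carry four or five low-incidence items tailored to its audience):

```python
# Illustrative low-incidence check based on claimed weekend activities.
HIGH_INCIDENCE = {"watched TV", "rode in a car"}
LOW_INCIDENCE = {"flew in an airplane", "read an entire book", "played poker"}
VERY_LOW_INCIDENCE = {"visited Argentina"}

def fails_incidence_check(activities: set) -> bool:
    """Flag respondents whose claimed weekend activities are implausible."""
    did_no_high = len(activities & HIGH_INCIDENCE) == 0
    did_many_low = len(activities & LOW_INCIDENCE) > 2
    did_very_low = len(activities & VERY_LOW_INCIDENCE) > 0
    return did_no_high or did_many_low or did_very_low

# Example: claims three rare activities, a trip to Argentina, and no common ones
print(fails_incidence_check({"flew in an airplane", "played poker",
                             "read an entire book", "visited Argentina"}))  # True
```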

Open-ended check

We try to include this one in all studies, but sometimes have to skip it if the study is fielding on a tight timeframe because it involves a manual process. Here, we are seeing if a respondent provides a meaningful response to an open-ended question. Hopefully, we can use a question that is already in the study for this, but when we cannot we tend to use one like this: “Now I’d like to hear your opinions about some other things. Tell me about a social issue or cause that you really care about.  What is this cause and why do you care about it?” We are manually looking to see if they provide an articulate answer and they are flagged if they do not.

Admission of inattentiveness

We don’t use this one as a standard check, but we are starting to experiment with it. As the last question of the survey, we ask respondents how attentive they were while taking it. This question suffers from a large social desirability bias, but we flag those who say they did not pay attention at all.

Traps and misdirects

I don’t really like the idea of “trick questions” – there is research that indicates that these types of questions tend to trap too many “good” respondents. Some researchers feel that these questions lower respondent trust and thus answer quality. That seems to be enough to recommend against this style of question. The most common types I have seen ask a respondent to select the “third choice” below no matter what, or to “pick the color from the list below,” or “select none of the above.” We counsel against using these.

Comprehension

This was recommended by a research colleague and was also mentioned by an expert in a questionnaire design seminar we attended. We don’t use it as a quality check per se, but we like to include it during a soft-launch period. The question looks like this: “Thanks again for taking this survey. Were there any questions on this survey you had difficulty with or trouble answering? If so, it will be helpful to us if you let us know what those problems were in the space below.”

Preamble

I have mixed feelings on this type of quality check, but we use it when we can phrase it positively. A typical wording is like this: “By clicking yes, you agree to continue to our survey and give your best effort to answer 10-15 minutes of questions. If you speed through the survey or otherwise don’t give a good effort, you will not receive credit for taking the survey.”

This is usually one of the first questions in the survey. The argument I see against this is it sets the respondent up to think we’ll be watching them and that could potentially affect their answers. Then again, it might affect them in a good way if it makes them attend more.

I prefer a question that takes a gentler, more positive approach – telling respondents we are conducting this for an important organization, that their opinions will really matter, promise them confidentiality, and then ask them to agree to give their best effort, as opposed to lightly threatening them as this one does.

Guarding against bad respondents has become an important part of questionnaire design, and it is unfortunate that there is no industry standard on how to go about it. We try to build in some quality checks that will at least spot the most egregious cases of poor quality. This is an evolving issue, and it is likely that what we are doing today will change over time, as the nature of market research changes.

Should all college majors pay the same tuition?

Despite all that is written about the costs of higher education and how student debt is crippling an entire generation, college remains a solid investment for most students. The Bureau of Labor Statistics indicates that people with bachelor’s degrees earn about $1,173 on average each week while those with only high school diplomas earn an average of $712 per week. That is a difference of $461 per week, about $24,000 per year, and about $958,880 over a 40-year working lifetime. On average, four-year college graduates literally are about a million dollars better off in their lifetime than those that stop their education after high school.

This calculation suffers from a selection bias, as individuals who choose to go to college likely have higher earnings potential than those who do not, independent of their education, so it is not appropriate to credit the colleges entirely for the million-dollar increase in value. But, at pretty much any tuition level, it would be hard to argue that college does not pay off for most graduates.

This helps put the student debt debate in perspective. The average student debt is about $30,000. A typical U.S. college student goes $30,000 in debt to gain a credential that will earn an average of about $1,000,000 more over his/her lifetime. College costs are far too high, have grown considerably faster than colleges’ ability to increase value, and prevent many worthy students from furthering their education. Yet, college remains a stellar asset for most.

These calculations concentrate on an “average student” and much can be lost by doing that. About 1 in 5 college graduates carries more than $50,000 in loans. About 1 in 20 has more than $100,000 in loans. Not all college graduates make a million dollars more over their lifetimes. Plenty of students slip through the cracks and many are underemployed because of a mismatch between their training and what employers demand.

Many young people are in financial trouble because college is not an investment that is paying back quickly enough for them. There are too many students who begin college, take on debt, and never graduate and gain the credential that enhances their earning power. The most hidden statistic in America may be that only about 60% of those who enroll in college end up graduating.

There is an enormous disparity in the average starting salary for college graduates depending on their major and their college. When thinking of the financial aspects of college, parents and students would be wise to look more at the debt to earnings ratio rather than concentrate solely on the costs of college. That is, what will an expected first year salary be and what will the expected college debt be?

A rule of thumb is to try to get this ratio as far under 1.0 as possible, and to not let it go over 1.0. This means that students should seek to have loans that do not total more than their expected first year salary, and hopefully loans that are just a fraction of their first-year salary.
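As a worked example of the rule of thumb, here is a small sketch using figures that appear elsewhere in this post ($30,000 average debt; $75,000 and $30,000 starting salaries); the function name is just for illustration:

```python
# Debt-to-earnings rule of thumb: total student loans divided by expected
# first-year salary. Aim well under 1.0; avoid going over 1.0.

def debt_to_earnings(total_loans: float, first_year_salary: float) -> float:
    return total_loans / first_year_salary

print(round(debt_to_earnings(30_000, 75_000), 2))  # 0.4 -- comfortably under 1.0
print(round(debt_to_earnings(30_000, 30_000), 2))  # 1.0 -- at the rule-of-thumb ceiling
```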

Data from the Department of Education’s College Scorecard shows average student debt and average first year salary by college and by major. What is striking is how much variability there is on the salary part and how little there is on the debt part. Broadly speaking, salaries vary widely by college and major, but the debt students end up with does not vary nearly as much.

Suppose you owned a business and two customers walked in your door. For customer A, you provide a service that is worth twice as much as what you provide to customer B. Would you charge both customers the same amount? Probably not. They would not expect you to, even if it cost you the same to produce both products.

However, that is what colleges do. In the College Scorecard data, the most lucrative college majors result in starting salaries that are about two and a half times greater than the college majors that result in the lowest salaries. Yet, students graduating with these degrees all end up with similar levels of debt and pay similar tuition along the way.

Why? Why would colleges charge a student who can expect to make $75,000 per year upon graduation the same as one who can expect to make $30,000? Colleges are pricing solely off the supply curve and ignoring the differences in demand among subgroups of students.

I have discussed this idea with many people including some who work in higher education. I have not found even one person that supports the idea of colleges charging different tuition rates for different majors, but I also have not heard a cogent argument against it.

This idea would provide an efficiency to the labor market. If too many students choose a particular college major, first-year salaries will decline because there will be an excess supply of job seekers in the market. This will cause fewer future students to flock to this major and cause colleges to adjust their recruiting tactics and tuition prices. The market would provide a clear financial signal to colleges that would help them adjust their program sizes appropriately. The incentives would be in place to produce the right number of graduates from each major.

Students majoring in traditionally higher paying fields, like engineering and computer science, would end up paying more. Those in traditionally lower paying fields, like arts and human services, would pay less. All would be paying a fair amount tied to their future earning potential and the value the degree provides. You could argue that in the current system students enrolled in liberal arts are subsidizing those enrolled in engineering. Currently, because pricing isn’t in equilibrium across majors, many students are unable to attend because their preferred major will not pay off for them.

A few years back there was a proposal in Florida to have differential pricing for different majors at state institutions. However, this proposal was not letting the market determine pricing. Instead, it sought to lower the cost of STEM majors in an effort to draw more students to them. This would result in a glut of STEM graduates and lower starting salaries for these students. Counter to the current political discourse, salaries in STEM fields have on average been growing more slowly than those of other college majors, which is the market saying that we have too many students pursuing STEM, not too few.

Differential pricing would likely be good for the colleges as it would maximize revenue and would help colleges get closer to the equilibrium price for each student. There is a reason why everyone on an airplane seems to pay a different fare – it maximizes revenue to the airline. Differential pricing is most often seen in businesses with high fixed and low marginal costs, which perfectly describes today’s traditional colleges. Differential pricing would also help colleges allocate costs more efficiently, as resources will flow to the demand.

This is a radical idea that I don’t think has ever been tried. The best argument I have heard against it is that it has the potential to limit students from poorer households to the pursuit of lower paying majors and to draw richer students to the higher paying majors, thus perpetuating a disparity. This could happen, but is more of a temporary cash flow issue that can be resolved with intelligent public policies.

Students need access to the capital necessary to get them through the college years and assurance that their resulting debt will be connected to their future earnings potential. That is where college financial aid offices and government support of higher education should place their focus. Students with ability and without financial means need temporary help getting them to a position where they have a job offer and a reasonable amount of college debt. We all have a stake in getting them to that point.

Let’s charge students a fair price that is determined by the value they receive from colleges and concentrate our public support on being sure they have a financial bridge from the moment they leave high school to when they graduate college. Linking their personal financial stake to their expected earnings is inherently fair, helps balance the labor market, and will cause colleges to provide training that is in demand by employers.

The two (or three) types of research projects every organization needs

Every once in a while I’ll get a call from a former client or colleague who has started a new market research job. They will be in their first role as a research director or VP with a client-side organization. As they are now in a position to set their organization’s research agenda, they ask for my thoughts on how to structure their research spending. I have received calls like this about a dozen times over the years.

I advise these researchers that two types of research stand above all others, and that their initial focus should be to get them set up correctly. The first is tracking their product volume. Most organizations know how many products they are producing and shipping, but it is surprising to see how many lose track of where their products go from there. To do a good job, marketers must know how their products move through the distribution system all the way to their end consumer. So, that becomes my first recommendation: know precisely who is buying and using your products at every step along the way, in as much detail as possible.

The second type of research I suggest is customer satisfaction research. Understanding how customers use products and measuring their satisfaction is critical. Better yet, the customer satisfaction measuring system should be prescriptive and indicate what is driving satisfaction and what is detracting from it.

Most marketing decisions can be made if these two types of research systems are well-designed. If a marketer has a handle on precisely who is using their products and what is enhancing and detracting from their satisfaction, most of them are smart enough to make solid decisions.

When pressed for what the third type of research should be, I would usually say that qualitative research is important. I’d put in place a regular program of in-person focus groups or usability projects, and compel key decision makers to attend them. I once consulted for a consumer packaged goods client and discovered that not a single person in their marketing department had spoken directly with a consumer of their products in the past year. There is sometimes too much of a gulf between the corporate office and the real world, and qualitative research can help close that gap.

Only when these three things are in place and being well-utilized would I recommend that we move forward with other types of research projects. Competitive studies, new product forecasting, advertising testing, etc. probably take up the lion’s share of most research budgets currently. They are important, but in my view should only be pursued after these first three types of research are fully implemented.

Many research departments get distracted by conducting too many projects of too many types. A focus is important. When decision makers have the basic numbers they need and are in tune with their customer base, they are in a good position to succeed, and it is market research’s role to provide this framework.

A forgotten man: rural respondents

I have attended hundreds of focus groups. These are moderated small group discussions, typically with anywhere from 4 to 12 participants. The discussions take place in a tricked-out conference room, decked with recording equipment and a one-way mirror. Researchers and clients sit behind this one-way mirror in a cushy, multi-tiered lounge. The lounge has comfortable chairs, a refrigerator with beer and wine, and an insane number of M&M’s. Experienced researchers have learned to sit as far away from the M&M’s as possible.

Focus groups are used for many purposes. Clients use them to test out new product ideas or new advertising under development. We recommend them to clients if their objectives do not seem quite ready for survey research. We also like to do focus groups after a survey research project is complete, to put some personality on our data and to have an opportunity to pursue unanswered questions.

I would estimate that at least half of all focus groups being conducted are held in just three cities: New York, Chicago, and Los Angeles. Most of the other half are held in other major cities or in travel destinations like Las Vegas or Orlando. These city choices can have little to do with the project objectives – focus groups tend to be held near the client’s offices or in cities that are easy to fly to. Clients often choose cities simply because they want to go there.

The result is that early-stage product and advertising ideas are almost always evaluated by urban participants or by suburban participants who live near a large city. Smaller city, small town, and rural consumers aren’t an afterthought in focus group research. They aren’t thought about at all.

I’ve always been conscious of this, perhaps because I grew up in a rural town and have never lived in a major metropolitan area. The people I grew up with and knew best were not being asked to provide their opinions.

This isn’t just an issue in qualitative research, it happens with surveys and polls as well. Rural and small-town America is almost always underrepresented in market research projects.

This wasn’t a large issue for quantitative market research early on, as RDD telephone samples could effectively include rural respondents. Many years ago, I started adding questions to questionnaires that would allow me to look at the differences between urban, suburban, and rural respondents. I would often find differences, but pointing them out met with little excitement from clients, who often seemed uninterested in targeting their products or marketing to a small-town audience.

Online samples do not include rural respondents as effectively as RDD telephone samples did. The rural respondents who are in online sample databases are not necessarily representative of rural people, and weighting them upward does not magically make them representative.

In 30 years, I have not had a single client ask me to correct a sample to ensure that rural respondents are properly represented. The result is that most products and services are designed for suburbia and don’t take the specific needs of small-town folks into account.

Biases only matter if they affect what we are measuring. If rural respondents and suburban respondents feel the same way about something, this issue doesn’t matter. However, it can matter. It can matter for product research, it certainly matters to the educational market research we have conducted, and it is likely a hidden cause of some of the problems that have occurred with election polling.

How to succeed in business without planning

Crux Research is now entering its 16th year. This places us in the 1% of start-ups – as it is well-documented that most new businesses fail in the first few years. Few are lucky enough to make it to 16.

One of the reasons experts tend to say new companies fail is a lack of business planning. However, new business successes are not a consequence of an ability to plan. Success is a function of a couple of basic things:  having a product people want to buy and then delivering it really well. It doesn’t get much more basic than that, but too many times we get lost in our ambitions and ideas and aren’t forthright with ourselves on what we are good at and not so good at.

Business plans are neither necessary nor sufficient to business success. Small businesses that focus on them remind me a bit of when a company I was with put the entire staff through training on personal organization and time management. The people that embraced the training were mostly hyper-organized to begin with and the people who might have most benefited from the training were resistant. Small businesses that rely heavily on business planning probably are using it more as reflection of their natural organizational tendencies than as a true guide to running a business. There are, of course, exceptions.

Entrepreneurs that use business plans seem to fall into two categories. First, they may feel constrained by them. If they have investors, they feel a pressure to stay within the confines of the plan even when the market starts telling them to pivot to a new strategy. They start to think adherence to the plan is, in itself, a definition of success. In short, they tend to be “process” people as opposed to “results” people.  Entrepreneurs that are “results” people are the successful ones. If you are a “process” person I’d suggest that you may be a better fit for a larger company.

Second, and this is more common, many entrepreneurs spend a lot of time in business planning before launching their company and then promptly file the plan away and never look at it again.

That is how it went for me in the corporate world. We would craft annual plans each spring and then never look at them again until late in the year when it was time to create the plan for the following year. It was sort of like giving blueprints to a contractor and then having them build whatever they want to anyway but then giving them a hard time for not following the plan… even if they had built a masterpiece.

We’d also create five-year plans every year. That never made a lot of sense to me, as it was an acknowledgement that we couldn’t really plan more than a year out.

I am not against business planning in concept. I agree with Winston Churchill who said “plans are of little importance, but planning is essential.” The true value of most strategic plans is found in the thought process that is gone through to establish them and not in the resulting plan.

I’d suggest that anyone who starts a company list out the top reasons why your company might fail and then think through how you are going to prevent these things from happening. If the reasons for your potential failure are mostly under your control, you’ll be okay. We did this. We identified that if we were going to fail, it was most likely going to result from an inability to generate sales, as this wasn’t where our interest or core competence lay. So, we commissioned a company to develop leads for us, had a plan for reaching out to past clients, and planned to hire a salesperson (something that never happened). The point is, while we didn’t really have a business plan, we did think through how to head off a problem that would prevent our success.

Before launching Crux Research, we did lay out some key opportunities and threats and thought hard about how to address what we saw as the barriers to our success. Late each fall we think about goals for the coming year, how we can improve what we are doing, and where we can best spend our time in marketing and sales. That sort of thinking is important, and it is good to formalize it. But we’ve never had a formal business plan. Maybe we have succeeded in spite of that, but I tend to think we have succeeded because of it.

Critics would probably say that the issue here is business plans just need to be better and more effective. I don’t think that is the case. The very concept of business planning for a small business can be misguided. It is important to be disciplined as an entrepreneur/small business owner. A focus is paramount to success. But I don’t think it is all that important to be “planned.” You miss too many opportunities that way.

Again, our company has never had a financial plan. I attribute much of our success to that. The company has gone in directions we would never have been able to predict in advance. We try to be opportunistic and open-minded and to stay within the confines of what we know how to do. We prefer having a high level of self-awareness of what we are good and not so good at, and to create and respond to opportunities keeping that in mind. As Mike Tyson once said, “Everyone has a plan until they get punched in the mouth.” I wonder how many small businesses have plans that quickly got punched.

This has led us to many crossroads where we have to choose whether to expand our company or to refuse work. We’ve almost always come down on the side of staying small and being selective in the work we take on.

In the future we are likely to become even more selective in who we take on as clients. We’ve always said that we want a few core things in a client. Projects have to be financially viable, but more importantly they have to be interesting and a situation that will benefit from our experience and insight.  We have developed a client base of really great researchers who double as really great people. It sums up to a business that is a lot of fun to be in.

I would never discourage an entrepreneur from creating a business plan, but I’d advise them to think hard about why they are doing so. You’ll have no choice if you are seeking funding, as no investor is going to give money to a business that doesn’t have a plan. If you are in a low margin business, you have to have a tight understanding of your cash flow and planning that out in advance is important. I’d suggest you keep any plans broad and not detailed, conduct an honest assessment of what you are not so good at as well as good at, and prepare to respond to opportunities you can’t possibly foresee. Don’t judge your success by your adherence to the plan and if your funders appear too insistent on keeping to the plan, find new funders.

Wow! Market research presentations have changed.

I recently led an end-of-project presentation over Zoom. During it, I couldn’t help but think how market research presentations have changed over the years. There was no single event or time period that changed the nature of research presentations, but if you teleported a researcher from the 1990’s to a modern presentation they would feel a bit uncomfortable.

I have been in hundreds of market research presentations — some led by me, some led by others, and I’ve racked up quite a few air miles getting to them. In many ways, today’s presentations are more effective than those in the past. In some other ways, quality has been lost. Below is a summary of some key differences.

Today’s presentations are:

  • Far more likely to be conducted remotely over video or audio. COVID-19 disruptions acted as an accelerant to this trend, which was happening well before 2020. This has made presentations easier to schedule because not everyone has to be available in the office. It allows clients and suppliers to take part from their homes, hotels, and even their vehicles. It seems clear that a lasting effect of the pandemic will be that research presentations will be conducted via Zoom by default. There are plusses and minuses to this. For the first time in 30 years, I find myself working with clients whom I have never met in person.
  • Much more likely to be bringing in data and perspectives from outside the immediate project. Research projects and presentations tended to be standalone events in the past, concentrating solely on the area of inquiry the study addressed. Today’s presentations are often integrated into a wider reaching strategic discussion that goes beyond the questions the research addresses.
  • More interactive. In yesteryear, the presentation typically consisted of the supplier running through the project results and implications for 45 minutes, followed by a period of Q&A. It was rare to be interrupted before the Q&A portion of the meeting. Today’s presentations are often not presentations at all. As a supplier, we feel more like emcees leading a discussion than experts presenting findings.
  • More inclusive of upper management. We used to present almost exclusively to researchers and mid-level marketers. Now, we tend to see a lot more marketing VPs and CMOs, strategy officers, and even the CEO on occasion. It used to be rare that our reports would make it to the CEO’s desk. Now, I’d say most of the time they do. This is indicative of the increasing role data and research play in business today.
  • Far more likely to integrate the client’s perspective. In the past, internal research staff rarely tried to change or influence our reports and presentations, preferring to keep some distance and then separately add their perspective. Clients have become much more active in reviewing and revising supplier reports and presentations.

Presentations from the 1990’s were:

  • A more thorough presentation of the findings of the study. They told a richer, more nuanced story. They focused a lot more on storytelling and building a case for the recommendations. Today’s presentations often feel like a race to get to the conclusions before you get interrupted.
  • More confrontational. Being challenged on the study method, data quality, and interpretations was more commonplace a few decades ago. I felt a much greater need to prepare and rehearse than I do today because I am not as in control of the flow of the meetings as I was previously. In the past I felt like I had to know the data in great detail, and it was difficult for me to present a project if I wasn’t the lead analyst on it. Today, that is much less of a concern.
  • More strategic. This refers more to the content of the studies than the presentation itself. Since far fewer studies were being done, the ones that were tended to be informing high consequence decisions. While plenty of strategic studies are still conducted, there are so many studies being done today that many of them are informing smaller, low-consequence, tactical decisions.
  • More relaxed. Timelines were more relaxed and as a result research projects were planned well in advance and the projects fed into a wider strategic process. That still happens, but a lot of today’s projects are completed quickly (often too quickly) because information is needed to make a decision that wasn’t even on the radar a few weeks prior.
  • More of a “show.” In the past we rehearsed more, were concerned about the graphical design of the slides, and worried about the layout of the room. Today, there is rarely time for that.
  • More social. Traveling in for a presentation meant spending time beforehand with clients, touring offices, and almost always going to lunch or dinner afterward. Even before the COVID/Zoom era, more recent presentations tended to be “in and out” affairs – where suppliers greet the clients, give a presentation, and leave. While there are many plusses to this, some (I’d actually say most) of the best researchers I know are introverts who were never comfortable with this forced socialization. Those types of people are going to thrive in the new presentation environment.

Client-side researchers were much more planned out in the past. Annually, they would go through a planning phase where all the projects for the year would be budgeted and placed in a timeline. The research department would then execute against that plan. More recently, our clients seem like they don’t really know what projects they will be working on in a few weeks’ time – because many of today’s projects take just days from conception to execution.

I have also noticed that while clients are commissioning more projects they seem to be using fewer suppliers than in the past. I think this is because studies are being done so quickly they don’t have time to manage more than a few supplier relationships. Bids aren’t as competitive and are more likely to be sole-sourced.

Clients are thus developing closer professional relationships with their suppliers. Suppliers are closer partners with clients than ever before, but with this comes a caution. It becomes easy to lose third-party objectivity when we get too close to the people and issues at hand and when clients have too heavy a hand in the report process. In this sense, I prefer the old days, when we provided a perspective and our clients would then add a POV. Now, we often meld the two into one presentation, and at times we lose the value that comes from a back-and-forth disagreement over what the findings mean to a business.

If I teleported my 1990’s self to today, I would be amazed at how quickly projects go from conception to final presentation. Literally, this happens in about one-third the time it used to. There are many downsides to going too fast, and clients rarely focus on or care about them; they seem to prefer getting something 90% right and done tomorrow to waiting for a perfect project.

There is even a new category of market research called “agile research” that seeks to provide real-time data. I am sure it is a category that will grow, but those employing it need to keep in mind that providing data faster than managers can act on it can actually be a disservice to the client. It is an irony of our field that more data and continuous data can actually slow down decision making.  

Today’s presentations are less stressful, more inclusive, and more strategic. The downside is there are probably too many of them – clients are conducting too many projects on minor issues, they don’t always learn thoroughly from one study before moving onto the next, and researchers are sometimes being rewarded more for getting things done than for providing insight into the business.

Shift the Infield, Go for Two, and Pull the Goalie Sooner!

Moneyball is one of my favorite books. It combines many interests of mine – statistics, baseball, and management. I once used it to inspire a client to think about their business differently. This client was a newly-named President of a firm and had brought us in to conduct some consumer market research. New management teams often like to bring in new research suppliers and shed their old ones, and in this case we were the beneficiaries.

In our initial meeting, I asked some basic marketing questions about how they decide to price their products or how much to spend on advertising. Each time his response was “this is how we have always done it” rather than a well-thought-out rationale supporting his decision. For instance, most of his products were priced to retailers at 50% of the price to consumers because that is how it had been for decades. I asked him, “What are the odds that your optimal pricing is 50% rather than something higher or lower?” What are the chances that a round number like 50% could be optimal for all products in all cases when he literally had thousands of products?

I sent him a copy of Moneyball when I returned from the trip because I knew he was a sports fan. He read it immediately. It sparked him to commission a consulting firm to delve deeply into pricing models and ultimately led to a significant change in their pricing policies. They no longer used 50% as a target; instead, they established different wholesale prices for each of their SKUs based on demand and updated these prices regularly. A few years later, he told me that decision literally saved his firm millions of dollars and that the pricing efficiency helped distribute his products more effectively. He said this was probably the project he had led with the biggest impact on his business since he had been there.

Businesses can lean on sports analogies too readily, but in this case it really worked. The rise of statisticians in sports has paid off, and there are lessons that businesses can learn from it.

I find it fascinating when old-timers and sports talk radio hosts lament the rise of “analytics” in sports. You can see the impact of statisticians every time you see a baseball team set up in a defensive shift, a football team go for it on fourth down, or a hockey team pull its goalie earlier than usual. These decisions are being made more frequently, and in situations where the prior norms of the game would have prevented them. It is all because data jockeys have been given a seat at the sports management table, to the chagrin of the purists.

But data geeks haven’t totally taken over sports, and longstanding traditions continue to hold sway. For instance, in baseball it can be shown that more runs are scored on average in the first inning than in any other inning. This makes sense, as the first inning is the only time in the game when you can be sure your best hitters will be at the top of the batting order. So, why don’t major league teams start their closer and have him pitch the first inning? Instead, they reserve their most powerful pitcher for the 9th inning, when, more often than not, the game is already decided. I’ve been predicting for about 20 years that teams will figure this out and start their closer, and they haven’t done it yet. (The Tampa Bay Rays did something close to this with an “opener” in their rotation, but it didn’t work well because this pitcher wasn’t their most powerful arm.)

Similarly, hockey teams continue to be slow to pull their goalie when behind late in the game. Hockey coaches also continue to make a decision that baffles me every time. They are down by one goal late in the game, so they pull their goalie and promptly surrender a goal. The first thing they do is put their goalie back in, which makes no rational sense at all. If you are willing to take the risk of being scored upon when losing by one goal, you should be even more willing to do so when losing by two goals. There is an excellent paper on pulling the goalie (“Pulling the Goalie: Hockey and Investment Implications”) which shows that coaches aren’t pulling their goalie nearly soon enough.

These sports cases are interesting because it is the fans that always seem to notice the coaching strategy errors before the coaches and general managers. This illustrates the value of an outside perspective in organizations that have longstanding policies and traditions. I don’t think my client could have accomplished his pricing changes if he wasn’t brand new to the organization or if he didn’t hire a consulting firm to work out the optimal strategy. This change was not going to come from within his organization.

Businesses have been slow to adapt their thinking despite the vast amount of data at their disposal. Decisions are made all the time without consulting what the data are indicating. More relevant to our industry, in most organizations market research is still seen as a support function to marketing, as opposed to its equal. I don’t think I have ever heard of an organization where market research reports directly to senior management or where marketing reports into research, yet we often hear senior managers say that connecting to customers is the most critical part of their organization’s success.

Many saw Moneyball as a book about sports or a great movie. I saw it as one of the most important business books ever written. Its key message is to use data to break out of existing decision patterns, often to great success.

How COVID-19 may change Market Research

Business life is changing as COVID-19 spreads in the US and the world. In the market research and insights field there will be both short-term and long-term effects. It is important that clients and suppliers begin preparing for them.

This has been a challenging post to write. First, in the context of what many people are going through in their personal and business lives as a result of this disruption, writing about what might happen to one small sector of the business world can come across as uncaring and tone-deaf, which is not the intention. Second, this is a quickly changing situation, and this post has been rewritten a number of times in the past week. I have a feeling it may not age well.

Nonetheless, market research will be highly impacted by this situation. Below are some things we think will likely happen to the market research industry.

  • An upcoming recession will hit the MR industry hard. Market research is not an investment that typically pays off quickly. Companies that are forced to pare back will cut their research spending and likely their staffs.
  • Cuts will affect clients more than suppliers. In previous recessions, clients have cut MR staff and outsourced work to suppliers. This is an opportunity for suppliers that know their clients’ businesses well and can step up to help.
  • Unlike in many other industries, it is the large suppliers that are most at risk of losing work. Publicly-held research suppliers will be under even more intense pressure from their investors than usual. There will most certainly be cost cutting at these firms, and if the concerns over the virus persist, it will lead to layoffs.
  • The smallest suppliers could face an existential risk. Many independent contractors and small firms are dependent on one or two clients for the bulk of their revenue. If those clients are in highly affected sectors, these small suppliers will be at risk of going out of business.
  • Smallish to mid-sized suppliers may emerge stronger. Clients are going to be under cost pressures due to a receding economy, and smaller research suppliers tend to be less expensive. Smaller research firms did well post 9/11 and during the recession of 2008-09 because clients moved work to them from higher-priced larger firms. Smaller research firms would be wise to build tight relationships now so that when the storm over the virus abates, they will have won their clients’ trust for future projects.
  • New small firms will emerge as larger firms cut staff and create refugees who will launch new companies.

Those are all items that might pertain to any sort of sudden business downturn. There are also some things that we think will happen that are specific to the COVID-19 situation:

  • Market research conferences will never be the same. Conferences are going to have difficulty drawing speakers and attendees. Down the line, conferences will be smaller and more targeted and there will be more virtual conferences and training sessions scheduled. At a minimum, companies will send fewer people to research conferences.
  • This will greatly affect MR trade associations as these conferences are important revenue sources for them. They will rethink their missions and revenue models, and will become less dependent on their signature events. The associations will have more frequent, smaller, more targeted online events. The days of the large, comprehensive research conference may be over.
  • Business travel will not return to its previous level. There will be fewer in-person meetings between clients and suppliers and those that are held will have fewer participants. Video conferencing will become an even more important way to reach clients.
  • Clients and suppliers will allow much more “work from home.” It may become the norm that employees are only expected to be in the office for key meetings. The situation with COVID-19 will give companies who don’t have a lot of experience allowing employees to work from home the opportunity to see the value in it. When the virus is under control, they will embrace telecommuting. We will see this crisis kick-start an already existing movement towards allowing more employees to work from home. The amount of office space needed will shrink.
  • Research companies will review and revise their sick-leave policies and there will be pressure on them to make them more generous.
  • Companies that did the right thing during the crisis will be rewarded with employee loyalty. Employees will become more attached and appreciative of suppliers that showed flexibility, did what they could to maintain payroll, and expressed genuine concerns for their employees.

Probably the biggest change we will see in market research projects is to qualitative research.

  • While there will always be great value in traditional, in-person focus groups, the situation around COVID-19 is going to cause online qualitative to become the standard approach. We are at a time when the technologies available for online qualitative are well-developed, yet clients and suppliers have clung to traditional methods. To date, the technology has been ahead of the demand. Companies will be forced by travel restrictions to embrace online methods, and this will come at the expense of traditional groups. This is an excellent time to be in the online qualitative technology business. It is not such a great time to be in the focus group facility management business.
  • Independent moderators who work exclusively with traditional groups are going to be in trouble, and not just in the short term. Many of these individuals will retire, look for work elsewhere, or leave research altogether. Others will necessarily adapt to online methods. Of course, there will continue to be independent moderators, but we are predicting the demand for in-person groups will be permanently affected, and this portion of the industry will significantly shrink.
  • There is a risk that by not commissioning as much in-person qualitative, marketers may become further removed from direct human interaction with their customer base. This is a very real concern. We wouldn’t be in market research if we didn’t have an affinity for data and algorithms, but qualitative research is what keeps all of our efforts grounded. I’d caution clients to think carefully before removing all in-person interaction from your research plans.

What will happen to quantitative research? In the short-run, most studies will continue. Respondents are home, have free time, and thus far have shown they are willing to take part in studies. Some projects, typically in highly affected industries like travel and entertainment, are being postponed or canceled. All current data sets need to be viewed with a careful eye as the tumult around the virus can affect results. For instance, we conduct a lot of research with young respondents, and we now know for sure that their parents are likely nearby when they are taking our surveys, and that can influence our findings for some subjects.

Particular care needs to be taken in ongoing tracking studies. It makes sense for many trackers to add questions in to see how the situation has affected the brand in question.

But, in the longer term, we do not expect much change in quantitative research methods to result directly from this situation. If anything, there will be a greater need to understand consumers.

Tough times for sure. It has been heartening to see how our industry has reacted. Research panel and technology providers have reached out to help keep projects afloat. We’ve had subcontractors tell us we can delay payments if we need to. Calls with clients have become more “human” as we hear their kids and pets in the background and see the stresses they are facing. Respondents have continued to fill out our surveys.

There is a lot of uncertainty right now. At its core, market research is a way to reduce uncertainty for decision makers by making the future more predictable, so we are needed now more than ever. Research will adapt as it always does, and I believe in the long-run it may become even more valued as a result of this crisis.

