Things That Surprised Me When Writing a Book

I recently published a book outlining the challenges election pollsters face and what those challenges imply for survey researchers.

This book was improbable. I am neither an author nor a pollster, yet I wrote a book on polling. It is the result of a curiosity that got away from me.

Because I am a new author, I thought it might be interesting to list the unexpected things that happened along the way. There were plenty of surprises:

  • How quickly I wrote the first draft. Many authors toil for years on a manuscript. The bulk of POLL-ARIZED was composed in about three weeks, working a couple of hours daily. The book covers topics central to my career, and it was a matter of getting my thoughts typed and organized. I completed the entire first draft before telling my wife I had started it.
  • How long it took to turn that first draft into a final draft. After I had all my thoughts organized, I felt a need to review everything I could find on the topic. I read about 20 books on polling and dozens of academic papers, listened to many hours of podcasts, interviewed polling experts, and spent weeks researching online. I convinced a few fellow researchers to read the draft and incorporated their feedback. The result was a refinement of my initial draft and arguments and the inclusion of other material. This took almost a year!
  • How long it took to get the book from final draft to publication. I thought I was done at that point. Instead, it took another five months to get it in shape to publish – selecting a title, getting it edited, commissioning cover art, setting it up on Amazon and other outlets, etc. I used Scribe Media, which was expensive, but this process would have taken me a year or more if I had done it without them.
  • That going for a long walk is the most productive writing tactic ever. Every good idea in the book came to me when I trekked in nature. Little of value came to me when sitting in front of a computer. I would go for long hikes, work out arguments in my head, and brew a strong cup of coffee. For some reason, ideas flowed from my caffeinated state of mind.
  • That writing a book is not a way to make money. I suspected this going in, but it became clear early on that this would be a money-losing project. POLL-ARIZED has exceeded my sales expectations, but it cost more to publish than it will ever make back in royalties. I suspect publishing this book will pay back in our research work, as it establishes credibility for us and may lead to some projects.
  • Marketing a book is as challenging as writing one. I guide large organizations on their marketing strategy, yet I found I didn’t have the first clue about how to promote this book. I would estimate that the top 10% of non-fiction books make up 90% of the sales, and the other 90% of books are fighting for the remaining 10%.
  • Because the commission on a book is a few dollars per copy, it proved challenging to find marketing tactics that pay back. For instance, I thought about doing sponsored ads on LinkedIn. It turns out that the per-click charge for those ads was more than the book’s list price. The best money I spent to promote the book was sponsored Amazon searches. But even those failed to break even.
  • Deciding to keep the book at a low price proved wise. So many people told me I was nuts to hold the eBook at 99 cents for so long or keep the paperback affordable. I did this because it was more important to me to get as many people to read it as possible than to generate revenue. Plus, a few college professors have been interested in adopting the book for their survey research courses. I have been studying the impact of book prices on college students for about 20 years, and I thought it was right not to contribute to the problem.
  • BookBub is incredible if you are lucky enough to be selected. BookBub is a community of voracious readers. I highly recommend joining if you read a lot. Once a week, they email their community about new releases they have vetted and like. They curate a handful of titles out of thousands of submissions. I was fortunate that my book got selected. Some authors angle for a BookBub deal for years and never get chosen. The sales volume for POLL-ARIZED went up by a factor of 10 in one day after the promotion ran.
  • Most conferences and some podcasts are “pay to play.” Not all of them, but many conferences and podcasts will not support you unless you agree to a sponsorship deal. When you see a research supplier speaking at an event or hear them on a podcast, they may have paid the hosts something for the privilege. This bothers me. I understand why they do this, as they need financial support. Yet, I find it disingenuous that they do not disclose this – it is on the edge of being unethical. It harms their product. If a guest has to pay to give a conference presentation or talk on a podcast, it pressures them to promote their business rather than have an honest discussion of the issues. I will never view these events or podcasts the same. (If you see me at an event or hear me on a podcast, be assured that I did not pay anything to do so.)
  • That the industry associations didn’t want to give the book attention. If you have read POLL-ARIZED, you will know that it is critical (I believe appropriately and constructively) of the polling and survey research fields. The three most important associations rejected my proposals to present and discuss the book at their events. This floored me, as I cannot think of any topics more essential to this industry’s future than those I raise in the book. Even insights professionals who have read the book and disagree with my arguments have told me that I am bringing up points that merit discussion. This cold shoulder from the associations made me feel better about writing that “this is an industry that doesn’t seem poised to fix itself.”
  • That clients have loved the book. The most heartwarming part of the process is that it has reconnected me with former colleagues and clients from a long research career. Everyone I have spoken to who is on the client-side of the survey research field has appreciated the book. Many clients have bought it for their entire staff. I have had client-side research directors I have never worked with tell me they loved the book.
  • That some of my fellow suppliers want to kill me. The book lays our industry bare, and not everyone is happy about that. I had a competitor ask me, “Why are you telling clients to ask us what our response rates are?” I stand behind that!
  • How much I learned along the way. There is something about getting your thoughts on paper that creates a lot of learning. There is a saying that the best way to learn a subject is to teach it. I would add that trying to write a book about something can teach you what you don’t know. That was a thrill for me. But then again, I was the type of person who would attend lectures for classes I wasn’t even taking while in college. I started writing this book to educate myself, and it has been a great success in that sense.
  • How tough it was for me to decide to publish it. At every point in the process, I considered not publishing this book. I found I wanted to write it far more than I wanted to publish it. I suffered from typical author fears: that it wouldn’t be good enough, that my peers would find my arguments weak, or that it would bring unwanted attention to me rather than to the issues the book presents. I don’t regret publishing it, but it would never have happened without encouragement from the few people who read it in advance.
  • The respect I gained for non-fiction authors. I have always been a big reader. I now realize how much work goes into this process, with no guarantee of success. I have always told people that long-form journalism is the profession I respect the most. Add “non-fiction” writers to that now!

Almost everyone who has contacted me about the book has asked whether I will write another one. If I do, it will likely be on a different topic. If I learned anything, it is that this process requires selecting an issue you care about passionately. Journalists can write good books about almost anything. The rest of us mortals must choose a topic we are super interested in, or our books will be awful.

I’ve got a few ideas dancing around in my head, so who knows, maybe you’ll see another book in the future.

For now, it is time to get back to concentrating on our research business!

The Insight that Insights Technology is Missing

The market research insights industry has long been characterized by a resistance to change. This likely results from the academic nature of what we do. We don’t like to adopt new ways of doing things until they have been proven and studied.

I would posit that the insights industry has not seen much change since the transition from telephone to online research occurred in the early 2000s. And even that transition created discord within the industry, with many traditional firms resistant to moving on from telephone studies because online data collection had not been thoroughly studied and vetted.

In the past few years, the insights industry has seen an influx of capital, mostly from private equity and venture capital firms. The conditions for this cash infusion have been ripe: strong and growing demand for insights, a conservative industry that is slow to adapt, and new technologies that automate many parts of a research project have all come together at once.

Investing organizations see this enormous business opportunity. Research revenues are growing, and new technologies are lowering costs and shortening project timeframes. It is a combustible business situation that needs a capital accelerant.

Old-school researchers, such as myself, are becoming nervous. We worry that automation will harm our businesses and that the trend toward DIY projects will result in poor-quality studies. Technology is threatening the business models under which we operate.

The trend toward investment in automation in the insights industry is clear. Insights professionals need to embrace it, not fight it.

However, although the movement toward automation will result in faster and cheaper studies, this investment ignores the threats that declining data quality creates. In the long run, this automation will accelerate the decline in data quality rather than improve it.

It is great that we are finding ways to automate time-consuming research tasks, such as questionnaire authoring, sampling, weighting, and reporting. This frees up researchers to concentrate on drawing insights out of the data. But we can apply all the automation in the world to the process, and if we do not do something about data quality, it will not increase the value clients receive.

I argue in POLL-ARIZED that the elephant in the research room is the fact that very few people want to take our surveys anymore. When I began in this industry, I routinely fielded telephone projects with 70-80% response rates. Currently, telephone and online response rates are 3-4% for most projects.

Response rates are not everything. You can make a compelling argument that they do not matter at all. There is no problem as long as the 3-4% response we get is representative. I would rather have a representative 3% answer a study than a biased 50%.
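
To make that claim concrete, below is a minimal simulation sketch (in Python, with invented population values rather than data from any real study). A large sample whose respondents differ systematically from the population stays biased no matter how big it gets, while a small random sample lands near the truth:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 1,000,000 people; 40% hold some opinion.
population = rng.random(1_000_000) < 0.40

# Representative 3% sample: everyone is equally likely to respond.
representative = rng.choice(population, size=30_000, replace=False)

# Biased 50% sample: opinion-holders respond at 65%, others at 40%,
# for an overall response rate of about 50%.
responds = rng.random(population.size) < np.where(population, 0.65, 0.40)
biased = population[responds]

print(f"True incidence:           {population.mean():.3f}")      # ~0.400
print(f"Representative 3% sample: {representative.mean():.3f}")  # ~0.400
print(f"Biased 50% sample:        {biased.mean():.3f}")          # ~0.520
```

The half-million biased respondents miss the true figure by about twelve points; the thirty thousand random ones miss it by a rounding error. No amount of volume fixes a skewed response.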

But, the fundamental problem is that this 3-4% is not representative. Only about 10% of the US population is currently willing to take surveys. What is happening is that this same 10% is being surveyed repeatedly. In the most recent project Crux fielded, respondents had taken an average of 8 surveys in the past two weeks. So, we have about 10% of the population taking surveys every other day, and our challenge is to make them represent the rest of the population.

Automate all you want, but the data that are the backbone of the insights we produce quickly and cheaply are of historically low quality.

The new investment flooding into research technology will contribute to this problem. More studies will be done that are poorly designed, with long, tortuous questionnaires. Many more surveys will be conducted, fewer people will be willing to take them, and response rates will continue to fall.

There are plenty of methodologists working on these problems. But, for the most part, they are working on new ways to weight the data we can obtain rather than on ways to compel more response. They are improving data quality, but only slightly, and the insights field continues to ignore the most fundamental problem we have: people do not want to take our surveys.
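
For readers unfamiliar with what that weighting work looks like, here is a minimal sketch of simple cell weighting (post-stratification) in Python. The age groups, sample counts, and population shares are invented for illustration; real projects typically rake across several variables at once:

```python
import pandas as pd

# Hypothetical respondent file: young people scarce, older people plentiful.
sample = pd.DataFrame({
    "age_group": ["18-34"] * 10 + ["35-54"] * 30 + ["55+"] * 60
})

# Known population shares (e.g., from census figures) – invented here.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Cell weight = population share / sample share for each group.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

# 18-34 respondents get a weight of 3.0; 55+ respondents about 0.58.
print(sample.groupby("age_group")["weight"].first())
```

The catch is that a weight of 3 simply counts one cooperative young respondent three times. Weighting can correct for who responded; it cannot recover the views of the people who never respond at all.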

For the long-term health of our field, that is where the investment should go.

In POLL-ARIZED, I list ten potential solutions to this problem. I am not optimistic that any of them will be able to stem the trend toward poor data quality. But, I am continually frustrated that our industry has not come together to work towards expanding respondent trust and the base of people willing to take part in our projects.

The trend towards research technology and automation is inevitable. It will be profitable. But, unless we address data quality issues, it will ultimately hasten the decline of this field.

POLL-ARIZED available on May 10

I’m excited to announce that my book, POLL-ARIZED, will be available on May 10.
 
After the last two presidential elections, I was fearful my clients would ask a question I didn’t know how to answer: “If pollsters can’t predict something as simple as an election, why should I believe my market research surveys are accurate?”
 
POLL-ARIZED results from a year-long rabbit hole that question led me down! In the process, I learned a lot about why polls matter, how today’s pollsters are struggling, and what the insights industry should do to improve data quality.
 
I am looking for a few more people to read an advance copy of the book and write an Amazon review on May 10. If you are interested, please send me a message at poll-arized@cruxresearch.com.

Associations and Trade Groups for Market Researchers and Pollsters

The market research and polling fields have some excellent trade associations. These organizations help lobby for the industry, conduct studies on issues relating to research, host in-person events and networking opportunities, and post jobs in the market research field. They also host many excellent online seminars. These organizations establish standards for research projects and codes of conduct for their memberships.

Below is a listing of some of the most influential trade groups for market researchers and pollsters. At a minimum, I recommend that all researchers get on these organizations’ email lists, as that allows you to see what events and seminars they have coming up. Many of their online seminars are free.

  • ESOMAR. ESOMAR is perhaps the most “worldwide” of all the research trade associations and probably the biggest. ESOMAR was established in 1948 and is headquartered in Europe (Amsterdam). With 40,000 members across 130 countries, it is an influential organization.
  • Insights Association. The Insights Association is U.S.-based. It was created in a merger of two longstanding associations: CASRO and MRA. This organization runs many events and has a certification program for market researchers.
  • Advertising Research Foundation (ARF). ARF concentrates on advertising and media research. ARF puts on a well-known trade show/conference each year and runs an important awards program for advertising research, known as the Ogilvys. The ARF is likely the most essential trade organization to be a part of if you work in an ad agency or the media, or if you focus on advertising research.
  • Market Research Society. MRS is the U.K. analog to the Insights Association. This organization reaches beyond the U.K. and has some great online courses.
  • The American Association for Public Opinion Research (AAPOR). AAPOR is an influential trade group for public opinion polling and pre-election polling. It wins the award for longevity, having been around since 1947. I consider AAPOR the most “academic” of the trade groups: in addition to researchers and clients, it counts quite a few college professors as members. It publishes Public Opinion Quarterly, a key academic journal for polling and survey research. AAPOR is a small organization with a large impact.
  • The Research Society. The Research Society is Australia’s key trade association for market researchers.

Many countries have their own trade associations, and there are some associations specific to particular industries, such as pharmaceuticals and health care.

Below are other types of organizations that are not trade associations but are of interest to survey researchers.

  • The Roper Center for Public Opinion Research. The Roper Center is an archive of past polling data, mainly from the U.S. It is currently housed at Cornell University. It can be fascinating to use it to see what American opinion looked like decades ago.
  • The Archive of Market and Social Research (AMSR). AMSR is likely of most interest to U.K. researchers. It is an archive of U.K. history as seen through the lens of the polls and market research studies it has collected.
  • The University of Georgia. The University of Georgia has a leading academic program that trains future market researchers. The university is quite involved in the market research industry and sponsors many exciting seminars. Other universities have market research programs, but the University of Georgia is by far the most tightly connected to the industry.
  • The Burke Institute. The Burke Institute offers many seminars and courses of interest to market researchers. Many organizations encourage their staff members to take Burke Institute courses.
  • Women in Research (WiRe). WiRe is a group that advances the voice of women in market research. This organization has gained significantly in prominence over the past few years and is doing great work.
  • Green Book. Green Book is a directory of market research firms. Back “in the day,” the Green Book was the printed green directory most researchers used to find focus group facilities. The organization hosts message boards and conducts industry studies and seminars.
  • Quirk’s. Quirk’s contains interesting articles and runs webinars and conferences.

Less useful research questions

Questionnaire “real estate” is limited and valuable. Most surveys fielded today are too long, which causes problems with respondent fatigue and trust. Researchers tend to start the questionnaire design process with good intent, aiming to keep the survey experience short and compelling for respondents. However, it is rare to see a questionnaire get shorter as it undergoes revision and review, and the result is often an impossibly long survey.

One way to guard against this is to be mindful: every question included should have a clear purpose and tie back to the study objectives. Many times, researchers include questions and options simply out of habit, not because they will add value to the project.

Below are examples of question types that, more often than not, add little to most questionnaires. These questions are common and used out of habit. There are certainly exceptions when it makes sense to include them, but for the most part we advise against using them unless there is a specific reason to do so.

Marital status

Somewhere along the way, asking a respondent’s marital status became standard on most consumer questionnaires. Across thousands of studies, I can only recall a few times when I have actually used it for anything. It is appropriate to ask if it is relevant. Perhaps your client is a jewelry company or in the bridal industry. Or, maybe you are studying relationships. However, I would nominate marital status as being the least used question in survey research history.

Other (specify)

Many multiple-response questions ask a respondent to select all that apply from a list, with “other” as a final option. Clients constantly pressure researchers to leave a space for respondents to type out what this “other” option is. We rarely look at what they type in. I tell clients that if we expect a lot of respondents to select the other option, it probably means we have not done a good job of developing the list. It may also mean we should be asking the question in an open-ended fashion. Even when the space is included, most respondents who select other will not type anything into the little box anyway.

Don’t Know Options

We recently composed an entire post about when to include a Don’t Know option on a question. To sum it up, the default assumption should be that you will not use a Don’t Know option unless you have an explicit reason to do so. Including Don’t Know as an option can make a data set hard to analyze. There are exceptions to this rule, as Don’t Know can be an appropriate choice, but it is currently overused on surveys.

Open-Ends

The transition from telephone to online research has completely changed how researchers can ask open-ended questions. In the telephone days, we could pose questions that were very open-ended because we had trained interviewers who could probe for meaningful answers. With online surveys, open-ended questions that are too loose rarely produce useful information. Open-ends need to be specific and targeted. We favor including just a handful of open-ends in each survey and making them a bit less “open-ended” than what has traditionally been asked.

Grid questions with long lists

We have all seen these: long lists of items that require a scaled response, perhaps on a 5-point agree/disagree scale. The most common abandon point on a survey is the first time a respondent encounters a grid question with a long list. Ideally, these lists run about 4 to 6 items, and there are no more than two or three such questions on a questionnaire.

We are currently fielding a study that has a list like this with 28 items. There is no way we are getting good information from this question, and we are fatiguing the respondent for the remainder of the survey.

Specifying time frames

Survey research often seeks to learn about a behavior across a specified time frame. For instance, we might want to know if a consumer has used a product in the past day, past week, past month, etc. The issue is not so much the time frame itself as our tendency to treat the responses literally. I have seen clients take past-day usage, multiply it by 365, and assume that equates to past-year usage. Technically and mathematically, that might be true, but it isn’t how respondents react to questions.

In reality, responses are likely accurate when we ask if a respondent has done something in the past day. But once the time frames get longer, we are really asking about “ever” usage. It depends a bit on the purchase cycle of the product and its cost, but for most products, asking whether they have used it in the past month, 6 months, year, etc. will yield similar responses.

Some researchers work around this by just asking “ever used” and “recently used.” There are times when that works, but we tend to set a reasonable time frame for recent use and go with that, typically within the past week.

Household income

Researchers have asked about household income for as long as the survey research field has been around. There are at least three serious problems with it. First, many respondents do not know what their household income is. Most households have a “family CFO” who takes the lead on financial issues, and even this person often will not know the family income.

Second, the categories chosen affect the response to the income question, indicating just how unstable it is. Asking household income in, say, ten categories versus five will not result in comparable data. Respondents tend to assume the middle of the range given is normal and respond using that as a reference point.

Third, and most importantly, household income is a lousy measure of socio-economic status (SES). Many young people have low annual incomes but a wealthy lifestyle, as they are still being supported by their parents. Many older people are retired and may have almost non-existent incomes, yet live a wealthy lifestyle off their savings. Household income tends to be a reasonable measure of SES only for respondents aged about 30 to 60.

There are better measures of SES. Education level can work, and a particularly good question is to ask the respondent about their mother’s level of education, which has been shown to correlate strongly with SES. We also ask about their attitudes towards their income – whether they have all the money they need, just enough, or if they struggle to meet basic expenses.

Attention spans are getting shorter, and as more and more surveys are completed on mobile devices, respondents face plenty of distractions as they answer questionnaires. Engage them, get their attention, and keep the questionnaire short. There may be no such thing as a dumb question, but there are certainly questions that, when asked on a survey, do not yield useful information.

Should you include a “Don’t Know” option on your survey question?

Questionnaire writers construct a bridge between client objectives and a line of questioning that a respondent can understand. This is an underappreciated skill.

The best questionnaire writers empathize with respondents and think deeply about tasks respondents are asked to perform. We want to strike a balance between the level of cognitive effort required and a need to efficiently gather large amounts of data. If the cognitive effort required is too low, the data captured is not of high quality. If it is too high, respondents get fatigued and stop attending to our questions.

One of the most common decisions researchers have to make is whether or not to allow for a Don’t Know (DK) option on a question. This is often a difficult choice, and the correct answer on whether to include a DK option might be the worst possible answer: “It depends.”

Researchers have genuine disagreements about the value of a DK option. I lean strongly towards not using DKs unless there is a clear and considered reason for doing so.

Clients pay us to get answers from respondents and to find out what they know, not what they don’t know. Pragmatically, whenever you are considering adding a DK option, your first inclination should be that you perhaps have not designed the question well. If a large proportion of your respondent base will potentially choose “don’t know,” odds are high that you are not asking a good question to begin with. But there are exceptions.

If you are not sure whether to include a DK option, the right thing to do is to think broadly and reconsider your goal: why are you asking the question in the first place? Here is an example that shows how the DK decision can be more complicated than it first appears.

We recently had a client that wanted us to ask a question similar to this: “Think about the last soft drink you consumed. Did this soft drink have any artificial ingredients?”

Our quandary was whether we should just ask this as a Yes/No question or to also give the respondent a DK option. There was some discussion back and forth, as we initially favored not including DK, but our client wanted it.

Then it dawned on us that whether to include DK depended on what the client wanted to get out of the question. On one hand, the client might truly want to understand whether the last soft drink consumed had any artificial ingredients in it, which is ostensibly what the question asks. If that were the goal, we felt it was necessary to better educate the respondent on what an “artificial ingredient” is, so they could provide an informed answer and all respondents would be working from a common definition. Alternatively, we could ask for the exact brand and type of soft drink consumed and then code on the back end which ones have artificial ingredients and which do not, and thus get a good estimate for the client.

The other option was to realize that respondents might have their own definitions of “artificial ingredients” that may or may not match our client’s definition. Or, they may have no clue what is artificial and what is not.

In the end, we decided to use the DK option in this case because understanding how many people are ignorant of artificial ingredients fit well with our objectives. When we pressed the client, we learned that they wanted to document this ambiguity. If a third of consumers don’t know whether their soft drinks have artificial ingredients in them, this would be useful information for our client to know.

This is a good example of how a seemingly simple question can have a lot of thinking behind it, and of how important it is to contextualize this reasoning when reporting results. In this case, we are not really measuring whether people are drinking soft drinks with artificial ingredients. We are measuring what they think they are doing, which is not the same thing and is likely more relevant from a marketing point of view.

There are other times when a DK option makes sense to include. For instance, some researchers will conflate the lack of an opinion (a DK response) with a neutral opinion, and these are not the same thing. Suppose we ask, “How would you rate the job Joe Biden is doing as President?” Someone who answers in the middle of the response scale likely has a considered, neutral opinion of Joe Biden. Someone answering DK has not considered the issue and should not be assumed to have a neutral opinion of the president. This is another case where it might make sense to use DK.

However, there are probably more times when including a DK option is a result of lazy questionnaire design than of any deep thought regarding objectives. In practice, I have found that it tends to be clients who are inexperienced in market research who press hardest to include DK options.

There are at least a couple of serious problems with including DK options on questionnaires. The first is “satisficing” – the tendency of respondents to put little effort into responding and instead choose the option that requires the least cognitive effort. The DK option encourages satisficing. A DK option also allows respondents to disengage from the survey and can lead to inattention on subsequent items.

The second is that DK responses create difficulties when analyzing data. We like to look at questions on a common base of respondents, and that becomes difficult when respondents choose DK on some questions but not others. Including DK makes it harder to compare results across questions. DK options also limit the ability to use multivariate statistics, as a DK response does not fit neatly on a scale.
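
A small illustration of the common-base problem, using invented data in Python. Each question looks healthy on its own, but the base of respondents who answered all of them – the base that cross-question comparisons and multivariate work require – shrinks multiplicatively:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1_000  # hypothetical respondents

def ratings_with_dk(dk_rate):
    """5-point ratings where a DK response is recorded as missing."""
    vals = rng.integers(1, 6, size=n).astype(float)
    vals[rng.random(n) < dk_rate] = np.nan
    return vals

df = pd.DataFrame({
    "q1": ratings_with_dk(0.10),
    "q2": ratings_with_dk(0.15),
    "q3": ratings_with_dk(0.12),
})

print(df.notna().sum())  # each question alone: roughly 850-900 answers
print(len(df.dropna()))  # answered all three: ~1,000 x .90 x .85 x .88 = ~673
```

Three questions with modest DK rates already cost roughly a third of the sample; a long survey full of DK options can leave almost no common base at all.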

Critics would say that researchers should not force respondents to express an opinion they do not have and therefore should provide DK options. I would counter that if you expect a substantial number of people to have no opinion, odds are high you should reframe the question and ask them about something they do know about. It is usually (but not always) the case that we want to find out what people know rather than what they don’t know.

“Don’t know” can be a plausible response. But more often than not, even when it is plausible, if we expect a lot of people to choose it, we should reconsider why we are asking the question. Yes, we don’t want to force people to express an opinion they don’t have. But rather than include DK, it is better to rewrite the question to be more inclusive of everybody.

As an extreme example, here is a scenario that shows how a DK can be designed out of a question:

We might start with a question the client provides us: “How many minutes does your child spend doing homework on a typical night?” It wouldn’t take much pretesting to realize that many parents don’t really know the answer, so our initial reaction might be to include a DK option. If we don’t, parents may give an uninformed answer.

However, upon further thought, we should realize that we may not really care how many minutes the child spends on homework, and we don’t need to know whether the parent knows this precisely. Thinking even more deeply, some kids are much more efficient with their homework time than others, so measuring quantity isn’t really what we want at all. What we really want to know is whether the child’s homework load is appropriate and effective from the parent’s perspective.

This probing may lead us to better questions, such as “In your opinion, does your child have too much, too little, or about the right amount of homework?” or “Does the time your child spends on homework help enhance his/her understanding of the material?” This is another case where thinking more about why we are asking the question results in better questions being posed.

This sort of scenario happens a lot when we start out thinking we want to ask about a behavior, when what we really want to do is ask about an attitude.

The academic research on this topic is fairly inconclusive and sometimes contradictory. I think that is because academic researchers don’t consider the most basic question: whether including DK will better serve the client’s needs. There are times when understanding that respondents don’t know is useful. But, in my experience, more often than not, if a lot of respondents choose DK, it means the question wasn’t designed well.

The two (or three) types of research projects every organization needs

Every once in a while, I’ll get a call from a former client or colleague who has started a new market research job. They will be in their first role as a research director or VP with a client-side organization. Because they are now in a position to set their organization’s research agenda, they ask for my thoughts on how to structure their research spending. I have received calls like this about a dozen times over the years.

I advise these researchers that two types of research stand above all others, and that their initial focus should be to get those set up correctly. The first is tracking their product volume. Most organizations know how many products they are producing and shipping, but it is surprising how many lose track of where their products go from there. To do a good job, marketers must know how their products move through the distribution system all the way to the end consumer. So, that becomes my first recommendation: know precisely who is buying and using your products at every step along the way, in as much detail as possible.

The second type of research I suggest is customer satisfaction research. Understanding how customers use products and measuring their satisfaction is critical. Better yet, the customer satisfaction measuring system should be prescriptive and indicate what is driving satisfaction and what is detracting from it.

Most marketing decisions can be made if these two types of research systems are well designed. If marketers have a handle on precisely who is using their products and what is enhancing or detracting from their satisfaction, most are smart enough to make solid decisions.

When pressed for what the third type of research should be, I usually say that qualitative research is important. I’d put in place a regular program of in-person focus groups or usability projects and compel key decision makers to attend them. I once consulted for a consumer packaged goods client and discovered that not a single person in their marketing department had spoken directly with a consumer of their products in the past year. There is sometimes too much of a gulf between the corporate office and the real world, and qualitative research can help close that gap.

Only when these three things are in place and being well-utilized would I recommend that we move forward with other types of research projects. Competitive studies, new product forecasting, advertising testing, etc. probably take up the lion’s share of most research budgets currently. They are important, but in my view should only be pursued after these first three types of research are fully implemented.

Many research departments get distracted by conducting too many projects of too many types. Focus is important. When decision makers have the basic numbers they need and are in tune with their customer base, they are in a good position to succeed, and it is market research’s role to provide this framework.

Wow! Market research presentations have changed.

I recently led an end-of-project presentation over Zoom. During it, I couldn’t help but think how market research presentations have changed over the years. There was no single event or time period that changed the nature of research presentations, but if you teleported a researcher from the 1990s to a modern presentation, they would feel a bit uncomfortable.

I have been in hundreds of market research presentations — some led by me, some led by others, and I’ve racked up quite a few air miles getting to them. In many ways, today’s presentations are more effective than those in the past. In some other ways, quality has been lost. Below is a summary of some key differences.

Today’s presentations are:

  • Far more likely to be conducted remotely over video or audio. COVID-19 disruptions accelerated a trend that was underway well before 2020. This has made presentations easier to schedule because not everyone has to be available in the office, and it allows clients and suppliers to take part from their homes, hotels, and even their vehicles. It seems clear that a lasting effect of the pandemic will be that research presentations are conducted via Zoom by default. There are pluses and minuses to this. For the first time in 30 years, I find myself working with clients I have never met in person.
  • Much more likely to bring in data and perspectives from outside the immediate project. In the past, research projects and presentations tended to be standalone events, concentrating solely on the study’s area of inquiry. Today’s presentations are often integrated into a wider-reaching strategic discussion that goes beyond the questions the research addresses.
  • More interactive. In yesteryear, the presentation typically consisted of the supplier running through the project results and implications for 45 minutes, followed by a period of Q&A. It was rare to be interrupted before the Q&A portion of the meeting. Today’s presentations are often not presentations at all. As suppliers, we feel more like emcees leading a discussion than experts presenting findings.
  • More inclusive of upper management. We used to present almost exclusively to researchers and mid-level marketers. Now, we tend to see a lot more marketing VPs and CMOs, strategy officers, and even the occasional CEO. It used to be rare for our reports to make it to the CEO’s desk. Now, I’d say most of the time they do. This is indicative of the increasing role data and research play in business today.
  • Far more likely to integrate the client’s perspective. In the past, internal research staff rarely tried to change or influence our reports and presentations, preferring to keep some distance and then add their perspective separately. Clients have become much more active in reviewing and revising supplier reports and presentations.

Presentations from the 1990’s were:

  • A more thorough presentation of the findings of the study. They told a richer, more nuanced story. They focused a lot more on storytelling and building a case for the recommendations. Today’s presentations often feel like a race to get to the conclusions before you get interrupted.
  • More confrontational. Being challenged on the study method, data quality, and interpretations was more commonplace a few decades ago. I felt a much greater need to prepare and rehearse than I do today, because I am not as in control of the flow of the meetings as I once was. In the past, I felt I had to know the data in great detail, and it was difficult for me to present a project if I wasn’t the lead analyst on it. Today, that is much less of a concern.
  • More strategic. This refers more to the content of the studies than to the presentations themselves. Since far fewer studies were being done, the ones that were tended to inform high-consequence decisions. While plenty of strategic studies are still conducted, so many studies are being done today that many of them inform smaller, low-consequence, tactical decisions.
  • More relaxed. Timelines were looser, so research projects were planned well in advance and fed into a wider strategic process. That still happens, but a lot of today’s projects are completed quickly (often too quickly) because information is needed to make a decision that wasn’t even on the radar a few weeks prior.
  • More of a “show.” In the past we rehearsed more, were concerned about the graphical design of the slides, and worried about the layout of the room. Today, there is rarely time for that.
  • More social. Traveling in for a presentation meant spending time beforehand with clients, touring offices, and almost always going to lunch or dinner afterward. Even before the COVID/Zoom era, recent presentations tended to be “in and out” affairs – the supplier greets the clients, gives the presentation, and leaves. While there are many pluses to this, some (I’d actually say most) of the best researchers I know are introverts who were never comfortable with this forced socialization. Those types of people are going to thrive in the new presentation environment.

Client-side researchers planned much further ahead in the past. Annually, they would go through a planning phase in which all the projects for the year would be budgeted and placed on a timeline. The research department would then execute against that plan. More recently, our clients often don’t know what projects they will be working on a few weeks out, because many of today’s projects take just days from conception to execution.

I have also noticed that while clients are commissioning more projects, they seem to be using fewer suppliers than in the past. I think this is because studies are being done so quickly that clients don’t have time to manage more than a few supplier relationships. Bids aren’t as competitive and are more likely to be sole-sourced.

Clients are thus developing closer professional relationships with their suppliers. Suppliers are closer partners with clients than ever before, but with this comes a caution: it becomes easy to lose third-party objectivity when we get too close to the people and issues at hand and when clients have too heavy a hand in the report process. In this sense, I prefer the old days, when we provided a perspective and our clients would then add their own POV. Now, we often meld the two into one presentation, and at times we lose the value that comes from a back-and-forth disagreement over what the findings mean to a business.

If I teleported my 1990s self to today, I would be amazed at how quickly projects go from conception to final presentation – literally about one-third the time it used to take. There are many downsides to going too fast, and clients rarely focus on or care about them; they seem to prefer getting something 90% right tomorrow to waiting for a perfect project.

There is even a new category of market research called “agile research” that seeks to provide real-time data. I am sure it is a category that will grow, but those employing it need to keep in mind that providing data faster than managers can act on it can actually be a disservice to the client. It is an irony of our field that more data and continuous data can actually slow down decision making.  

Today’s presentations are less stressful, more inclusive, and more strategic. The downside is there are probably too many of them – clients are conducting too many projects on minor issues, they don’t always learn thoroughly from one study before moving onto the next, and researchers are sometimes being rewarded more for getting things done than for providing insight into the business.

