Archive for the 'Marketing' Category

What is the Potential for Chatbots for Market Research?

AI tools, specifically chatbots such as ChatGPT, are buzzing among market researchers. While chatbots aren’t brand new, ChatGPT’s capabilities have brought the technology to the minds of many for the first time. ChatGPT seems uncannily accurate and less “artificial” than its AI predecessors. It is as if we are watching Frankenstein’s monster become aware in real time.

History shows that there is always a hype curve with new technology. Initially, there is strong enthusiasm that forecasts a greater potential than will ever be realized. Then, as reality sets in, people adjust their expectations downward – usually too far – and disillusionment with the technology ensues. Finally, the technology finds its level – usually below its initial expectations and greater than its disillusionment level.

This is an MBA-like way of saying that chatbots probably won’t take over our field, but they can potentially change how researchers work. They can create efficiencies by saving time and money and may launch new ways of researching consumers that have not been undertaken before.

Chatbots have the potential to significantly improve market research by providing a fast and efficient way to collect data from a large number of people in a short amount of time. Chatbots can be programmed to ask specific questions and record responses in real-time, making the data collection process more efficient and accurate.

Some potential benefits of using chatbots for market research include:

  • Improved efficiency: Chatbots can collect data from multiple respondents simultaneously, reducing the time and resources required for data collection.
  • Increased accuracy: Chatbots can ask questions in a consistent and unbiased manner, reducing the risk of human error and bias in data collection.
  • Greater reach: Chatbots can reach a larger audience, including those who may not typically participate in market research studies.
  • Real-time insights: Chatbots can provide real-time insights into consumer preferences and behaviors, allowing businesses to quickly respond to changes in the market.
  • Cost-effectiveness: Chatbots can be programmed to automate the data collection process, reducing the need for human researchers and lowering the overall cost of market research.

I see immediate potential for ChatGPT in qualitative research. On online bulletin boards, a chatbot can probe responses more instantly, automatically, and patiently than a human interviewer can. I can envision automated IDIs being developed: online one-on-one conversations that do not require an interviewer. This will make IDIs more affordable, and more will be done. This is a good thing, as quantitative studies are suffering from data quality issues and IDIs are becoming a more critical element of many research programs.

I see less of an immediate potential for open-ends in quantitative surveys. Yes, chatbots can probe an open-ended response. But, their ability to do this well requires a good initial response from the respondent. The quality of open-ended responses in online surveys tends to be weak and is declining. The bot requires a thoughtful response to probe on, so I don’t see this as a good use of these bots until we improve the initial responses we get. For now, I see this being much more effective when surveying an engaged audience, such as customers, or in business-to-business studies.

At some point, I suspect that chatbots will make quantitative surveys more adaptive. Follow-up survey questions can automatically appear depending on earlier responses. Yes, we can do that now with survey logic and branching, but the bots may be able to ask questions we haven’t even thought of. When analyzing data at the end of every research project, we always discover questions we wish we had asked. Chatbots may be able to develop those for us before we even see the data.

On the downside, chatbots will also beget even more survey fraud. To date, it is pretty easy to look at open-end responses and tell which ones were written by a bot. That may not be as easy in the future, and more fraudsters will defeat our defenses. This raises a strange question: will we use AI to detect the presence of AI fraud tools?

It is essential to know how chatbots work. They aren’t human. But, like humans, they are “trained” on past information. A chatbot’s advantage over humans is that it can be trained on a vast amount of data, which gives it enormous potential to “think” faster than a human can.

But, because they are trained on past information, chatbots will be most accurate when tasked with researching issues with a past analog. They are likely less accurate in predicting the future for new and novel products and questions. And that is often the most important role for research. The job of market research is to provide information to guide future decisions.

As currently designed, these bots will likely be more accurate and helpful in areas where technical, encyclopedic knowledge prevails. We have heard how ChatGPT can ace the SAT exam. But, at least currently, it does much better on AP exams in science than in English.

Back to the original point: chatbots are probably over-hyped right now, but don’t discount their potential too much. What if I told you that two of the paragraphs of this post were generated by ChatGPT? Do you think you can tell which ones?

Be afraid. Be very afraid!

The value of looking at data from more than one perspective

About 20 years ago, I flew to the Midwest to present the findings from an extensive project. My audience included the head of marketing, my direct research client, and the firm’s CEO. We constructed an insightful study that profiled the market my client played in, their position, and their competitive strengths and weaknesses.

I spent about an hour presenting the study findings and fielding questions. It went great. It was one of those meetings where I knew our work would affect this company, and the CEO seemed to buy into taking action based on our recommendations.

Then, with about five minutes to go in the meeting, I asked if there were any follow-up analyses they would like us to do. The CEO said, “Yes, there is one thing …”

He then instructed me to take a couple of weeks to do a new analysis and then to fly back out and present it to him. I was at first taken aback, as I thought the project was over, and I was ready to declare victory and move on to other things.

The analysis he requested? He told me to imagine that his largest competitor would call me tomorrow. I could use everything I knew about my client and the information gathered in our study. If this competitor called me, what would I tell them about how to position against my client? What are the implications of our research from his competitor’s point of view?

This is a brilliant idea. I have always believed that although research can often be quite insightful, it is more about what clients do with our data that matters. This CEO knew full well that his competitor probably had their own research firm doing a similar project to what I had just presented. He wanted to view the world from his competitor’s perspective.

It worked. I returned in a couple of weeks and gave a role-play presentation in which I pretended to work for their largest competitor. This led to a game-theory discussion of how their competition would likely react to initiatives they were considering, how they could address their weaknesses, and where their strengths mattered.

Since then, I have proposed similar analyses to many clients. I have been surprised at how few have taken me up on the offer. So, late in presentations, I often slip in a few slides showing what I would tell their competition based on the study findings if I worked for them.

If I were a client-side researcher, I’d ask my researchers to do this regularly. It forces us to do a better job of checking our biases because, like it or not, we want our data to show our clients are succeeding. We know how much work they put in, and it isn’t easy to tell them where their weaknesses are. Looking at the data from another angle gives us the space to be more agnostic in our conclusions, provides better insight to clients, and makes us less likely to tell clients what they want to hear.

The request from this CEO made me a better, more empathetic researcher. We worked with his firm for about 15 years; he recently retired. He will always be in my “client hall of fame” because of his willingness to view research results objectively and his insistence that we consider all perspectives.

Clients hire us so they can learn from us, but often they don’t realize how much we learn from them.

Things That Surprised Me When Writing a Book

I recently published a book outlining the challenges election pollsters face and the implications of those challenges for survey researchers.

This book was improbable. I am neither an author nor a pollster, yet I wrote a book on polling. It is the result of a curiosity that got away from me.

Because I am a new author, I thought it might be interesting to list unexpected things that happened along the way. I had a lot of surprises:

  • How quickly I wrote the first draft. Many authors toil for years on a manuscript. The bulk of POLL-ARIZED was composed in about three weeks, working a couple of hours daily. The book covers topics central to my career, and it was a matter of getting my thoughts typed and organized. I completed the entire first draft before telling my wife I had started it.
  • How long it took to turn that first draft into a final draft. After I had all my thoughts organized, I felt a need to review everything I could find on the topic. I read about 20 books on polling and dozens of academic papers, listened to many hours of podcasts, interviewed polling experts, and spent weeks researching online. I convinced a few fellow researchers to read the draft and incorporated their feedback. The result was a refinement of my initial draft and arguments and the inclusion of other material. This took almost a year!
  • How long it took to get the book from a final draft until it was published. I thought I was done at this point. Instead, it took another five months to get it in shape to publish – to select a title, get it edited, commission cover art, set it up on Amazon and other outlets, etc. I used Scribe Media, which was expensive, but this process would have taken me a year or more if I had done it without them.
  • That going for a long walk is the most productive writing tactic ever. Every good idea in the book came to me when I trekked in nature. Little of value came to me when sitting in front of a computer. I would go for long hikes, work out arguments in my head, and brew a strong cup of coffee. For some reason, ideas flowed from my caffeinated state of mind.
  • That writing a book is not a way to make money. I suspected this going in, but it became clear early on that this would be a money-losing project. POLL-ARIZED has exceeded my sales expectations, but it cost more to publish than it will ever make back in royalties. I suspect publishing this book will pay back in our research work, as it establishes credibility for us and may lead to some projects.
  • Marketing a book is as challenging as writing one. I guide large organizations on their marketing strategy, yet I found I didn’t have the first clue about how to promote this book. I would estimate that the top 10% of non-fiction books make up 90% of the sales, and the other 90% of books are fighting for the remaining 10%.
  • Because the commission on a book is a few dollars per copy, it proved challenging to find marketing tactics that pay back. For instance, I thought about doing sponsored ads on LinkedIn. It turns out that the per-click charge for those ads was more than the book’s list price. The best money I spent to promote the book was sponsored Amazon searches. But even those failed to break even.
  • Deciding to keep the book at a low price proved wise. So many people told me I was nuts to hold the eBook at 99 cents for so long or keep the paperback affordable. I did this because it was more important to me to get as many people to read it as possible than to generate revenue. Plus, a few college professors have been interested in adopting the book for their survey research courses. I have been studying the impact of book prices on college students for about 20 years, and I thought it was right not to contribute to the problem.
  • BookBub is incredible if you are lucky enough to be selected. BookBub is a community of voracious readers. I highly recommend joining if you read a lot. Once a week, they email their community about new releases they have vetted and like. They curate a handful of titles out of thousands of submissions. I was fortunate that my book got selected. Some authors angle for a BookBub deal for years and never get chosen. The sales volume for POLL-ARIZED went up by a factor of 10 in one day after the promotion ran.
  • Most conferences and some podcasts are “pay to play.” Not all of them, but many conferences and podcasts will not support you unless you agree to a sponsorship deal. When you see a research supplier speaking at an event or hear them on a podcast, they may have paid the hosts something for the privilege. This bothers me. I understand why they do this, as they need financial support. Yet, I find it disingenuous that they do not disclose this – it is on the edge of being unethical. It harms their product. If a guest has to pay to give a conference presentation or talk on a podcast, it pressures them to promote their business rather than have an honest discussion of the issues. I will never view these events or podcasts the same. (If you see me at an event or hear me on a podcast, be assured that I did not pay anything to do so.)
  • That the industry associations didn’t want to give the book attention. If you have read POLL-ARIZED, you will know that it is critical (I believe appropriately and constructively) of the polling and survey research fields. The three most important associations rejected my proposals to present and discuss the book at their events. This floored me, as I cannot think of any topics more essential to this industry’s future than those I raise in the book. Even insights professionals who have read the book and disagree with my arguments have told me that I am bringing up points that merit discussion. This cold shoulder from the associations made me feel better about writing that “this is an industry that doesn’t seem poised to fix itself.”
  • That clients have loved the book. The most heartwarming part of the process is that it has reconnected me with former colleagues and clients from a long research career. Everyone I have spoken to who is on the client-side of the survey research field has appreciated the book. Many clients have bought it for their entire staff. I have had client-side research directors I have never worked with tell me they loved the book.
  • That some of my fellow suppliers want to kill me. The book lays our industry bare, and not everyone is happy about that. I had a competitor ask me, “Why are you telling clients to ask us what our response rates are?” I stand behind that!
  • How much I learned along the way. There is something about getting your thoughts on paper that creates a lot of learning. There is a saying that the best way to learn a subject is to teach it. I would add that trying to write a book about something can teach you what you don’t know. That was a thrill for me. But then again, I was the type of person who would attend lectures for classes I wasn’t even taking while in college. I started writing this book to educate myself, and it has been a great success in that sense.
  • How tough it was for me to decide to publish it. There was not a single point in the process when I did not consider not publishing this book. I found I wanted to write it a lot more than publish it. I suffered from typical author fears that it wouldn’t be good enough, that my peers would find my arguments weak, or that it would bring unwanted attention to me rather than the issues the book presents. I don’t regret publishing it, but it would never have happened without encouragement from the few people who read it in advance.
  • The respect I gained for non-fiction authors. I have always been a big reader. I now realize how much work goes into this process, with no guarantee of success. I have always told people that long-form journalism is the profession I respect the most. Add “non-fiction” writers to that now!

Almost everyone who has contacted me about the book has asked me if I will write another one. If I do, it will likely be on a different topic. If I learned anything, this process requires selecting an issue you care about passionately. Journalists are people who can write good books about almost anything. The rest of us mortals must choose a topic we are super interested in, or our books will be awful.

I’ve got a few dancing around in my head, so who knows, maybe you’ll see another book in the future.

For now, it is time to get back to concentrating on our research business!

POLL-ARIZED available on May 10

I’m excited to announce that my book, POLL-ARIZED, will be available on May 10.
 
After the last two presidential elections, I was fearful my clients would ask a question I didn’t know how to answer: “If pollsters can’t predict something as simple as an election, why should I believe my market research surveys are accurate?”
 
POLL-ARIZED is the result of a year-long rabbit hole that question led me down! In the process, I learned a lot about why polls matter, how today’s pollsters are struggling, and what the insights industry should do to improve data quality.
 
I am looking for a few more people to read an advance copy of the book and write an Amazon review on May 10. If you are interested, please send me a message at poll-arized@cruxresearch.com.

Questions You Are Not Asking Your Market Research Supplier That You Should Be Asking

It is no secret that providing representative samples for market research projects has become challenging. While clients are always focused on obtaining respondents quickly and efficiently, it is also important that they are concerned with the quality of their data. The reality is that quality is slipping.

While there are many causes of this, one that is not discussed much is that clients rarely ask their suppliers the tough questions they should. Clients are not putting pressure on suppliers to focus on data quality. Since clients ultimately control the purse strings of projects, suppliers will only improve quality if clients demand it.

I can often tell if I have an astute client by their questions when we are designing studies. Newer or inexperienced clients tend to start by talking about the questionnaire topics. Experienced clients tend to start by talking about the sample and its representativeness.

Below is a list of a few questions that I believe clients should be asking their suppliers on every study. The answers to these are not always easy to come by, but as a client, you want to see that your supplier has contemplated these questions and pays close attention to the issues they highlight.

For each, I have also provided a correct or acceptable answer to expect from your supplier.

  • What was the response rate to my study? While it was once commonplace to report response rates, suppliers now try to dodge this issue. Most data quality issues stem from low response rates. Correct answer: for most studies, under 5%. Unless the survey is being fielded among a highly engaged audience, such as your customers, you should be suspicious of any answer over 15%. “I don’t know” is an unacceptable answer. Suppliers will also try to convince you that response rates do not matter, even though nearly every data quality issue we experience stems from inadequate response to our surveys.
  • How many respondents did you remove in fielding for quality issues? This is an emerging issue. The number of bad-quality respondents in studies has grown substantially in just the last few years. Correct answer: at least 10%, but preferably between 25% and 40%. If your supplier says 0%, you should question whether they are properly paying attention to data quality issues. I would guide you to find a different supplier if they cannot describe a process to remove poor-quality respondents. There is no standard way of doing this, but each supplier should have an established process.
  • How were my respondents sourced? This is an essential question that is seldom asked unless our client is an academic researcher. It is a tricky question to answer. Correct answer: this is so complicated that even I have difficulty providing a cogent response to our clients. The hope is that your supplier has at least some clue as to how the panel companies get their respondents and knows whom to ask if a detailed explanation is needed. They should connect you with someone who can explain this in detail.
  • What are you doing to protect against bots? Market research samples are subject to the ugly things that happen online – hackers, bots, cheaters, etc. Correct answer: Something proactive. They might respond that they are working with the panel companies to prevent bots or a third-party firm to address this. If they are not doing anything or don’t seem to know that bots are a big issue for surveys, you should be concerned.
  • What is in place to ensure that my respondents are not being used for competitors, or vice-versa? Clients often should care that the people answering their surveys have not done another project in their product category recently. I have had cases where two suppliers working for the same client (one being us) used the same sample source and polluted the sample base for both projects because we did not know the other study was fielding. Correct answer: something proactive, if this matters to you. If your research covers brand or advertising awareness, you should account for this. If you are commissioning work with several suppliers, this takes considerable coordination.
  • Did you run simulated data through my survey before fielding? This is an essential, behind-the-scenes step that all suppliers that know what they are doing take. Running thousands of simulated surveys through the questionnaire tests survey logic and ensures that the right people get to the right questions. While it doesn’t prevent all errors, it catches many of them. Correct answer: Yes. If the supplier does not know what simulated data is, it is time to consider a new supplier.
  • How many days will my study be in the field? Many errors in data quality stem from conducting studies too quickly. Correct answer: varies, but this should be 10-21 days for a typical project. If your study requires difficult-to-find respondents, this could be 3-4 weeks. If the data collection period is shorter than ten days, you WILL have data quality errors, so be sure you understand the tradeoffs for speed. Don’t insist on field speed unless you need to.
  • Can I have a copy of the panel company’s answers to the ESOMAR questions? ESOMAR has put out a list of questions to help buyers of online samples. Every sample supplier worth using will have created a document that answers these questions. Correct answer: Yes. Do not work with a company that has not put together a document answering these questions, as all the good ones have. However, after reading this document, don’t expect to understand how your respondents are being sourced.
  • How do you handle requests down the road when the study is over? It is a longstanding pet peeve of most clients that suppliers charge for basic customer support after the project is over. Make sure you have set expectations properly upfront and put these expectations into the contract. Correct answer: support should last indefinitely. Many suppliers will provide support for only three or six months post-study and will charge for it. I have never understood this, as I am flattered when a client calls me to discuss a study that was done years ago; it means our study is continuing to make an impact. Our company does not charge for this follow-up unless the request requires substantial time.
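The simulated-data check mentioned above can be sketched in a few lines. The questionnaire, its single skip rule, and the respondent generator here are all hypothetical; a real implementation would exercise every route in the survey logic.

```python
import random

random.seed(0)

def run_survey(answers):
    """Route one simulated respondent through hypothetical questionnaire logic."""
    path = ["Q1"]
    if answers["Q1"] == "yes":      # only product users see Q2
        path.append("Q2")
    else:
        path.append("Q3")           # non-users skip to Q3
    return path

def random_respondent():
    """Generate a random answer set, as simulated-data testing does at scale."""
    return {"Q1": random.choice(["yes", "no"])}

# Run thousands of simulated respondents and assert the routing rules hold.
for _ in range(5_000):
    r = random_respondent()
    path = run_survey(r)
    if r["Q1"] == "yes":
        assert "Q2" in path and "Q3" not in path
    else:
        assert "Q3" in path and "Q2" not in path
```

The value of the technique is exactly this: violations of the intended routing surface as assertion failures before a single real respondent enters the field.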

There are probably many other questions clients should be asking suppliers. Clients need to get tougher on insisting on data quality. It is slipping, and suppliers are not investing enough to improve response rates and develop trust with respondents. If clients pressure them, the economic incentives will be there to create better techniques to obtain quality research data.
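To make the earlier point about removing poor-quality respondents concrete, here is a minimal sketch of two common checks: speeding and straightlining. The records, field names, and 60-second threshold are illustrative assumptions, not an industry standard.

```python
# Hypothetical respondent records: completion time in seconds plus
# answers to a 5-point grid question.
respondents = [
    {"id": 1, "seconds": 95,  "grid": [4, 2, 5, 3, 4, 2]},
    {"id": 2, "seconds": 41,  "grid": [3, 3, 3, 3, 3, 3]},  # speeder + straightliner
    {"id": 3, "seconds": 180, "grid": [5, 5, 5, 5, 5, 5]},  # straightliner
]

MIN_SECONDS = 60  # assumed minimum plausible completion time

def quality_flags(r):
    """Return a list of quality problems detected for one respondent."""
    flags = []
    if r["seconds"] < MIN_SECONDS:
        flags.append("speeder")
    if len(set(r["grid"])) == 1:   # identical answer to every grid item
        flags.append("straightliner")
    return flags

removed = [r["id"] for r in respondents if quality_flags(r)]
print(removed)  # [2, 3]
```

A supplier's real process would layer on more checks (duplicate detection, nonsense open-ends, trap questions), but this is the shape of what "an established process" should mean.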

Let’s Appreciate Statisticians Who Make Data Understandable

Statistical analyses are amazing, underrated tools. All scientific fields depend on discoveries in statistics to make inferences and draw conclusions. Without statistics, advances in engineering, medicine, and science that have greatly improved the quality of life would not have been possible. Statistics is the Rodney Dangerfield of academic subjects – it never gets the respect it deserves.

Statistics is central to market research and polling. We use statistics to describe our findings and understand the relationships between variables in our data sets. Statistics are the most important tools we have as researchers.

However, we often misuse these tools. I firmly believe that pollsters and market researchers overdo it with statistics. Basic statistical analyses are easy to understand, but complicated ones are not. Researchers like to get into complex statistics because it lends an air of expertise to what we do.

Unfortunately, most sophisticated techniques are impossible to convey to “normal” people who may not have a statistical background, and this tends to describe the decision-makers we support.

I learned long ago that when working with a dataset, any result that will be meaningful will likely be uncovered by using simple descriptive statistics and cross-tabulations. Multivariate techniques can tease out more subtle relationships in the data. Still, the clients (primarily marketers) we work with are not looking for subtleties – they want some conclusions that leap off the page from the data.

If a result is so subtle that it needs complicated statistics to find, it is likely not a large enough result to be acted upon by a client.

Because of this, we tend to use multivariate techniques to confirm what we see with more straightforward methods. Not always – as there are certainly times when the client objectives call for sophisticated techniques. But, as researchers, our default should be to use the most straightforward designs possible.

I always admire researchers who make complicated things understandable. That should be the goal of statistical analyses. George Terhanian of Electric Insights has developed a way to use sophisticated statistical techniques to answer some of the most fundamental questions a marketer will ask.

In his article “Hit? Stand? Double? Master ‘likely effects’ to make the right call,” George describes his revolutionary process. It is sophisticated behind the scenes, but I like the simplicity of the questions it can address.

He has created a simulation technique that makes sense of complicated data sets. You may measure hundreds of things on a survey and have an excellent profile of the attitudes and behaviors of your customer base. But, where should you focus your investments? This technique demonstrates the likely effects of changes.

As marketers, we cannot directly increase sales. But we can establish and influence attitudes and behaviors that result in sales. Our problem is often to identify which of these attitudes and behaviors to address.

For instance, if I can convince my customer base that my product is environmentally responsible, how many of them can I count on to buy more of my product? The type of simulator described in this article can answer this question, and as a marketer, I can then weigh if the investment necessary is worth the probable payoff.

George created a simulator on some data from a recent Crux Poll. Our poll showed that 17% of Americans trust pollsters. George’s analysis shows that trust in pollsters is directly related to their performance in predicting elections.

Modeling the Crux Poll data showed that if all Americans “strongly agreed” that presidential election polls do a good job of predicting who will win, trust in pollsters/polling organizations would increase by 44 million adults. If Americans feel “extremely confident” that pollsters will accurately predict the 2024 election, trust in pollsters will increase by an additional 40 million adults.

If we are worried that pollsters are untrusted, this suggests that improving the quality of our predictions should address the issue.
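As a back-of-envelope check on those figures, a lift expressed in millions of adults can be converted into percentage points of trust. The US adult population base below is an assumption, since the post does not state the figure used in the model.

```python
US_ADULTS = 258_000_000     # assumed: roughly 258 million US adults
baseline_share = 0.17       # Crux Poll: 17% of Americans trust pollsters

lift_adults = 44_000_000    # modeled gain in trusting adults
lift_share = lift_adults / US_ADULTS

baseline_adults = baseline_share * US_ADULTS
print(f"baseline ≈ {baseline_adults/1e6:.0f}M adults; lift ≈ {lift_share:.1%} of adults")
# → baseline ≈ 44M adults; lift ≈ 17.1% of adults
```

In other words, the modeled scenario roughly doubles the share of adults who trust pollsters, which is what makes the "44 million" framing so arresting to a marketing audience.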

Putting research findings in these sorts of terms is what gets our clients’ attention. 

Marketers need this type of quantification because it can plug right into financial plans. Researchers often hear that the reports we provide are not “actionable” enough. There is not much more actionable than showing how many customers would be expected to change their behavior if we successfully invest in a marketing campaign to change an attitude.

Successful marketing is all about putting the probabilities in your favor. Nothing is certain, but as a marketer, your job is to decide where best to place your resources (money and time). This type of modeling is a step in the right direction for market researchers.

Less useful research questions

Questionnaire “real estate” is limited and valuable. Most surveys fielded today are too long, which causes problems with respondent fatigue and trust. Researchers tend to start the questionnaire design process with good intent, aiming to keep survey experiences short and compelling for respondents. However, it is rare to see a questionnaire get shorter as it undergoes revision and review, and many times the result is an impossibly long survey.

One way to guard against this is to be mindful. All questions included should have a clear purpose and tie back to study objectives. Many times, researchers include some questions and options simply out of habit, and not because these questions will add value to the project.

Below are examples of question types that, more often than not, add little to most questionnaires. These questions are common and used out of habit. There are certainly exceptions when it makes sense to include them, but for the most part we advise against using them unless there is a specific reason to do so.

Marital status

Somewhere along the way, asking a respondent’s marital status became standard on most consumer questionnaires. Across thousands of studies, I can only recall a few times when I have actually used it for anything. It is appropriate to ask if it is relevant. Perhaps your client is a jewelry company or in the bridal industry. Or, maybe you are studying relationships. However, I would nominate marital status as being the least used question in survey research history.

Other (specify)

Many multiple response questions ask a respondent to select all that apply from a list, and then as a final option will have “other.” Clients constantly pressure researchers to leave a space for respondents to type out what this “other” option is. We rarely look at what they type in. I tell clients that if we expect a lot of respondents to select the other option, it probably means that we have not done a good job at developing the list. It may also mean that we should be asking the question in an open-ended fashion. Even when it is included, most of the respondents who select other will not type anything into the little box anyway.

Don’t Know Options

We recently composed an entire post about when to include a Don’t Know option on a question. To sum it up, the incoming assumption should be that you will not use a Don’t Know option unless you have an explicit reason to do so. Including Don’t Know as an option can make a data set hard to analyze. There are exceptions to this rule, as Don’t Know can be an appropriate choice, but it is currently overused on surveys.

Open-Ends

The transition from telephone to online research has completely changed how researchers can ask open-ended questions. In the telephone days, we could pose questions that were very open-ended because trained interviewers could probe for meaningful answers. With online surveys, open-ended questions that are too loose rarely produce useful information; open-ends need to be specific and targeted. We favor including just a handful of open-ends in each survey and making them a bit less “open-ended” than what has traditionally been asked.

Grid questions with long lists

We have all seen these: long lists of items that require a scaled response, perhaps a 5-point agree/disagree scale. The most common abandonment point on a survey is the first time a respondent encounters a grid question with a long list. Ideally, these lists contain about 4 to 6 items, and there are no more than two or three of them on a questionnaire.

We are currently fielding a study with a list like this containing 28 items. There is no way we are getting good information from this question, and we are fatiguing the respondent for the remainder of the survey.

Specifying time frames

Survey research often seeks to find out about a behavior across a specified time frame. For instance, we might want to know if a consumer has used a product in the past day, past week, past month, etc. The issue here is not so much the time frame as treating the responses literally. I have seen clients take past-day usage, multiply it by 365, and assume that equates to past-year usage. Even if the arithmetic were that simple, it isn’t how respondents react to questions.

In reality, it is likely accurate to ask if a respondent has done something in the past day. But, once the time frames get longer, we are really asking about “ever” usage. It depends a bit on the purchase cycle of the product and its cost, but for most products, asking if they have used in the past month, 6 months, year, etc. will yield similar responses.

Some researchers work around this by just asking “ever used” and “recently used.” There are times when that works, but we tend to set a reasonable time frame for recent use and go with that, typically within the past week.
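The extrapolation problem can be illustrated with a quick sketch (all numbers are invented for illustration, not data from any study). Even if past-day answers were perfectly literal, a daily usage rate cannot simply be multiplied by 365, because the same respondents are counted over and over:

```python
# Hypothetical illustration of why "past-day usage x 365" is not past-year usage.
# Assume 2% of consumers use the product on any given day (invented number).
p_day = 0.02

# Naive extrapolation: multiply the daily rate by 365.
naive_year = p_day * 365  # 7.3 -- an impossible 730% "past-year usage"

# Even granting the unrealistic assumption that each day is independent,
# the share using the product at least once in a year would be:
p_year = 1 - (1 - p_day) ** 365  # a proportion, necessarily below 100%

print(f"naive: {naive_year:.1f}, probabilistic: {p_year:.3f}")
```

In practice, neither number is trustworthy, which is the point of this section: respondents do not answer longer time frames literally, so past-month, past-6-month, and past-year questions tend to converge on “ever” usage.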

Household income

Researchers have asked household income for as long as the survey research field has been around. There are at least three serious problems with it. First, many respondents do not actually know what their household income is. Most households have a “family CFO” who takes the lead on financial issues, and even this person often will not know the family income.

Second, the categories chosen affect the response to the income question, indicating just how unstable the measure is. Asking household income in, say, ten categories versus five categories will not result in comparable data. Respondents tend to assume the middle of the range shown is normal and respond using that as a reference point.

Third, and most importantly, household income is a lousy measure of socio-economic status (SES). Many young people have low annual incomes but a wealthy lifestyle because they are still supported by their parents. Many older people are retired and may have almost non-existent incomes, yet live a wealthy lifestyle off of their savings. Household income tends to be a reasonable measure of SES only for respondents aged about 30 to 60.

There are better measures of SES. Education level can work, and a particularly good question is to ask the respondent about their mother’s level of education, which has been shown to correlate strongly with SES. We also ask about their attitudes towards their income – whether they have all the money they need, just enough, or if they struggle to meet basic expenses.

Attention spans are getting shorter, and as more and more surveys are completed on mobile devices, there are plenty of distractions as respondents answer questionnaires. Engage them, get their attention, and keep the questionnaire short. There may be no such thing as a dumb question, but there are certainly questions that, when asked on a survey, do not yield useful information.

Should you include a “Don’t Know” option on your survey question?

Questionnaire writers construct a bridge between client objectives and a line of questioning that a respondent can understand. This is an underappreciated skill.

The best questionnaire writers empathize with respondents and think deeply about tasks respondents are asked to perform. We want to strike a balance between the level of cognitive effort required and a need to efficiently gather large amounts of data. If the cognitive effort required is too low, the data captured is not of high quality. If it is too high, respondents get fatigued and stop attending to our questions.

One of the most common decisions researchers have to make is whether or not to allow for a Don’t Know (DK) option on a question. This is often a difficult choice, and the correct answer on whether to include a DK option might be the worst possible answer: “It depends.”

Researchers have genuine disagreements about the value of a DK option. I lean strongly towards not using DKs unless there is a clear and considered reason for doing so.

Clients pay us to get answers from respondents and to find out what they know, not what they don’t know. Pragmatically, whenever you are considering adding a DK option, your first inclination should be that you perhaps have not designed the question well. If a large proportion of your respondent base will potentially choose “don’t know,” odds are high that you are not asking a good question to begin with, though there are exceptions.

If you are not sure whether to include a DK option, the right thing to do is to think broadly and reconsider your goal: why are you asking the question in the first place? Here is an example that shows how the DK decision can be more complicated than it first appears.

We recently had a client that wanted us to ask a question similar to this: “Think about the last soft drink you consumed. Did this soft drink have any artificial ingredients?”

Our quandary was whether we should just ask this as a Yes/No question or to also give the respondent a DK option. There was some discussion back and forth, as we initially favored not including DK, but our client wanted it.

Then it dawned on us that whether or not to include DK depended on what the client wanted to get out of the question. On one hand, the client might want to truly understand if the last soft drink consumed had any artificial ingredients in it, which is ostensibly what the question asks. If this was the goal, we felt it was necessary to better educate the respondent on what an “artificial ingredient” was so they could provide an informed answer and so all respondents would be working from a common definition. Or, alternatively, we could ask for the exact brand and type of soft drink they consumed and then on the back-end code which ones have artificial ingredients and which do not, and thus get a good estimate for the client.

The other option was to realize that respondents might have their own definitions of “artificial ingredients” that may or may not match our client’s definition. Or, they may have no clue what is artificial and what is not.

In the end, we decided to use the DK option in this case because understanding how many people are ignorant of artificial ingredients fit well with our objectives. When we pressed the client, we learned that they wanted to document this ambiguity. If a third of consumers don’t know whether their soft drinks have artificial ingredients in them, this would be useful information for our client to know.

This is a good example of how a seemingly simple question can have a lot of thinking behind it, and how important it is to contextualize this reasoning when reporting results. In this case, we are not really measuring whether people are drinking soft drinks with artificial ingredients; we are measuring what they think they are doing, which is not the same thing and is likely more relevant from a marketing point of view.

There are other times when a DK option makes sense to include. For instance, some researchers will conflate the lack of an opinion (a DK response) with a neutral opinion, and these are not the same thing. For example, we could ask, “How would you rate the job Joe Biden is doing as President?” Someone who answers in the middle of the response scale likely has a considered, neutral opinion of Joe Biden. Someone answering DK has not considered the issue and should not be assumed to have a neutral opinion of the president. This is another case where it might make sense to use DK.

However, there are probably more times when including a DK option is a result of lazy questionnaire design than any deep thought regarding objectives. In practice, I have found that it tends to be clients who are inexperienced in market research that press hardest to include DK options.

There are at least a couple of serious problems with including DK options on questionnaires. The first is “satisficing” – the tendency of respondents to put little effort into responding and instead choose the option that requires the least cognitive effort. The DK option encourages satisficing. It also allows respondents to disengage from the survey, which can lead to inattention on subsequent items.

DK responses also create difficulties when analyzing data. We like to look at questions on a common base of respondents, and that becomes difficult when respondents choose DK on some questions but not others. Including DK makes it harder to compare results across questions. DK options also limit the ability to use multivariate statistics, as a DK response does not fit neatly on a scale.
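As a hypothetical sketch of the analysis problem (invented data; pandas is assumed available): once DK responses are recorded, each question ends up with a different base, and the common base of respondents who answered everything shrinks quickly.

```python
import numpy as np
import pandas as pd

# Invented data: five respondents rate two items on a 1-5 scale;
# "DK" marks a Don't Know response.
df = pd.DataFrame({
    "q1": [4, "DK", 5, 3, "DK"],
    "q2": [3, 2, "DK", 5, 4],
})

# To use the scale numerically, DK must be treated as missing.
num = df.replace("DK", np.nan).astype(float)

# Each question now has a different base of respondents...
print(num["q1"].count(), num["q2"].count())  # 3 and 4 respondents

# ...and the common base (answered every question) is smaller still.
common = num.dropna()
print(len(common))  # 2 of the original 5 respondents
```

With many questions, the common base can shrink toward zero, which is why means, cross-question comparisons, and multivariate techniques all become harder once DK is on the questionnaire.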

Critics would say that researchers should not force respondents to express an opinion they do not have and therefore should provide DK options. I would counter that if you expect a substantial number of people to not have an opinion, odds are high you should reframe the question and ask them about something they do know about. It is usually (but not always) the case that we want to find out more about what people know than what they don’t know.

“Don’t know” can be a plausible response. But more often than not, even when it is plausible, if we expect a lot of people to choose it, we should reconsider why we are asking the question. Yes, we don’t want to force people to express an opinion they don’t hold. But rather than include DK, it is better to rewrite the question to be more inclusive of everybody.

As an extreme example, here is a scenario that shows how a DK can be designed out of a question:

We might start with a question the client provides us: “How many minutes does your child spend doing homework on a typical night?” For this question, it wouldn’t take much pretesting to realize that many parents don’t really know the answer to this, so our initial reaction might be to include a DK option. If we don’t, parents may give an uninformed answer.

However, upon further thought, we should realize that we may not really care how many minutes the child spends on homework, and we don’t really need to know whether the parent knows this precisely. Thinking even deeper, some kids are much more efficient with their homework time than others, so measuring quantity isn’t really what we want at all. What we really want to know is whether the child’s homework load is appropriate and effective from the parent’s perspective.

This probing may lead us down a road to consider better questions, such as “in your opinion, does your child have too much, too little, or about the right amount of homework?” or “does the time your child spends on homework help enhance his/her understanding of the material?” This is another case when thinking more about why we are asking the question tends to result in better questions being posed.

This sort of scenario happens a lot when we start out thinking we want to ask about a behavior, when what we really want to do is ask about an attitude.

The academic research on this topic is fairly inconclusive and sometimes contradictory. I think that is because academic researchers don’t consider the most basic question, which is whether or not including DK will better serve the client’s needs. There are times that understanding that respondents don’t know is useful. But, in my experience, more often than not if a lot of respondents choose DK it means that the question wasn’t designed well. 


Visit the Crux Research Website www.cruxresearch.com
