
Your grid questions probably aren’t working

Convincing people to participate in surveys and polls has become so challenging that more attention is going toward preventing them from abandoning (suspending) a survey once they choose to respond.

Most survey suspends occur in one of two places. The first is at the initial screen the respondent sees. Respondents click through an invitation, and many quickly decide that the survey isn’t for them and abandon the effort.

The second most common place is the first grid question respondents encounter. They see an imposing grid question and decide it isn’t worth their time to continue. It doesn’t matter where this question is placed – this happens whether the first grid question is early in the questionnaire, in the middle, or toward the end.

Respondents hate answering grid questions. Yet clients continue to ask them, and survey researchers include them without much thought. The quality of data they yield tends to be low.

A measurement error issue with grid questions is known as “response set bias.” When we present a list of, say, ten items, we want the respondent to make an independent judgment of each, unrelated to what they think of the others. But, with a long list of items, that is not what happens. Instead, when people respond to later questions, they remember what they said earlier. If I indicated that feature A in the list was “somewhat important” to me, then when I assess feature B it is natural to think about how it compares in importance to feature A. This introduces unwanted correlations into the data set.

Instead, we want a respondent to assess feature A, clear their mind entirely, and then assess feature B. That is a challenging task, but placing features on a long, intimidating list makes it nearly impossible. Some researchers think we eliminate this error by randomizing the list order, but all that does is spread the error out. It is important to randomize the options so this error doesn’t concentrate on just a few items, but randomization does not solve the problem.

Other familiar errors also lurk in long grid questions: fatigue biases (respondents attend less to items late in the list), question order biases, priming effects, recency biases, and so on. In short, grid questions invite many measurement errors, and we end up crossing our fingers and hoping some of them cancel each other out.

This is admittedly a mundane topic, but it is the one questionnaire design issue I have the most difficulty convincing clients to do something about. Grid questions capture a lot of data in a short amount of questionnaire time, so they are enticing for clients.

I prefer a world where we seldom ask them. If we must, we recommend at most one or two per questionnaire, with never more than four to six items in each. I rarely succeed in convincing clients of this.

“Textbook” explanations of the problems with grid questions do not include the issue that bothers me most: the question respondents hear and respond to is often not the literal question we composed.

Consider a grid question like this, with a 5-point importance scale as the response options:

Q: How important were the following when you decided to buy the widget?

  1. The widget brand cares about sustainability
  2. The price of the widget
  3. The color of the widget is attractive to you
  4. The widget will last a long time

Think about the first item (“The widget brand cares about sustainability”). The client wants to understand how important sustainability is in the buying decision. How important of a buying criterion is sustainability?

But that is likely not what the respondent “hears.” The respondent will probably read the item as asking whether they care about sustainability, and who doesn’t? As a result, sustainability tends to be overstated as a decision driver when the data are analyzed. Respondents don’t leap to thinking about sustainability as a buying consideration; instead, they respond about sustainability in general.

Clients and suppliers must realize that respondents do not parse our words as we would like them to, and they do not always attend to our questions. We need to anticipate this.

How do we fix this issue? We should be more straightforward in how we ask questions. In this example, I would prefer to derive the importance of sustainability in the buying decision. I’d include a question asking how much they care about sustainability (phrased carefully so responses spread across the answer choices). Then, in a second question, I would gather a dependent variable asking how likely they are to buy the widget in the future.

A regression or correlation analysis would provide coefficients across variables that indicate their relative importance. Yes, it would be based on correlations and not necessarily causation. In reality, research studies rarely set up the experiments necessary to give evidence of causation, and we should not get too hung up on that.
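For readers who want to see what this derived-importance approach looks like in practice, here is a minimal sketch in Python using pandas and statsmodels. The data frame and column names are hypothetical stand-ins for the attitude questions and the purchase-intent dependent variable described above; the standardized coefficients and correlations only indicate relative, correlational importance, not causation.

```python
# Minimal sketch of deriving importance from survey data.
# Assumes a pandas DataFrame `df` with hypothetical columns:
# attitude items on a 5-point scale ("cares_about_sustainability",
# "price_matters", "durability_matters") and a dependent variable
# "likelihood_to_buy" (e.g., a 5-point purchase-intent scale).
import pandas as pd
import statsmodels.api as sm

predictors = ["cares_about_sustainability", "price_matters", "durability_matters"]

# Standardize predictors so coefficients are on a comparable scale and can be
# read as rough indicators of relative importance.
X = (df[predictors] - df[predictors].mean()) / df[predictors].std()
X = sm.add_constant(X)
y = df["likelihood_to_buy"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.params.drop("const").sort_values(ascending=False))

# Simple bivariate correlations tell a similar story and are easier to explain.
print(df[predictors + ["likelihood_to_buy"]].corr()["likelihood_to_buy"])
```

The choice of regression versus simple correlations is a trade-off: regression coefficients account for overlap among the attitude items, while correlations are easier to explain to clients.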

I would conclude that sustainability is an important feature if it showed a high coefficient in the regression and if something else in other questions or open-ends indicated sustainability mattered from another angle. Always look for another data point or another data source that supports your conclusion.

Grid questions are the most overrated and overused type of survey question. Clients like them, but they tend to provide poor-quality data. Use them sparingly and look for alternatives.

Less useful research questions

Questionnaire “real estate” is limited and valuable. Most surveys fielded today are too long, and this causes problems with respondent fatigue and trust. Researchers tend to start the questionnaire design process with good intent, aiming to keep the survey experience short and compelling for respondents. However, it is rare to see a questionnaire get shorter as it undergoes revision and review, and the result is often an impossibly long survey.

One way to guard against this is to be mindful: every question included should have a clear purpose and tie back to the study objectives. Many times, researchers include questions and options simply out of habit, not because they will add value to the project.

Below are examples of question types that, more often than not, add little to most questionnaires. These questions are common and used out of habit. There are certainly exceptions when it makes sense to include them, but for the most part we advise against using them unless there is a specific reason to do so.

Marital status

Somewhere along the way, asking a respondent’s marital status became standard on most consumer questionnaires. Across thousands of studies, I can only recall a few times when I have actually used it for anything. It is appropriate to ask if it is relevant. Perhaps your client is a jewelry company or in the bridal industry. Or, maybe you are studying relationships. However, I would nominate marital status as being the least used question in survey research history.

Other (specify)

Many multiple-response questions ask a respondent to select all that apply from a list and then include “other” as a final option. Clients constantly pressure researchers to leave a space for respondents to type out what this “other” option is, yet we rarely look at what they type in. I tell clients that if we expect many respondents to select the other option, it probably means we have not done a good job developing the list. It may also mean we should be asking the question in an open-ended fashion. Even when the space is included, most respondents who select other will not type anything into the little box anyway.

Don’t Know Options

We recently composed an entire post about when to include a Don’t Know option on a question. To sum it up, the incoming assumption should be that you will not include a Don’t Know option unless you have an explicit reason to do so. Including Don’t Know as an option can make a data set hard to analyze. There are exceptions to this rule, as Don’t Know can be an appropriate choice, but it is currently overused on surveys.

Open-Ends

The transition from telephone to online research has completely changed how researchers can ask open-ended questions. In the telephone days, we could pose questions that were very open-ended because we had trained interviewers who could probe for meaningful answers. With online surveys, open-ended questions that are too loose rarely produce useful information. Open-ends need to be specific and targeted. We favor including just a handful of open-ends in each survey and making them a bit less “open-ended” than what has traditionally been asked.

Grid questions with long lists

We have all seen these. These are long lists of items that require a scaled response, perhaps a 5-point agree/disagree scale. The most common abandon point on a survey is the first time a respondent encounters a grid question with a long list. Ideally, these lists contain about four to six items, and there are no more than two or three of them on a questionnaire.

We are currently fielding a study that includes a list like this with 28 items. There is no way we are getting good information from this question, and we are fatiguing the respondent for the remainder of the survey.

Specifying time frames

Survey research often seeks to learn about a behavior across a specified time frame. For instance, we might want to know if a consumer has used a product in the past day, past week, past month, etc. The issue here is not so much the time frame as it is treating the responses literally. I have seen clients take past-day usage, multiply it by 365, and assume that equates to past-year usage. Technically and mathematically, that might be true, but it isn’t how respondents react to questions.

In reality, responses are likely accurate when we ask if a respondent has done something in the past day. But once the time frames get longer, we are really asking about “ever” usage. It depends a bit on the purchase cycle of the product and its cost, but for most products, asking whether they have used it in the past month, 6 months, year, etc. will yield similar responses.

Some researchers work around this by just asking “ever used” and “recently used.” There are times when that works, but we tend to set a reasonable time frame for recent use and go with that, typically within the past week.

Household income

Researchers have asked about household income for as long as the survey research field has existed. There are at least three serious problems with it. First, many respondents do not know what their household income is. Most households have a “family CFO” who takes the lead on financial issues, and even this person often will not know what the family income is.

Second, the categories chosen affect the responses to the income question, indicating just how unstable the measure is. Asking household income in, say, ten categories versus five categories will not result in comparable data. Respondents tend to assume the middle of the range presented is “normal” and respond using that as a reference point.

Third, and most importantly, household income is a lousy measure of socio-economic status (SES). Many young people have low annual incomes but a wealthy lifestyle because they are still being supported by their parents. Many older people are retired and may have almost non-existent incomes, yet live a wealthy lifestyle off of their savings. Household income tends to be a reasonable measure of SES only for respondents aged about 30 to 60.

There are better measures of SES. Education level can work, and a particularly good question is to ask the respondent about their mother’s level of education, which has been shown to correlate strongly with SES. We also ask about their attitudes towards their income – whether they have all the money they need, just enough, or if they struggle to meet basic expenses.

Attention spans are getting shorter, and as more and more surveys are completed on mobile devices, there are plenty of distractions while respondents answer questionnaires. Engage them, get their attention, and keep the questionnaire short. There may be no such thing as a dumb question, but there are certainly questions that, when asked on a survey, do not yield useful information.

