In the 1990s I was working on a project for a consumer packaged goods client in a “low-involvement” category. The product was inexpensive (at the time it sold for less than $2) and ubiquitous – the category was owned by more than 90% of U.S. households. But it was also mundane and sold mainly on a price basis. More than two-thirds of the category volume was private label. For all intents and purposes, the category was a commodity.
Our client wanted to delve deeply into the consumer’s mindset when buying and using the product. They had devised a list of 36 product attributes and our task was to discover which of these differentiated their product from the competition and drove sales. This is a common project type, but it was entirely unworkable for this particular study.
The reason? The product was so low-involvement and inexpensive that customers really never thought about it, let alone its performance on 36 nuanced characteristics. I personally hadn’t ever heard of at least half of the attributes despite using the product my entire life. We were asking consumers to differentiate traits they had never considered in advance.
We proceeded to build a questionnaire and conduct a study, and predictably found that the 36 items were all highly correlated with each other. We applied some statistical wizardry (factor analysis) to demonstrate that, essentially, consumer opinion on the category came down to whether they could recognize the brand and how much it cost. In effect, there were really only two questions to ask, yet we had asked 36.
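To see why 36 correlated items can collapse to two, here is a minimal sketch of what that statistical wizardry looks like. The data here are simulated, not the client's: 500 hypothetical respondents rate 36 attributes, but their answers are secretly driven by only two latent factors (think brand recognition and price). Eigenvalues of the correlation matrix stand in for a full factor analysis as a quick way to count the factors worth retaining.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 36

# Two latent drivers per respondent (e.g., brand recognition, price
# sensitivity) generate all 36 observed ratings, plus some noise.
latent = rng.normal(size=(n_respondents, 2))
loadings = rng.normal(size=(2, n_items))   # how each item reflects the drivers
noise = 0.3 * rng.normal(size=(n_respondents, n_items))
ratings = latent @ loadings + noise        # the 36 "attribute" ratings

# Eigenvalues of the item correlation matrix: a quick proxy for how
# many factors a factor analysis would retain (Kaiser rule: keep
# eigenvalues above 1).
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

explained_by_two = eigenvalues[:2].sum() / eigenvalues.sum()
print(f"Share of variance captured by two factors: {explained_by_two:.0%}")
print(f"Third-largest eigenvalue: {eigenvalues[2]:.2f}")
```

In a run like this, the first two eigenvalues dwarf the rest and the third falls well below the Kaiser threshold of 1 – exactly the pattern we saw in the real study: 36 questions, two answers.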
It took me quite a while to understand why our study had failed. It really came down to understanding that writing a good question is brain surgery without the mess. One way to view the questionnaire-writing task is that we are trying to get inside the respondent’s brain and retrieve an opinion. This works very well for high-involvement decisions and for issues where a respondent is likely to have formulated an opinion before we survey them. Suppose we want to find out how much someone likes their job, how they view their local school district, or what color they feel the sky is. For the most part, we are conducting simple brain surgery – going inside their brain and, via a carefully worded question, plucking out an established opinion.
But with low-involvement items, there is nothing there to retrieve. People just haven’t thought about 36 different buying attributes for low-involvement products. We are asking them to figure out what the attribute means, formulate an opinion, and express it to us in about 10 seconds’ time. They will provide an answer, but it will carry an enormous amount of error.
In short, when you ask a question, you will get an answer. That doesn’t mean that answer will be meaningful or even accurate. Low-involvement products are low-risk and involve little consequence of making a “wrong” decision. Consumers apply more heuristic approaches in these situations.
For the most part, the more we try to retrieve already established thoughts on surveys, the more accurate and useful our data are. This doesn’t mean we cannot research low-involvement products, but it does imply you have to pose questions a respondent can actually answer. Sometimes this means the questioning has to be simpler, or expressed in a clear choice task, or that we need to move to experimental designs.
As researchers, we have to understand that consumers’ lives are hectic and settling on a limited number of easily comprehensible decision criteria for low-involvement items is how the consumer world works. In the end, I think if consumers really contemplated 36 attributes in the real world, the product would have sold for a much higher price. It just wasn’t worth their time to consider a $2 purchase in this much detail.