Archive for the 'Quality Checks' Category

POLL-ARIZED available on May 10

I’m excited to announce that my book, POLL-ARIZED, will be available on May 10.
 
After the last two presidential elections, I was fearful my clients would ask a question I didn’t know how to answer: “If pollsters can’t predict something as simple as an election, why should I believe my market research surveys are accurate?”
 
POLL-ARIZED is the result of the year-long rabbit hole that question led me down! In the process, I learned a lot about why polls matter, how today’s pollsters are struggling, and what the insights industry should do to improve data quality.
 
I am looking for a few more people to read an advance copy of the book and write an Amazon review on May 10. If you are interested, please send me a message at poll-arized@cruxresearch.com.

Questions You Are Not Asking Your Market Research Supplier That You Should Be Asking

It is no secret that providing representative samples for market research projects has become challenging. While clients are always focused on obtaining respondents quickly and efficiently, it is just as important that they pay attention to the quality of their data. The reality is that quality is slipping.

While there are many causes of this, one that is not discussed much is that clients rarely ask their suppliers the tough questions they should. Clients are not putting pressure on suppliers to focus on data quality. Since clients ultimately control the purse strings of projects, suppliers will only improve quality if clients demand it.

I can often tell if I have an astute client by their questions when we are designing studies. Newer or inexperienced clients tend to start by talking about the questionnaire topics. Experienced clients tend to start by talking about the sample and its representativeness.

Below is a list of a few questions that I believe clients should be asking their suppliers on every study. The answers to these are not always easy to come by, but as a client, you want to see that your supplier has contemplated these questions and pays close attention to the issues they highlight.

For each, I have also provided a correct or acceptable answer to expect from your supplier.

  • What was the response rate to my study? While it was once commonplace to report response rates, suppliers now tend to dodge the issue. Most data quality problems stem from low response rates. Correct answer: For most studies, under 5%. Unless the survey is being fielded among a highly engaged audience, such as your customers, you should be suspicious of any answer over 15%. “I don’t know” is an unacceptable answer, and so is the claim that response rates do not matter, because nearly every data quality issue we experience stems from inadequate response to our surveys.
  • How many respondents did you remove in fielding for quality issues? This is an emerging issue. The number of poor-quality respondents in studies has grown substantially in just the last few years. Correct answer: At least 10%, but preferably between 25% and 40%. If your supplier says 0%, you should question whether they are properly paying attention to data quality issues. I would guide you to find a different supplier if they cannot describe a process to remove poor-quality respondents. There is no standard way of doing this, but each supplier should have an established process.
  • How were my respondents sourced? This is an essential question that is seldom asked unless the client is an academic researcher. It is also a tricky question to answer. Correct answer: This is so complicated that I have difficulty providing a cogent response to our clients. The hope is that your supplier has at least some idea of how the panel companies get their respondents and knows who to go to if a detailed explanation is needed. They should be able to connect you with someone who can explain this in detail.
  • What are you doing to protect against bots? Market research samples are subject to the ugly things that happen online – hackers, bots, cheaters, etc. Correct answer: Something proactive. They might respond that they are working with the panel companies, or with a third-party firm, to prevent bots. If they are not doing anything, or don’t seem to know that bots are a big issue for surveys, you should be concerned.
  • What is in place to ensure that my respondents are not being used for competitors, or vice-versa? Clients should often care that the people answering their surveys have not done another project in the same product category recently. I have had cases where two suppliers working for the same client (one being us) used the same sample source and polluted the sample base for both projects because we did not know the other study was fielding. Correct answer: Something, if this is important to you. If your research covers brand or advertising awareness, you should account for this. If you are commissioning work with several suppliers, this takes considerable coordination.
  • Did you run simulated data through my survey before fielding? This is an essential, behind-the-scenes step that any supplier that knows what it is doing will take. Running thousands of simulated interviews through the questionnaire tests the survey logic and ensures that the right people get to the right questions (a sketch of what this looks like appears after this list). While it doesn’t prevent all errors, it catches many of them. Correct answer: Yes. If the supplier does not know what simulated data is, it is time to consider a new supplier.
  • How many days will my study be in the field? Many data quality errors stem from conducting studies too quickly. Correct answer: Varies, but 10-21 days for a typical project. If your study requires difficult-to-find respondents, this could stretch to 3-4 weeks. If the data collection period is shorter than ten days, you WILL have data quality errors, so be sure you understand the tradeoffs for speed. Don’t insist on a fast field period unless you truly need it.
  • Can I have a copy of the panel company’s answers to the ESOMAR questions? ESOMAR has put out a list of questions to help buyers of online samples. Correct answer: Yes. Every sample supplier worth using will have created a document that answers these questions; do not work with a company that has not, as all the good ones have. However, even after reading this document, don’t expect to fully understand how your respondents are being sourced.
  • How do you handle requests after the study is over? It is a longstanding pet peeve of clients that suppliers charge for basic support once a project ends, so set expectations properly upfront and put them into the contract. Correct answer: Forever; our company only charges if support requests become substantial. Many suppliers will provide support for three or six months post-study and will charge for it. I have never understood this, as I am flattered when a client calls to discuss a study done years ago – it means the study is continuing to make an impact. We do not charge for this follow-up unless the request requires so much time that we have to.
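To illustrate the simulated-data step mentioned above, here is a minimal sketch of the idea in Python. The questions, routing rule, and function names are hypothetical, invented for this example rather than taken from any actual project; in practice, the randomly generated answers would be pushed through the programmed survey itself rather than through a toy script like this.

```python
import random

# Hypothetical two-question skip pattern: only pet owners should ever see
# the follow-up question about pet food brands.
def simulate_respondent():
    answers = {"owns_pet": random.choice(["yes", "no"])}
    if answers["owns_pet"] == "yes":  # the routing rule under test
        answers["pet_food_brand"] = random.choice(["Brand A", "Brand B", "Other"])
    return answers

def logic_ok(answers):
    # Non-owners must never reach the pet food question.
    return not (answers["owns_pet"] == "no" and "pet_food_brand" in answers)

# Run thousands of simulated interviews and count routing violations.
violations = sum(1 for _ in range(10_000) if not logic_ok(simulate_respondent()))
print(f"Routing violations found: {violations}")
```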

There are probably many other questions clients should be asking suppliers. The larger point is that clients need to get tougher about insisting on data quality. Quality is slipping, and suppliers are not investing enough to improve response rates and build trust with respondents. If clients apply pressure, the economic incentives will be there to develop better techniques for obtaining quality research data.

Which quality control questions should you use in your surveys?

While it is no secret that the quality of market research data has declined, how to address poor data quality is rarely discussed among clients and suppliers. When I started in market research more than 30 years ago, telephone response rates were about 60%: six in 10 people contacted for a market research study would choose to cooperate and take our polls. Currently, telephone response rates are under 5%. If we are lucky, 1 in 20 people will take part. Online research is no better: even from verified customer lists, response rates are commonly under 10%, and even the best research panels can have response rates under 5%.

Even worse, once someone does respond, a researcher has to guard against “bogus” interviews that come from scripts and bots, as well as individuals who are cheating on the survey to claim the incentives offered. Poor-quality data is clearly on the rise and is an existential threat to the market research industry that is not being taken seriously enough.

Maximizing response requires a broad approach, with tactics deployed throughout the process. One important step is to cleanse each project of poor-quality respondents. A hidden secret of market research is that researchers routinely have to remove anywhere from 10% to 50% of respondents from their datasets due to poor quality.

Unfortunately, there is no industry-standard way of identifying poor-quality respondents; every supplier sets its own policies. This is likely because there is considerable variability in how respondents are sourced, a one-size-fits-all approach may not be possible, and some quality checks depend on the specific topic of the study. Researchers are largely left to fend for themselves when devising a process to remove poor-quality respondents from their data.

One of the most important ways to guard against poor quality respondents is to design a compelling questionnaire to begin with. Respondents will attend to a short, relevant survey. Unfortunately, we rarely provide them with this experience.

We have been researching this issue recently in an effort to come up with a workable process for our projects. Below, we share our thoughts. The market research industry needs to work together on this issue, because when one of us removes a bad respondent from a database, it helps the next firm with its future studies.

There is a practical concern for most studies – we rarely have room for more than a handful of questions devoted to quality control. In addition to speeder and straight-line checks, studies tend to have room for about 4-5 quality control questions. We use a “three strikes and you’re out” rule: with the exception of “severe speeders,” described below, respondents are automatically removed if they fail three or more of the checks. If anything, this is probably too conservative, but we would rather err on the side of retaining some poor-quality respondents than inadvertently removing good-quality ones.

When possible, we favor checks that can be done programmatically, without human intervention, as that keeps fielding and quota management more efficient. To the degree possible, all quality check questions should have a base of “all respondents” and not be asked of subgroups.
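As a rough illustration of this “three strikes” rule, the sketch below assumes each respondent record carries a set of boolean flags set by the individual checks described in the rest of this post. The flag names are illustrative, not from an actual system; severe speeders are removed outright, and everyone else is removed only after accumulating three or more flags.

```python
# Checks that each count as one "strike" when failed. Names are hypothetical.
CHECKS = ["speeder", "straight_liner", "inconsistent_age",
          "inconsistent_attitude", "low_incidence_fail", "poor_open_end"]

def should_remove(respondent: dict) -> bool:
    if respondent.get("severe_speeder"):   # automatic removal, no strike counting
        return True
    strikes = sum(1 for check in CHECKS if respondent.get(check))
    return strikes >= 3                    # "three strikes and you're out"

# Example usage
print(should_remove({"speeder": True, "straight_liner": True}))   # False (2 strikes)
print(should_remove({"speeder": True, "straight_liner": True,
                     "inconsistent_age": True}))                  # True (3 strikes)
```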

Speeder Checks

We aim to set up two criteria: “severe speeders” are those who complete the survey in less than one-third of the median time; these respondents are automatically tossed. “Speeders” are those who take between one-third and one-half of the median time; these respondents are flagged.

We also consider setting timers within the survey – for example, on a particularly long grid question or a question that requires substantial reading on the part of the respondent. Note that when establishing speeder checks, it is important to use the median completion time as the benchmark rather than the mean. In online surveys, some respondents will start a survey, get distracted for a few hours, and then come back to it, which badly skews the average survey length. Using the median gets around that.
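Here is a minimal sketch of how the speeder rule above might be implemented, assuming we have each respondent’s completion time in seconds. The respondent IDs and durations in the example are made up; the point is simply that the benchmark is the median, not the mean.

```python
import statistics

def classify_speeders(durations_sec: dict) -> dict:
    """Label each respondent as 'severe_speeder', 'speeder', or 'ok'."""
    median_time = statistics.median(durations_sec.values())
    flags = {}
    for rid, seconds in durations_sec.items():
        if seconds < median_time / 3:
            flags[rid] = "severe_speeder"   # removed automatically
        elif seconds < median_time / 2:
            flags[rid] = "speeder"          # flagged (one strike)
        else:
            flags[rid] = "ok"
    return flags

# Example with hypothetical respondent IDs and durations in seconds
print(classify_speeders({"r1": 180, "r2": 700, "r3": 650, "r4": 300, "r5": 640}))
```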

Straight Line Checks

Hopefully, we have designed our study well and do not have long grid-type questions. More often than not, however, these questions find their way into questionnaires. For grids with more than about six items, we place a straight-lining check – if a respondent chooses the same response for every item in the grid, they are flagged.
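A straight-lining check of this kind is simple to implement. The sketch below assumes a respondent’s grid answers are available as a list of scale values; the six-item threshold mirrors the rule above.

```python
def is_straight_liner(grid_answers: list) -> bool:
    """Flag respondents who give the identical answer to every item in a longer grid."""
    if len(grid_answers) <= 6:          # only apply the check to longer grids
        return False
    return len(set(grid_answers)) == 1  # one distinct value = straight-lining

# Example: a 7-item agreement grid answered entirely with "4"
print(is_straight_liner([4, 4, 4, 4, 4, 4, 4]))   # True
print(is_straight_liner([4, 3, 4, 5, 4, 2, 4]))   # False
```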

Inconsistent Answers

We consider adding two questions that check for inconsistent answers. First, we re-ask a demographic question from the screener near the end of the survey; we typically use age. If the respondent doesn’t choose the same age in both questions, they are flagged.

In addition, we try to find an attitudinal question that we can re-ask in the exact opposite way. For instance, if earlier we asked “I like to go to the mall” on a 5-point agreement scale, we will also ask the opposite, “I do not like to go to the mall,” on the same scale. Those who give the same answer to both are flagged. We try to place these two questions a few minutes apart in the questionnaire.
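The sketch below shows one way these two consistency flags might be scored, assuming the re-asked age and the pair of reversed attitude items are available as named fields. The field names and the 5-point scale are illustrative assumptions, not a prescribed layout.

```python
def inconsistency_flags(resp: dict) -> dict:
    """Return the two consistency flags for a single respondent."""
    return {
        # Re-asked demographic: age at the screener vs. age near the end
        "inconsistent_age": resp["age_screener"] != resp["age_recheck"],
        # Reversed attitude item: giving the identical rating to
        # "I like to go to the mall" and "I do not like to go to the mall"
        # on the same 5-point scale earns a flag.
        "inconsistent_attitude": resp["likes_mall"] == resp["dislikes_mall"],
    }

# Example: ages match, but the reversed attitude items were answered identically
print(inconsistency_flags({"age_screener": 34, "age_recheck": 34,
                           "likes_mall": 4, "dislikes_mall": 4}))
```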

Low Incidence Items

This is a low-attentiveness flag. It is meant to catch people who say they do really unlikely things, and also people who say they don’t do very likely things, because they are not paying attention to the questions we pose. We design this question specific to each survey and tend to ask what respondents have done over the past weekend. We like to include two high-incidence items (such as “watched TV” or “rode in a car”), 4 to 5 low-incidence items (such as “flew in an airplane,” “read an entire book,” or “played poker”), and one incredibly low-incidence item (such as “visited Argentina”). Respondents are flagged if they didn’t do at least one of the high-incidence items, if they said they did more than two of the low-incidence items, or if they said they did the incredibly low-incidence item.
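A scoring sketch for this flag appears below, assuming the answers arrive as the set of activities a respondent selected. The item lists echo the examples above but are otherwise hypothetical; each study would substitute its own.

```python
# Illustrative item lists; a real study would use its own (with 4-5 low-incidence items).
HIGH_INCIDENCE = {"watched TV", "rode in a car"}
LOW_INCIDENCE = {"flew in an airplane", "read an entire book", "played poker"}
VERY_LOW_INCIDENCE = {"visited Argentina"}

def low_incidence_flag(selected: set) -> bool:
    """Flag respondents whose claimed weekend activities look inattentive."""
    did_no_high = len(selected & HIGH_INCIDENCE) == 0      # none of the common activities
    did_many_low = len(selected & LOW_INCIDENCE) > 2       # too many uncommon activities
    did_very_low = len(selected & VERY_LOW_INCIDENCE) > 0  # the near-impossible item
    return did_no_high or did_many_low or did_very_low

# Example: claiming all three uncommon activities earns a flag
print(low_incidence_flag({"watched TV", "flew in an airplane",
                          "read an entire book", "played poker"}))   # True
```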

Open-Ended Check

We try to include this one in all studies, but sometimes have to skip it if the study is fielding on a tight timeframe, because it involves a manual process. Here, we are checking whether a respondent provides a meaningful response to an open-ended question. Ideally, we can use a question that is already in the study, but when we cannot, we tend to use one like this: “Now I’d like to hear your opinions about some other things. Tell me about a social issue or cause that you really care about. What is this cause and why do you care about it?” We manually review the answers, and respondents are flagged if they do not provide an articulate response.

Admission of Inattentiveness

We don’t use this one as a standard check, but we are starting to experiment with it. As the last question of the survey, we directly ask respondents how attentive they were while taking it, and we flag those who say they did not pay attention at all. This question will suffer from a large social desirability bias, but we sometimes use it anyway.

Traps and Misdirects

I don’t really like the idea of “trick questions” – there is research indicating that they tend to trap too many “good” respondents, and some researchers feel they lower respondent trust and thus answer quality. That seems to be enough to recommend against this style of question. The most common types I have seen ask the respondent to select the “third choice” below no matter what, to “pick the color from the list below,” or to “select none of the above.” We counsel against using these.

Comprehension

This was recommended by a research colleague and was also mentioned by an expert in a questionnaire design seminar we attended. We don’t use it as a quality check per se, but we like to include it during a soft-launch period. The question looks like this: “Thanks again for taking this survey. Were there any questions on this survey you had difficulty with or trouble answering? If so, it will be helpful to us if you let us know what those problems were in the space below.”

Preamble

I have mixed feelings on this type of quality check, but we use it when we can phrase it positively. A typical wording is like this: “By clicking yes, you agree to continue to our survey and give your best effort to answer 10-15 minutes of questions. If you speed through the survey or otherwise don’t give a good effort, you will not receive credit for taking the survey.”

This is usually one of the first questions in the survey. The argument I see against it is that it sets the respondent up to think we will be watching them, which could affect their answers. Then again, it might affect them in a good way if it makes them pay closer attention.

I prefer a question that takes a gentler, more positive approach – telling respondents that we are conducting the study for an important organization, that their opinions really matter, promising them confidentiality, and then asking them to agree to give their best effort, rather than lightly threatening them as this one does.

Guarding against bad respondents has become an important part of questionnaire design, and it is unfortunate that there is no industry standard on how to go about it. We try to build in some quality checks that will at least spot the most egregious cases of poor quality. This is an evolving issue, and it is likely that what we are doing today will change over time, as the nature of market research changes.

