Posts Tagged 'Methodology'

Will Blockchain Disrupt Marketing and Research?

The field of survey research was largely established in the 1930s and matured in the post-WWII era as the US economy boomed and companies became more customer-driven. Many early polls were conducted in the most old-fashioned way possible: by going door-to-door with a clipboard and pestering people with questions. Adoption of the telephone in the US (which happened slowly – telephone penetration was less than 50% before WWII and didn't hit 90% until 1972) made it possible to gather projectable samples of consumers efficiently and reliably, and the research industry grew quickly.

Then the Internet changed everything. I was fortunate to be at a firm that was leading the charge for online market research at a time when Internet penetration in the US was only about 20%. By the time I left the firm, Internet penetration had reached over 85% and online market research had pretty much supplanted telephone research. What had taken the telephone 40+ years to do to door-to-door polling had happened in less than 10 years, completely transforming an industry.

So, what is next? What nascent technology might transform the market research industry?

Keep your eyes on Blockchain.

Blockchain is best known as the technology that underpins cryptocurrencies like Bitcoin. The actual technology of Blockchain is complex and difficult for most people to understand. (I’d be lying if I said I understood the technology.) But, Blockchain is conceptually simple. It is a way to exchange value and trust between strangers in an un-hackable way and without the need for middlemen. It allows value to be exchanged and stored securely and privately. Whereas the Internet moves information, Blockchain moves value.

Those interested in the potential of Blockchain technology should read The Blockchain Revolution by Don and Alex Tapscott. Or, if you'd like a shortcut, you can watch Don's excellent TED Talk.

If Blockchain gains steam and hits a critical mass of acceptance, it has the potential to transform everything including our financial system, our contracts, our elections, our corporate structures, and our governments. It has applicability for any aspect of life that involves an exchange of value that requires an element of trust – which is pretty much everything we do to interact as human beings.

A simple example of how it works is provided by its first widespread application – as a cryptocurrency like Bitcoin. Currently, if I buy a book online, my transaction passes through many intermediaries that are often invisible to me. My money might move from my credit card company to my bank, to another bank, to Amazon, to Amazon's bank, to the bookseller, to the bookseller's bank, and I suppose eventually a few crumbs make their way to the author (via their bank, of course). There are markups all along the way, taken by intermediaries who don't add value beyond facilitating the transaction. And, at every step there is an opportunity for my data to be compromised and hacked. The digital shadow left behind allows all sorts of third parties to know what I am reading and even where I am reading it.

This is an imperfect system at best and one that a cryptocurrency resolves. Via Bitcoin, I can buy a book directly from an author, securely, with no opportunity for others to see what I am doing or to skim value along the way. In fact, the author and I remain strangers.

Blockchain is mostly known currently as Bitcoin’s technology, but its potential dwarfs its current use. Blockchain will clearly transform the financial services industry, and for the better. Buyers and sellers can transact anonymously and securely without a need for intermediaries. Stocks can be exchanged directly by buyers and sellers, and this could lead to a world devoid of investment banks, brokers, and hedge fund managers, or at least one where their roles become strictly advisory.

A useful way to think of the potential of Blockchain is to think of trust. Trust in an economic sense lowers transaction costs and decreases risk. Why do I need lawyers and a contract if I can fully trust my contractor to do what he or she promises? Why do I need Uber when I can contract directly with the driver? As transaction costs decline, we'll see a much more "democratized" economy. Smaller entities will no longer be at a disadvantage. The costs of coordinating just about anything will decline, resulting in a smaller and very different role for management. If Blockchain really ignites, I'd expect to see flatter corporate structures, very little middle management, and a greater need for truly inspirational leaders.

Any industry reliant on payment systems or risk is ripe for disruption via Blockchain technology. Retail, insurance, government contracting, etc. will all be affected. But, Blockchain isn’t just about payments.  Payments are just a tangible manifestation of what Blockchain really facilitates – which is an exchange of value. Value isn’t always monetary.

Which brings me (finally!) to our field: marketing and marketing research. Marketers and market researchers are “middlemen” – and any middleman has the potential to be affected by Blockchain technology. We stand between the corporation and its customers.

Marketers should realize Blockchain may have important implications for their brands. A brand is essentially a manifestation of trust. In the current digital world, many marketers struggle to retain control of their brands, which is upsetting to those of us trained in traditional brand management. Blockchain will result in a greater focus on the brand by customers. They will seek to trust the brand more because Blockchain can enable that trust.

As a researcher I see Blockchain as making it essential that I add value to the process as opposed to being a conduit for the exchange of value. Put more simply, Blockchain will make it even more important that researchers add insight rather than merely gather data. In custom research about half of the cost of a market research project is wrapped up in data collection and that is the part that seems most ripe for disruption. There won’t be as many financial rewards for researchers for the operational aspects of projects. But, there will always be a need to help marketers make sense of the world.

When we design a survey, we are seeking information from a respondent. This information might be classification information (who you are), behavioral information (what you do), or attitudinal information (what you think and feel). In all cases, as a researcher, I am trusting that respondents will provide this information willingly and accurately. As a respondent, you trust me to keep your identity confidential and to provide you with an honorarium or incentive for your time. We are exchanging value – you are providing me with information and your time, and I am providing you with compensation and the comfort of knowing you are helping clients better understand the needs of their customers. Blockchain has the potential to make this process more efficient and beneficial to the respondent. And that is important – our industry is suffering from a severe respondent trust problem right now. We don't have to look much past our plummeting response rates to see that we have lost respondents' trust. Blockchain may be one way we can earn it back.

Blockchain can also authenticate the information we analyze. It can sort out fake data, such as fake postings on websites. At its core, Blockchain makes data transfers simple, secure, and efficient. It can help us store personal information more securely, which in turn will assure our respondents that they can trust us.

Blockchain can provide individuals with greater control over their "digital beings." Currently, as we go about our lives (smartphone in pocket) we leave digital traces everywhere. This flotsam of our digital lives has value; it is gathered and used by companies and governments, and it has spawned new research techniques to mine value from this passive data stream. The burgeoning field of Big Data analysis is dependent on the trail we leave. Privacy concerns aside, it doesn't seem right that consumers are creating value they do not get to benefit from. Blockchain technology has the potential to allow individuals to retain control of, and benefit from, the trail of value they leave behind as they navigate a digital world.

Of course, as a research supplier, I can also see Blockchain as a threat, since suppliers are middlemen between clients and their customers. Blockchain has the potential to replace, or at least enhance, any third-party relationship. But, I envision Blockchain as being beneficial to smaller suppliers like Crux Research. Blockchain will require suppliers to be value-added consultants rather than just reliable data collectors. That is precisely what smaller suppliers do better than the larger firms, so I would predict that more small firms will be started as a result.

Blockchain is clearly in its infancy for marketers. Its potential may prove to be greater than its reality. But, just as we saw with the rise of the Internet, a technology such as this can grow up quickly, and can transform our industry.

Let’s Make Research and Polling Great Again!


The day after the US Presidential election, we quickly wrote and posted about the market research industry’s failure to accurately predict the election.  Since this has been our widest-read post (by a factor of about 10!) we thought a follow-up was in order.

Some of what we predicted has come to pass. Pollsters are being defensive, claiming their polls really weren’t that far off, and are not reaching very deep to try to understand the core of why their predictions were poor. The industry has had a couple of confabs, where the major players have denied a problem exists.

We are at a watershed moment for our industry. Response rates continue to plummet, clients are losing confidence in the data we provide, and we are swimming in so much data our insights are often not able to find space to breathe. And the public has lost confidence in what we do.

Sometimes it is everyday conversations that can illuminate a problem. Recently, I was staying at an AirBnB in Florida. The host (Dan) was an ardent Trump supporter and at one point he asked me what I did for a living. When I told him I was a market researcher, the conversation quickly turned to why the polls failed to accurately predict the winner of the election. By talking with Dan, I quickly realized the implications of 2016 election polling for our industry. He felt that we can now safely ignore all polls – on issues, approval ratings, voter preferences, etc.

I found myself getting defensive. After all, the polls weren’t off that much.  In fact, they were actually off by more in 2012 than in 2016 – the problem being that this time the polling errors resulted in an incorrect prediction. Surely we can still trust polls to give a good sense of what our citizenry thinks about the issues of the day, right?

Not according to Dan. He didn’t feel our political leaders should pay attention to the polls at all because they can’t be trusted.

I’ve even seen a new term for this bandied about:  poll denialism. It is a refusal to believe any poll results because of their past failures. Just the fact that this has been named should be scary enough for researchers.

This is unnerving not just to the market research industry, but to our democracy in general.  It is rarely stated overtly, but poll results are a key way political leaders keep in touch with the needs of the public, and they shape public policy a lot more than many think. Ignoring them is ignoring public opinion.

Market research remains closely associated with political polling. While I don’t think clients have become as mistrustful about their market research as the public has become about polling, clients likely have their doubts. Much of what we do as market researchers is much more complicated than election polling. If we can’t successfully predict who will be President, why would a client believe our market forecasts?

We are at a defining moment for our industry – a time when clients and suppliers will realize this is an industry that has gone adrift and needs a righting of the course. So what can we do to make research great again?  We have a few ideas.

  1. First and foremost, if you are a client, make greater demands for data quality. Nothing will stimulate the research industry more to fix itself than market forces – if clients stop paying for low quality data and information, suppliers will react.
  2. Slow down! There is a famous saying about all projects.  They have three elements that clients want:  a) fast, b) good, and c) cheap, and on any project you can choose two of these.  In my nearly three decades in this industry I have seen this dynamic change considerably. These days, “fast” is almost always trumping the other two factors.  “Good” has been pushed aside.  “Cheap” has always been important, but to be honest budget considerations don’t seem to be the main issue (MR spending continues to grow slowly). Clients are insisting that studies are conducted at a breakneck pace and data quality is suffering badly.
  3. Insist that suppliers defend their methodologies. I’ve worked for corporate clients, but also many academic researchers. I have found that a key difference between them becomes apparent during results presentations. Corporate clients are impatient and want us to go as quickly as possible over the methodology section and get right into the results.  Academics are the opposite. They dwell on the methodology and I have noticed if you can get an academic comfortable with your methods it is rare that they will doubt your findings. Corporate researchers need to understand the importance of a sound methodology and care more about it.
  4. Be honest about the limitations of your methodology. We often like to say that everything you were ever taught about statistics assumed a random sample and we haven’t seen a study in at least 20 years that can credibly claim to have one.  That doesn’t mean a study without a random sample isn’t valuable, it just means that we have to think through the biases and errors it could contain and how that can be relevant to the results we present. I think every research report should have a page after the methodology summary that lists off the study’s limitations and potential implications to the conclusions we draw.
  5. Stop treating respondents so poorly. I believe this is a direct consequence of the movement from telephone to online data collection. Back in the heyday of telephone research, if you fielded a survey that was too long or was challenging for respondents to answer, it wasn’t long until you heard from your interviewers just how bad your questionnaire was. In an online world, this feedback never gets back to the questionnaire author – and we subsequently beat up our respondents pretty badly.  I have been involved in at least 2,000 studies and about 1 million respondents.  If each study averages 15 minutes that implies that people have spent about 28 and a half years filling out my surveys.  It is easy to lose respect for that – but let’s not forget the tremendous amount of time people spend on our surveys. In the end, this is a large threat to the research industry, as if people won’t respond, we have nothing to sell.
  6. Stop using technology for technology’s sake. Technology has greatly changed our business. But, it doesn’t supplant the basics of what we do or allow us to ignore the laws of statistics.  We still need to reach a representative sample of people, ask them intelligent questions, and interpret what it means for our clients.  Tech has made this much easier and much harder at the same time.  We often seem to do things because we can and not because we should.

The ultimate way to combat “poll denialism” in a “post-truth” world is to do better work, make better predictions, and deliver insightful interpretations. That is what we all strive to do, and it is more important than ever.

 

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning.” Their predictions were never to be construed to be certain.  No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong.  That is pretty close to the definition of a bias – something systematic that affected all predictions must have happened.

The polls will claim that the outcome was in the margin of error. But, to claim a “margin of error” defense is statistically suspect, as margins of error only apply to random or probability samples and none of these polls can claim to have a random sample. FiveThirtyEight also had Clinton with 302 electoral votes, way beyond any reasonable error rate.

Regardless, the end result will end up barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters at all. All it means is that rather than their estimate being accurate 95% of the time, it was predicted to be accurate a bit less often: between 80% and 90% of the time for most of these polls, by my calculations.

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems unlikely that the failure had anything to do with the questions themselves. In reality, most pollsters use very similar questions to gather voter preferences and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself seems to be an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred. There are many reasons this could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing, and simply not being able to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. This is when respondents tell you what they think you want to hear or what they believe is socially acceptable. A good example of this is if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. In this case, let's set non-response bias aside. Suppose pollsters had reached every single voter who actually showed up to vote. If "Trump" was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers had a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson's support will end up being about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before and they often had widely divergent results. But, if you look closely you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were some accurate polls out there that predicted a Trump victory, but the pollsters didn't release them.

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But, this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult – and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

Asking about gender and sexual orientation on surveys

When composing questionnaires, there are times when the simplest of questions have to adjust to fit the times. Questions we draft become catalysts for larger discussions. That has been the case with what was once the most basic of all questions – asking a respondent for their gender.

This is probably the most commonly asked question in the history of survey research. And it seems basic – we typically just ask:

  • Are you… male or female?

Or, if we are working with younger respondents, we ask:

  • Are you … a boy or a girl?

The question is almost never refused and I’ve never seen any research to suggest this is anything other than a highly reliable measure.

Simple, right?

But, we are in the midst of an important shift in the social norms towards alternative gender classifications. Traditionally, meaning up until a couple of years ago, if we wanted to classify homosexual respondents we wouldn’t come right out and ask the question, for fear that it would be refused or be found to be an offensive question for many respondents. Instead, we would tend to ask respondents to check off a list of causes that they support. If they chose “gay rights”, we would then go ahead and ask if they were gay or straight. Perhaps this was too politically correct, but it was an effective way to classify respondents in a way that wasn’t likely to offend.

We no longer ask it that way. We still ask if the respondent is male or female, but we follow up to ask if they are heterosexual, lesbian, gay, bisexual, transgender, etc.

We recently completed a study among 4-year college students where we posed this question.  Results were as follows:

  • Heterosexual = 81%
  • Bisexual = 8%
  • Lesbian = 3%
  • Gay = 2%
  • Transgender = 1%
  • Other = 2%
  • Refused to answer = 3%

First, it should be noted that 3% refused to answer is less than the 4% that refused to answer the race/ethnicity question on the same survey.  Conclusion:  asking today’s college students about sexual orientation is less sensitive than asking them about their race/ethnicity.

Second, it is more important than ever to ask this question. These data show that about 1 in 5 college students identify as NOT being heterosexual. Researchers need to start viewing these students as a segment, just as we do age or race. This is the reality of the Millennial market:  they are more likely to self-identify as not being heterosexual and more likely to be accepting of alternative lifestyles. Failure to understand this group results in a failure to truly understand the generation.

We have had three different clients ask us if we should start asking this question younger – to high school or middle school students. For now, we are advising against it unless the study has clear objectives that point to a need. Our reasoning for this is not that we feel the kids will find the question to be offensive, but that their parents and educators (whom we are often reliant on to be able to survey minors) might. We think that will change over time as well.

So, perhaps nothing is as simple as it seems.

How can you predict an election by interviewing only 400 people?

This might be the most commonly asked question researchers get at cocktail parties (to the extent that researchers go to cocktail parties). It is also a commonly unasked question among researchers themselves: how can we predict an election by only talking to 400 people? 

The short answer is we can’t. We can never predict anything with 100% certainty from a research study or poll. The only way we could predict the election with 100% certainty would be to interview every person who will end up voting. Even then, since people might change their mind between the poll and the election we couldn’t say our prediction was 100% likely to come true.

To provide an example, if I want to flip a coin 100 times, my best estimate before I do it would be that I will get “heads” 50 times. But, it isn’t 100% certain the coin will land on heads 50 times.

The reason it is hard to comprehend how we predict elections by talking to so few people is our brains aren’t trained to understand probability. If we interview 400 people and find that 53% will vote for Hillary Clinton and 47% for Donald Trump, as long as the poll was conducted well, this result becomes our best prediction for what the vote will be. It is similar to predicting we will get 50 heads out of 100 coin tosses.  53% is our best prediction given the information we have. But, it isn’t an infallible prediction.

Pollsters provide a sampling error, which is +/-5% in this case. 400 is a bit of a magic number. It results in a maximum possible sampling error of +/-5% which has long been an acceptable standard. (Actually, we need 384 interviews for that, but researchers will use 400 instead because it sounds better.)
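
For readers who want to see the arithmetic behind the "magic number," here is a minimal sketch (Python, standard library only) of where the 384 figure and the +/-5% margin of error come from. It assumes a simple random sample and the conventional worst case of a 50/50 split; real polls rarely meet that assumption, which is part of the point made elsewhere in this post.

```python
# Sample size needed for a +/-5% margin of error at 95% confidence,
# assuming a simple random sample and the worst-case proportion p = 0.5.
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # ~1.96, the z-score for 95% confidence
moe = 0.05                        # desired margin of error (+/- 5 points)
p = 0.5                           # worst case: maximizes the variance p*(1-p)

n_required = (z ** 2) * p * (1 - p) / moe ** 2
print(round(n_required, 1))       # 384.1 -- the "384 interviews" mentioned above

# The margin of error you actually get with n = 400 and an observed 53%:
n, p_hat = 400, 0.53
moe_400 = z * sqrt(p_hat * (1 - p_hat) / n)
print(round(moe_400, 3))          # ~0.049, i.e. roughly 48% to 58% for Clinton
```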

What that means is that if we repeated this poll over and over, we would expect to find Clinton to receive between 48% and 58% of the intended vote, 95% of the time. We’d expect Trump to receive between 42% and 52% of the intended vote, 95% of the time. On average though, if we kept doing poll after poll, our best guess would be if we averaged Clinton’s result it would be 53%.

In the coin flipping example, if we repeatedly flipped the coin 400 times, we should get between 45% and 55% heads 95% of the time. But, our average and most common result will be 50% heads.

Because the ranges of the election poll (48%-58% for Clinton and 42%-52% for Trump) overlap, you will often see reporters (and the candidate who is in second place) say that the poll is a "statistical dead heat." There is no such thing as a statistical dead heat in polling unless exactly the same number of respondents prefer each candidate, which may never have actually happened in the history of polling.

There is a much better way to report the findings of the poll. We can statistically determine the “odds” that the 53% for Clinton is actually higher than the 47% for Trump. If we repeated the poll many times, what is the probability that the percentage we found for Clinton would be higher than what we found for Trump? In other words, what is the probability that Clinton is going to win?

The answer in this case is 91%.  Based on our example poll, Clinton has a 91% chance of winning the election. Say that instead of 400 people we interviewed 1,000. The same finding would imply that Clinton has a 99% chance of winning. This is a much more powerful and interesting way to report polling results, and we are surprised we have never seen a news organization use polling data in this way.
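
To make the "probability of winning" idea concrete, here is a minimal simulation sketch in Python. It treats the observed 53% as the true two-candidate split and asks how often a repeated poll of the same size would show Clinton ahead. This is an assumption-laden illustration, not the method any particular forecaster uses; depending on the exact approach (normal approximation, Bayesian model, etc.), the result lands near, but not exactly on, the percentages quoted above.

```python
# Simulate repeating the poll many times, assuming the observed share is the
# true two-candidate split, and count how often the leading candidate stays ahead.
import random

def prob_leading(observed_share: float, n: int, sims: int = 20_000) -> float:
    wins = 0
    for _ in range(sims):
        # One simulated poll: n respondents, each preferring Clinton with
        # probability equal to the observed share.
        votes = sum(random.random() < observed_share for _ in range(n))
        if votes > n / 2:
            wins += 1
    return wins / sims

random.seed(1)
print(prob_leading(0.53, 400))    # roughly 0.88-0.90 with n = 400
print(prob_leading(0.53, 1000))   # rises to roughly 0.97 with n = 1,000
```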

Returning to our coin flipping example, if we flip a coin 400 times and get heads 53% of the time, there is a 91% chance that we have a coin that is unfair, and biased towards heads. If we did it 1,000 times and got heads 53% of the time, there would be a 99% chance that the coin is unfair. Of course, a poll is a snapshot in time. The closer it is to the election, the more likely it is that the numbers will not change.  And, polling predictions assume many things that are rarely true:  that we have a perfect random sample, that all subgroups respond at the same rate, that questions are clear, that people won’t change their mind on Election Day, etc.

So, I guess the correct answer to “how can we predict the election from surveying 400 people” is “we can’t, but we can make a pretty good guess.”

Analysis Paralysis and Big Data

Many of us in the market research field lament our poor timing when choosing when to be born. We went to college and graduate school and entered a field that focuses on data analysis at a time when this was a marginally marketable skill.  Data analysis is a field that has now exploded in its value to employers. Many of us feel we were at the forefront of an eventual “nerd takeover” of the business world.

We can explain why in two words:  “Big Data.” Never has there been more data aching to be analyzed. Consumers used to be tracked in just two ways:  when they bought something and when they took the time to interrupt their dinners to answer a telephone survey in the evening. Now, people are being tracked in ways unimaginable just a few years ago and perhaps in ways they don’t even realize.

The digital trail we leave as we navigate the Internet leaves a powerful and permanent data contrail. It used to be that marketers could learn where we went online.  Now, they can also learn who we are, what our friends do, and where we are when we do various things.

Yet, despite all of this data and attempts to harvest it, marketers often seem no more knowledgeable about their customers than they were a generation ago. This conundrum is often chalked up to a failing of researchers:  we have all this data, yet our researchers just haven’t yet discovered how to separate the signal from the noise.

While this may well be true, there also might be a "hype curve" phenomenon going on. The hype curve (popularized by Gartner as the "Hype Cycle") is often studied in MBA programs. In short, when we are faced with something disruptive, we tend to overstate its potential. Then, when the reality of the phenomenon inevitably fails to meet the hype, we adjust our expectations downward… but by too much. We start to be overly critical of the phenomenon's potential. In the final phase, the actual potential of the phenomenon establishes itself – somewhere between the initial hype and our revised, downward expectations of it.


The hype curve can be used in many contexts. I've seen it applied in politics. President Obama came to office amid sky-high (and unrealistic) expectations. As he inevitably failed to meet them all, people revised their assessment of his potential too far downward. Eventually, historians will judge his performance as falling somewhere between these two extremes.

I've seen the concept applied to music artists. An artist puts out an incredible first album. Fans start touting them as the next coming of the Beatles. Their second album can't possibly meet these expectations, and when it comes out the group starts to fade in popularity. By the time their third album is released, their popularity settles at an appropriate level.

The hype curve concept is most commonly applied to technology. A new gadget comes out. We hear about how it will save the world and change our lives. It fails to meet those expectations, and people start to think that it won't make much of a difference at all. Over time, the gadget finds its level – it is a useful addition to our lives. Its reality falls between the initial hype and the revised expectation.

The hype curve example is applicable to Big Data as well, and we are in the early stages of it. Our expectations of what can be done with the incredible amounts of data that are out there are overstated. There will be a point soon when our expectations will be revised downward, and people will start to underestimate what can be done. Eventually, like all other innovations, Big Data will find its level.

So, right now is the perfect time to graduate from college with strong data analysis skills – too late for many of us unfortunately!

 

10 Tips to Writing an Outstanding Questionnaire

I have written somewhere between a zillion and a gazillion survey questions in my career. I am approaching 3,000 projects managed or overseen and I have been the primary questionnaire author on at least 1,000 of them.  Doing the math, if an average questionnaire is 35 questions long, it means I have written or overseen 35,000+ survey questions. That is 25 questions a week for 26 years!

More importantly, I’ve had to analyze the results of these questions, which is where one really starts to understand if they worked or not.

I started in the (landline) telephone research days. Back then, it was common practice for questionnaire authors to step into the phone center to conduct interviews during the pre-test or on the first day of interviewing. While I disliked doing this, the experience served as the single best education on how to write a survey question I could have had. I quickly understood whether a question was working, whether it was understood by the respondent, etc. It was a trial by fire, and in addition to discovering that I don't have what it takes to be a telephone interviewer, I quickly learned what was and wasn't working with the questions I was writing.

Something in this learning process is lost in today’s online research world. We never really experience first-hand the struggles a respondent has with our questions and thus don’t get to apply this to the next study.  For this reason I am thankful I started in the halcyon days of telephone research. Today’s young researchers don’t have the opportunity to develop these skills in the same way.

There are many guides to writing survey questions out there that cover the basics. Here I thought I’d take a broader view and list some of the top things to keep in mind when writing survey questions.  These are things I wish I had discovered far earlier!

  1. Begin with the end in mind. This concept is straight out of the 7 Habits of Highly Effective People and is central to questionnaire design.  Good questionnaire writers are thinking ahead to how they will analyze the resulting data.  In fact, I have found that if this is done well, writing the research report becomes straightforward.  I have also discovered that when training junior research staff it is always better to help them develop their report writing skills first and then move to questionnaire development.  Once you are an apt report writer questionnaire writing flows naturally because it begins with the end in mind.  It is also a reason why most good analysts/writers run from situations where they have to write a report from a questionnaire someone else has written.
  2. Start with an objective list. We start with a clear objective list the client has signed off on. Every question should be tied to the objective list or it doesn't make it into the questionnaire. This is an excellent way to manage clients who might have multiple people providing input. It helps them prioritize. Most projects that end up not fully satisfying clients are ones where the objectives weren't clear or agreed upon at the outset.
  3. Keep it simple – ridiculously simple. One of the most fortuitous things that happened to me in my career is that for a few years I exclusively wrote questionnaires that were intended for young respondents.  When I went back to writing “adult” survey questions I didn’t change a thing as I realized that what works for a 3rd grader is short, clear, unambiguous questions with one possible outcome.  The same thing is true for adults.
  4. Begin with a questionnaire outline. Outlines are easier to work through with clients than questionnaires. The outlines keep the focus on the types of questions we are asking and keep us from dwelling on the precise wording or scales. Writing the outline is actually more difficult than writing the questionnaire.
  5. Use consistent scales. Try not to use more than 2-3 scale types on the same questionnaire as it is confusing to the respondents.
  6. Don't write long questions. There is evidence that respondents don't read them. You are better off putting more wording in the answer choices than in the question itself, because online, many respondents look only at the answer choices and don't even read the question you spent hours tweaking.
  7. Don't get cute. We have a software system that allows us to do all sorts of sexy things, like drag-and-drop, slider scales, etc. We rarely use them, as there is evidence that the bells and whistles are distracting and good old-fashioned pick lists and radio buttons provide more reliable measures.
  8. Consider mobile. On major research panels, the percentage of respondents answering on mobile devices is just 15% or so currently, but that is rapidly changing. Not only does your questionnaire have to work on the limited screen real estate of a mobile device, but it is also increasingly less likely to be answered by someone tethered to a desktop or laptop screen in a situation where you have their attention. Your questionnaires will soon be answered by people multitasking, walking the dog, hanging out with friends, etc. This context needs to be appreciated.
  9. Ask the question you are getting paid to ask. Too many times I see questionnaires that dance around the main issue of the study without ever directly asking the respondent the central question. While it is nice to back into some issues with good data analysis skills, there is no substitute to simply asking direct questions. We also see questionnaires that allow too many “not sure/no opinion” type options. You are getting paid to find out what the target audience’s opinion is, so if this seems like a frequent response you have probably not phrased the question well.
  10. Think like a respondent and not a client. This is perhaps the most important advice I can give. The respondent doesn't live and breathe the product or service you are researching like your client does. Survey writers must appreciate this context and ask questions that can be answered. There is a saying that if you "ask a question you will get an answer" – but that is no indication that the respondent understood your question or viewed it in the same context as your client.

Anecdotally, I have found that staff with the strongest data analytics skills and training can be some of the poorest questionnaire writers. I think that is because they can deploy their statistical skills on the back end to make up for their questionnaire writing deficiencies. But, across 3,000 projects I would say less than 100 of them truly required statistical skills beyond what you might learn in the second stats course you take in college. It really isn’t about statistical skills; it is more about translating study objectives into language a target audience can embrace.

Good questionnaire writing is not rocket science (but it is brain surgery). Above all, seek to simplify and not to complicate.

The most profitable industry?

http://www.inc.com/graham-winfrey/the-5-most-profitable-industries-in-the-us.html

According to this Inc. article, online survey software is the most profitable industry in the US. There are only a handful of top-shelf systems out there and they are pricey. In fact, after personnel and taxes, they are our #3 expense. I think the reason this industry is so profitable is that there really isn't a lot of competition and, once programmers have spent years learning a system, there is a significant barrier preventing firms from moving to a new one.

There are quite a few “quick and dirty” systems out there for DIY-ers. But, in terms of the major systems that most suppliers tend to use, a lack of competition has driven the pricing very high. I think that will change over time, as it has with online panels. Online panels used to be expensive, but became reasonably priced over time, as new firms emerged in the space.  That hasn’t happened yet for online survey software – at least not for the really good systems.