Archive for the 'Uncategorized' Category

I have more LinkedIn contacts named “Steve” than contacts who are Black

There have been increasing calls for inclusiveness and fairness across America and the world. The issues presented by the MeToo and Black Lives Matter movements affect all sectors of society and the business world. Market research is no exception. Recent events have spurred me to reflect on my experiences and to think about whether the market research field is diverse enough and ready to make meaningful changes. Does market research have structural, systemic barriers preventing women and minorities from succeeding?

My recollections are anecdotal – just one person’s experiences when working in market research for more than 30 years. What follows isn’t based on an industry study or necessarily representative of all researchers’ experiences.

Women in Market Research

When it comes to gender equity in the market research field, my gut reaction is that research is a good field for women and one that I would recommend. I reviewed Crux Research’s client base and client contacts. In 15 years, we have worked with about 150 individual research clients across 70 organizations. Of those 150 clients, 110 (73%) are female. This dovetails with my recollection of my time at a major research supplier, where most of my direct clients were women.

Crux’s client base is largely mid-career professionals – I’d say our typical client is a research manager or director in their 30s or 40s. I’d conclude that, in my experience, women are well represented at this level.

But, when I look through our list of 70 clients and catalog who the “top” research manager is at these organizations, I find that 42 (60%) of the 70 research VPs and directors are male. And, when I catalog who these research VPs report to, typically a CMO, I find that 60 (86%) of the 70 individuals are male. To recap, among our client base, 73% of the research managers are female, 40% of the research VPs are female, and 14% of the CMOs are female.

This meshes with my experience working at a large supplier. While I was there, women were well represented in our research director and VP roles, but there were almost no women in the C-suite or among those who report to them. There seems to be a firm glass ceiling in place at both market research suppliers and their clients.

Minorities in Market Research

My experience paints a bleaker picture when I think of ethnic minority representation in market research. Of our 150 individual research clients, just 25 (17%) have been non-white and just 3 (2%) have been Black. Moving up the corporate ladder, in only 5 (7%) of our 70 client organizations is the top researcher non-white, and in only 4 (6%) of the 70 companies is the CMO non-white; none of the CMOs are Black. Undoubtedly, we have a long way to go.

A lack of staff diversity in research suppliers and market research corporate staffs is a problem worth resolving for a very important reason: market researchers and pollsters are the folks providing the information to the rest of the world on diversity issues. Our field can’t possibly provide an appropriate perspective to decision makers if we aren’t more diverse. Our lack of diversity affects the conversation because we provide the data the conversation is based upon.  

Non-profits seem to be a notable exception when it comes to ethnic diversity. I have had large non-profit clients that have wonderfully diverse employee bases, to the point where it is not uncommon to attend meetings and Zoom calls where I am the only white male in the session. These non-profits make an effort to recruit and train diverse staffs and their work benefits greatly from the diversity of perspectives this brings. There is a palpable openness of ideas in these organizations. Research clients and suppliers would do well to learn from their example.  

I can’t think of explicit structural barriers that limit the progression of minorities through the market research ranks, but that just illustrates the problem: the barriers aren’t explicit; they are subtle and implicit. That is what makes them so intractable.

We have to make a commitment to develop more diverse employee bases. I worked directly for the CEO of a major supplier for a number of years. One thing I respected about him was that he was confident enough in himself to hire people who were smarter than him, didn’t think like him, or came from an entirely different background. It made him unique. In my experience, most hiring managers unintentionally hire “mini-me’s” – younger variants of themselves whom they naturally like in a job interview. If the hiring managers are mostly white males and they are predisposed to hire “mini-me’s,” over time this perpetuates a privilege and is an example of an unintentional, but nonetheless structural, bias that limits the progress of women and minorities.

If you don’t think managers tend to hire in their own image, consider a recent Economist article that states, “In 2018 there were more men called Steve than there were women among the chief executives of FTSE 100 companies.” I wouldn’t be surprised if there are more market researchers in the US named Steve than there are Black market researchers.

To further illustrate that we naturally seek people like ourselves, I reviewed my own LinkedIn contact list. This list is made up of former colleagues, clients, people I have met along the way, etc. It is a good representation of the professional circle I exist within. It turns out that my LinkedIn contact list is 60% female and 25% non-white. But just 3% of my LinkedIn contacts are Black. And, yes, I have more LinkedIn contacts named Steve than I have contacts who are Black.

This is a problem because as researchers we need to do our best to cast aside our biases and provide an objective analysis of the data we collect. We cannot do that well if we do not have a diverse array of people working on our projects.

Many managers will tell you that they would like to hire a minority for a position but they just don’t get quality candidates applying. This is not taking ownership of the issue. What are you doing to generate candidates in the first place?

It is all too easy to point the finger backwards at colleges and universities and say that we aren’t getting enough qualified candidates of color. And that might be true. MBA programs continue to enroll many more men than women and many more whites than non-whites. They should be taken to task for this. As employers we also need to be making more demands on them to recruit women and minorities to their programs in the first place.

I like that many research firms have come out with supportive statements and financial contributions to relevant causes recently. This is just a first step and needs to be the catalyst to more long-lasting cultural changes in organizations.

We need to share best practices, and our industry associations need to step up and lead this process. Let’s establish relationships with HBCUs and other institutions to train the next generation of Black researchers.

The need to be diverse is also important in the studies we conduct. We need to call more attention to similarities and differences in our analyses – and sample enough minorities in the first place so that we can do this. Most researchers do this already when we have a reason to believe before we launch the study that there might be important differences by race/ethnicity. However, we need to do this more as a matter of course, and become more attuned to highlighting the nuances in our data sets that are driven by race.

Our sample suppliers need to do a better job of recruiting minorities to our studies and of ensuring that the minorities we sample are representative of a wider population. As their clients, we need to make more demands about the quality of the minority samples we receive.

We need an advocacy group for minorities in market research. There is an excellent group, Women in Research (https://www.womeninresearch.org/), advocating for women. We need an analogous organization for minorities.

Since I am in research, I naturally think that measurement is key to the solution. I’ve long thought that organizations only change what they can measure. Does your organization’s management team have a formal reporting process that informs them of the diversity of their staff, of their new hires, of the candidates they bring in for interviews? If they do not, your organization is not poised to fix the problem. If your head of HR cannot readily tell you what proportion of your staff is made up of minorities, your firm is likely not paying enough attention.

Researchers will need to realize that their organizations will become better and more profitable when they recruit and develop a more diverse employee base. Even though it is the right thing to do, we should not view resolving these issues solely as altruism. It is in our own self-interest to work on this problem. It is truly the case that if we aren’t part of the solution, we are likely part of the problem. And again, because we are the ones who inform everyone else about public opinion on these issues, we need to lead the way.

My belief is that this issue will be resolved by Millennials once they reach an age when they are more senior in organizations. Millennials are a generation that is intolerant of unfairness of this sort and notices the subtle biases that add up. They are the most diverse generation in US history. The oldest Millennials are currently in their mid-30s. In 10-20 years’ time they will be in powerful positions in business, non-profits, education, and government.

Optimistically, I believe Millennials will make a big difference. Pessimistically, I wonder if real change will happen before they are the ones managing suppliers and clients, as thus far the older generations have not shown that they are up to the task.

The myth of the random sample

Sampling is at the heart of market research. We ask a few people questions and then assume everyone else would have answered the same way.

Sampling works in all types of contexts. Your doctor doesn’t need to test all of your blood to determine your cholesterol level – a few ounces will do. Chefs taste a spoonful of their creations and then assume the rest of the pot will taste the same. And, we can predict an election by interviewing a fairly small number of people.

The mathematical procedures that are applied to samples to project to a broader population all assume that we have a random sample. Or, as I tell research analysts: everything they taught you in statistics assumes you have a random sample. T-tests, hypothesis tests, regressions, etc. all have a random sample as a requirement.

Here is the problem: We almost never have a random sample in market research studies. I say “almost” because I suppose it is possible to do, but over 30 years and 3,500 projects I don’t think I have been involved in even one project that can honestly claim a random sample. A random sample is sort of a Holy Grail of market research.

A random sample might be possible if you have a captive audience. You can randomly sample some of the passengers on a flight, a few students in a classroom, or prisoners in a detention facility. As long as you are not trying to project beyond that flight, that classroom, or that jail, the math behind random sampling will apply.
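To make the randomness requirement concrete, here is a small, purely hypothetical Python simulation. The population, group sizes, and response skew are all invented for illustration: a population whose true approval rate is 46% is surveyed through a convenience mechanism that over-represents one group, and the textbook confidence-interval formula still produces a tidy interval that misses the truth entirely.

```python
import math
import random

random.seed(7)

# Hypothetical population of 10,000: Group A (60%) approves at 30%,
# Group B (40%) approves at 70%. True approval = 0.6*0.30 + 0.4*0.70 = 0.46
group_a = ([1] * 30 + [0] * 70) * 60   # 6,000 people
group_b = ([1] * 70 + [0] * 30) * 40   # 4,000 people

# Convenience sample: Group B members are far more likely to respond.
sample = random.sample(group_a, 500) + random.sample(group_b, 1000)
p_hat = sum(sample) / len(sample)

# Textbook 95% confidence interval -- the formula silently assumes
# the 1,500 respondents were drawn at random from the population.
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimate {p_hat:.2f} +/- {half_width:.2f}, truth 0.46")
```

The estimate lands near 0.57 with a narrow interval that excludes the true 0.46. The arithmetic is "correct"; the formula simply has no way of knowing the sample was not random, which is the point of this section.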

Here is the bigger problem: Most researchers don’t recognize this, disclose this, or think through how to deal with it. Even worse, many purport that their samples are indeed random, when they are not.

For a bit of research history: once the market research industry really got going, the telephone random digit dial (RDD) sample became standard. Telephone researchers could randomly call land line phones. When land line telephone penetration and response rates were both high, this provided excellent data. However, RDD still wasn’t providing a truly random, or probability, sample. Some households had more than one phone line (and few researchers corrected for this), many people lived in group situations (colleges, medical facilities) where they couldn’t be reached, some did not have a land line, and even at its peak, telephone response rates were only about 70%. Not bad. But also not random.

Once the Internet came of age, researchers were presented with new sampling opportunities and challenges. Telephone response rates plummeted (to 5-10%) making telephone research prohibitively expensive and of poor quality. Online, there was no national directory of email addresses or cell phone numbers and there were legal prohibitions against spamming, so researchers had to find new ways to contact people for surveys.

Initially, and this is still a dominant method today, research firms created opt-in panels of respondents. Potential research participants were asked to join a panel, filled out an extensive demographic survey, and were paid small incentives to take part in projects. These panels suffer from three response issues: 1) not everyone is online or online at the same frequency, 2) not everyone who is online wants to be in a panel, and 3) not everyone in the panel will take part in a study. The result is a convenience sample. Good researchers figured out sophisticated ways to handle the sampling challenges that result from panel-based samples, and they work well for most studies. But, in no way are they a random sample.

River sampling is a term often used to describe respondents who are “intercepted” on the Internet and asked to fill out a survey. Potential respondents are invited via online ads and offers placed on a range of websites. If interested, they are typically pre-screened and sent along to the online questionnaire.

Because so much is known about what people are doing online these days, sampling firms have some excellent science behind how they obtain respondents efficiently with river sampling. It can work well, but response rates are low and the nature of the online world is changing fast, so it is hard to get a consistent river sample over time. Nobody being honest would ever use the term “random sampling” when describing river samples.

Panel-based samples and river samples represent how the lion’s share of primary market research is being conducted today. They are fast and inexpensive and when conducted intelligently can approximate the findings of a random sample. They are far from perfect, but I like that the companies providing them don’t promote them as being random samples. They involve some biases and we deal with these biases as best we can methodologically. But, too often we forget that they violate a key assumption that the statistical tests we run require: that the sample is random. For most studies, they are truly “close enough,” but the problem is we usually fail to state the obvious – that we are using statistical tests that are technically not appropriate for the data sets we have gathered.

Which brings us to a newer, shiny object in the research sampling world: ABS samples. ABS (address-based samples) are purer from a methodological standpoint. While ABS samples have been around for quite some time, they are just now being used extensively in market research.

ABS samples are based on US Postal Service lists. Because USPS has a list of all US households, this list is an excellent sampling frame. (The Census Bureau also has an excellent list, but it is not available for researchers to use.) The USPS list is the starting point for ABS samples.

Research firms will take the USPS list and recruit respondents from it, either to be in a panel or to take part in an individual study. This recruitment can be done by mail, phone, or even online. They often append publicly-known information onto the list.

As you might expect, an ABS approach suffers from some of the same issues as other approaches. Cooperation rates are low and incentives (sometimes large) are necessary. Most surveys are conducted online, and not everyone in the USPS list is online or has the same level of online access. There are some groups (undocumented immigrants, homeless) that may not be in the USPS list at all. Some (RVers, college students, frequent travelers) are hard to reach. There is evidence that ABS approaches do not cover rural areas as well as urban areas. Some households use post office boxes and not residential addresses for their mail. Some use more than one address. So, although ABS lists cover about 97% of US households, the 3% that they do not cover are not randomly distributed.

The good news is, if done correctly, the biases that result from an ABS sample are more “correctable” than those from other types of samples because they are measurable.

A recent Pew study indicates that survey bias and the number of bogus respondents are both a bit smaller for ABS samples than for opt-in panel samples.

But ABS samples are not random samples either. I have seen articles that suggest that of all those approached to take part in a study based on an ABS sample, less than 10% end up in the survey data set.

The problem is not necessarily with ABS samples themselves, as most researchers would concur that they are the best option we have and come the closest to a random sample. The problem is that many firms providing ABS samples are selling them as “random samples,” and that is disingenuous at best. Just because the sampling frame used to recruit a survey panel can claim to be “random” does not imply that the respondents you end up with in a research database constitute a random sample.

Does this matter? In many ways, it likely does not. There are biases and errors in all market research surveys. These biases and errors vary not just by how the study was sampled, but also by the topic of a question, its tone, the length of the survey, etc. Many times, survey errors are not even consistent throughout an individual survey. Biases in surveys tend to be “known unknowns” – we know they are there, but we aren’t sure what they are.

There are many potential sources of error in survey research. I am always reminded of a quote from Humphrey Taylor, the past Chairman of the Harris Poll, who said: “On almost every occasion when we release a new survey, someone in the media will ask, ‘What is the margin of error for this survey?’ There is only one honest and accurate answer to this question — which I sometimes use to the great confusion of my audience — and that is, ‘The possible margin of error is infinite.’” A few years ago, I wrote a post on biases and errors in research, and I was able to quickly name 15 of them before I even had to do an Internet search to learn more about them.

The reality is, the improvement in bias that is achieved by an ABS sample over a panel-based sample is small and likely inconsequential when considered next to the other sources of error that can creep into a research project. Because of this, and the fact that ABS sampling is expensive, we tend to recommend ABS panels in only two cases: 1) if the study will result in academic publication, as academics are more accepting of data that come from an ABS approach, and 2) if we are working in a small geography, where panel-based samples are not feasible.

Again, ABS samples are likely the best samples we have at this moment. But firms that provide them often inappropriately portray them as yielding random samples. For most projects, the small improvements in bias they provide are not worth the considerably larger budget and longer study time frame, which is why ABS samples are currently used in only a small proportion of research studies. I consider ABS to be “state of the art,” with the emphasis on “art,” as sampling is often less of a science than people think.

Should you use DIY market research tools?

A market research innovation has occurred over the past decade that is talked about in hushed tones among research suppliers: the rise of DIY market research tools. Researchers and clients need to become more educated on what these DIY tools are and when it is appropriate to use them.

DIY tools come in a number of flavors. At their core, they allow anybody to log into a system, author a survey, select sample parameters, and hit “go.” Many also provide the ability to tabulate data and graph results. These tools reduce the complexity of fielding studies. For the most part, these tools are created by outside panel and research technology companies but some end clients have invested in their own tools.

Many research suppliers view DIY tools as an existential threat. After all, if clients can do all this themselves what do they need us for? Will our fielding and programming departments become obsolete? Will we have a large portion of what we do automated?

Maybe. But more likely our fielding and programming departments will become smaller and have to adapt to a changing technological world.

There is a clear analogy here to DIY household projects. The tools and materials needed for most home improvement projects are available at big box retailers. Some homeowners are well-equipped to take on projects themselves, others are not, and the key to a successful project is often understanding when it is important to call for professional help. The same is true for market research projects.

Where the analogy fails is when you take on a project you aren’t equipped to handle. If it is a home project, you will probably discover along the way that you got in over your head. In market research, however, you can complete an entire project that has serious errors in it and never notice. The project will result in suboptimal decision making, and nobody may ever know.

In days gone by, the decision of whether to use a research supplier or not was straightforward. If the project was meaningful or complex, clients used suppliers. For many projects, the choice used to be between using a supplier or not doing the project at all. The rise of DIY tools has changed that.

Here are some instances where DIY research makes sense:

  • If the project is relatively simple. By simple, we mean from both a questionnaire design and a sampling perspective.
  • If the risk of making a suboptimal decision based on the information is low. Perhaps the best aspect of DIY tools is they permit clients to research issues that otherwise may have gone unresearched because of time and budget considerations.
  • When getting it done quickly is important. For many projects, there is something to be said for getting it 90% right and getting it done today rather than taking months to get it perfect.
  • If you have someone with supplier-side experience on staff. Suppliers are likely to be a bit more attuned to the nuances of study design and may notice mistakes others might miss.
  • If you have thought through the potential limitations of the DIY approach and have communicated this to your internal client.
  • When you are using the DIY project to pre-test or pilot a study. This is an excellent use of DIY tools: to be sure your questioning and scales are going to work before committing significant resources to a project. A DIY project can make the subsequent project more efficient.

Here are cases when we would caution against using DIY tools:

  • If a consequential decision will be made based on the results. Having the backing of a third-party supplier is important in this case and the investment is likely worth it.
  • When research results need to motivate people internally. Internal decision makers will typically listen more to research results if the study was conducted by a third-party.
  • When a broader perspective is needed. As a client, you know your firm and industry better than most suppliers will. But there are many times when having a broader perspective on a project provides substantial value to it.
  • If the sampling is complicated. If your target audience is obscure and hard to define in a few words, suppliers can be very helpful in getting your sampling right. In a previous post we mention that it is the sampling aspects of projects that most clients don’t think through enough. We have found that the most serious mistakes made in market research deal with sampling, and often these mistakes are hard to notice.
  • If you are conducting a business-to-business study. DIY sampling resources aren’t yet of the same quality for b-to-b research as they are for consumer studies.

DIY studies clearly have their place. They will augment current studies in some cases and replace them in others. I don’t see them as a threat to the highly-customized types of studies Crux Research tends to conduct. Market research spending will continue to grow slowly, but less will be spent on data collection and more on higher value-added aspects of projects.

In the 30 years I have worked in research, the cost of data collection has dropped considerably – I’d say it is about one-third what it used to be. But, during this time the price of research projects has increased. The implication is that clients have come to value the consultative aspects of studies more and have become more reliant on their suppliers to do things that previously clients did for themselves.

That presents a bit of a conundrum: clients are outsourcing more to suppliers at a time when tools are being developed that allow them to do many projects without a supplier. For many clients, money and time would be saved by hiring someone on staff who knows how to use these tools and recognizes when a third-party supplier is necessary.

Did Apple just kill telephone market research?

A recent issue of The Economist contained an article that describes a potential threat to the accuracy of opinion polling. The latest iPhones have a software feature that doesn’t just block robocalls but sends all calls from unknown callers automatically to voice mail. This feature combats unwanted calls on mobile phones.

Matching sampling frames to populations of interest is increasingly difficult to accomplish in survey research, particularly telephone studies. I will always remember my first day on the job in 1989 when my supervisor was teaching me how to bid projects. The spreadsheet we used assumed our telephone polls would have a 60% cooperation rate. So, at that time about 6 in 10 phone calls we made resulted in a willing respondent. Currently, telephone studies rarely achieve a cooperation rate above 5%. That is 1 in 20 calls. If you are lucky.

The Do Not Call Registry took effect in 2003. At this time, most survey research was still being conducted by telephone (online research was growing but still represented only about 20% of the market research industry’s revenues). Researchers were initially relieved that market research and polls were exempt from the law but in the end that didn’t matter. People stopped cooperating with telephone studies because they thought they had opted out of research calls when they signed up for the Registry. Response rates plummeted.

The rise of mobile phones caused even more headaches for telephone researchers. There was initially no great way to generate random numbers of cell phones in the same way that could be done for land lines and publicly-available directories of cell phone numbers did not exist. For quite some time, telephone studies were underrepresenting mobile phone users and had no great solution for how to interview respondents who did not even have a land line. Eventually, the industry figured this out and methods for including mobile phones became standard.

This new development of automatically routing mobile calls to voice mail could well signify the end of telephone-based research. If consumers like this feature on iPhones it won’t be long until Android-based phones do the same thing. It will preclude pollsters from effectively reaching mobile-only households. Believe it or not, about 45% of US households still have a land line, but the 55% who do not skew young, urban, and liberal.

Pollsters will figure this out and will oversample mobile-only households and weight them up in samples. But that won’t really fix the problem. Samples will miss those who have the latest phones and will eventually miss everybody once all current phones are replaced. Oversampling and weighting can help balance under-represented groups, but they can’t fix a problem when a group is not represented at all. Weighting can actually magnify biases in samples.
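As a sketch of why weighting is not free, consider the Kish design effect, a standard way to express the precision cost of unequal weights. The 45%/55% land-line split below comes from the surrounding discussion; the 80/20 sample skew and the sample size of 1,000 are invented for illustration:

```python
# Hypothetical sketch: weight a skewed phone sample back to population
# proportions, then compute the Kish effective sample size.
population_share = {"landline": 0.45, "mobile_only": 0.55}  # from the text
sample_share = {"landline": 0.80, "mobile_only": 0.20}      # invented skew

# Post-stratification weight = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

n = 1000
w = []
for group, share in sample_share.items():
    w += [weights[group]] * int(share * n)

# Kish effective sample size: (sum of weights)^2 / (sum of squared weights)
n_eff = sum(w) ** 2 / sum(x * x for x in w)
print(f"{n} interviews behave like about {n_eff:.0f} after weighting")
```

Here 1,000 interviews carry the precision of only about 566, because the mobile-only respondents each get a weight of 2.75. And if the mobile-only sample share were zero, the weight would be undefined: no weighting scheme can represent a group that was never sampled at all, which is the point above.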

Implications of this? Here are a few:

  1. More polls and market research projects will be conducted online. This is a good thing, as there is evidence that in the 2016 election the online polls were more accurate than the telephone polls. It is hard to believe, but we are at a stage where telephone polls are almost always slower, more expensive, and less accurate than their online counterparts.
  2. Researchers will use more mixed samples, using both telephone and online. In our view this tends to be needlessly complicated and introduces mode effects into these samples. We tend to only recommend mixed-mode data collection in business-to-business projects, where we use the phone to screen to a qualified respondent and then send the questionnaire electronically.
  3. Costs of telephone polls will go up. They are already almost criminally expensive and this will get even worse. For those not in the know, the cost per interview for a telephone poll is often 20 to 30 times the cost of an online interview.
  4. Address Based Samples (ABS) will gain in popularity. As telephone response rates decline, systematic biases in telephone samples increase. ABS, when properly operationalized, is a good alternative (although ABS has its limitations as well). ABS still isn’t really probability sampling, but it is the closest thing we have.
  5. The increased cost of telephone polls will spur even more investment in online panels. The quality of online research will be better off because of it. If there is a silver lining for researchers, this is probably it.

Technology has always tended to move faster than the market research industry has been able to adapt to it, probably because researchers have an academic mindset (thorough, but slow). Research methodologists always seem to eventually come up with a solution, but not always quickly. For now, we’d recommend against trusting any opinion poll that is based on a telephone sample, unless the researchers behind it have specifically made a case for how they are going to address this new issue of software blocking their calls to mobile phones. The good news is push polls and robo polls will soon become almost impossible to conduct.

Among college students, Bernie Sanders is the overwhelming choice for the Democratic nomination

Crux Research poll of college students shows Sanders at 23%, Biden at 16%, and all other candidates under 10%

ROCHESTER, NY – October 10, 2019 – Polling results released today by Crux Research show that if it were up to college students, Bernie Sanders would win the Democratic nomination for the US Presidency. Sanders is the favored candidate for the nomination among 23% of college students, compared with 16% for Joe Biden. Elizabeth Warren is favored by 8% of college students, followed by Andrew Yang at 7%.

  • Bernie Sanders: 23%
  • Joe Biden: 16%
  • Elizabeth Warren: 8%
  • Andrew Yang: 7%
  • Kamala Harris: 6%
  • Beto O’Rourke: 5%
  • Pete Buttigieg: 4%
  • Tom Steyer: 3%
  • Cory Booker: 3%
  • Michael Bennet: 2%
  • Tulsi Gabbard: 2%
  • Amy Klobuchar: 2%
  • Julian Castro: 1%
  • None of these: 5%
  • Unsure: 10%
  • I won’t vote: 4%

The poll also presented five head-to-head match-ups. Each match-up suggests that the Democratic candidate currently has a strong edge over President Trump, with Sanders having the largest edge.

  • Sanders versus Trump: 61% Sanders; 17% Trump; 12% Someone Else; 7% Not Sure; 3% would not vote
  • Warren versus Trump: 53% Warren; 18% Trump; 15% Someone Else; 9% Not Sure; 5% would not vote
  • Biden versus Trump: 51% Biden; 18% Trump; 19% Someone Else; 8% Not Sure; 4% would not vote
  • Harris versus Trump: 48% Harris; 18% Trump; 20% Someone Else; 10% Not Sure; 4% would not vote
  • Buttigieg versus Trump: 44% Buttigieg; 18% Trump; 22% Someone Else; 11% Not Sure; 5% would not vote

The 2020 election could very well be determined by voter turnout among young people, which has traditionally been much lower than among older age groups.

###

Methodology
This poll was conducted online between October 1 and October 8, 2019. The sample size was 555 US college students (aged 18 to 29). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the US college student population.
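The weighting step described above can be sketched with a hypothetical example. The age cells and counts below are invented for illustration and are not the poll's actual cells; the mechanics, though, are standard post-stratification: each respondent's weight is their cell's population share divided by its sample share.

```python
# Illustrative post-stratification weighting. Cell definitions, counts, and
# population shares below are invented for the sketch, not the poll's data.
sample_counts = {"18-21": 300, "22-25": 155, "26-29": 100}        # sums to 555
population_share = {"18-21": 0.45, "22-25": 0.33, "26-29": 0.22}  # sums to 1.0

n = sum(sample_counts.values())
weights = {cell: population_share[cell] / (sample_counts[cell] / n)
           for cell in sample_counts}

# Sanity check: the weighted sample size still equals the raw sample size.
weighted_n = sum(weights[cell] * sample_counts[cell] for cell in sample_counts)
print(round(weighted_n))
```

Cells that are over-represented in the sample get weights below 1; under-represented cells get weights above 1, so weighted estimates match the population's mix.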

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on the probability samples that margin-of-error calculations require. If this study had used probability sampling, the margin of error would be +/-4%.
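For reference, the +/-4% figure is consistent with the standard worst-case formula for a simple random sample of this size, sketched below:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.
    p = 0.5 maximizes p*(1-p), giving the conventional worst case."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(555):.1%}")  # 4.2%, in line with the +/-4% quoted
```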

About Crux Research Inc.
Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.
Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit http://www.cruxresearch.com.

How to be an intelligent consumer of political polls

As the days get shorter and the air gets cooler, we are on the edge of a cool, colorful season. We are not talking about autumn — instead, “polling season” is upon us! As the US Presidential race heats up, one thing we can count on is being inundated with polls and pundits spinning polling results.

Most market researchers are interested in polls. Political polling pre-dates the modern market research industry and most market research techniques used today have antecedents from the polling world. And, as we have stated in a previous post, polls can be as important as the election itself.

The polls themselves influence voting behavior which should place polling organizations in an ethical quandary. Our view is that polls, when properly done, are an important facet of modern democracy. Polls can inform our leaders as to what the electorate cares about and keep them accountable. This season, polls are determining which candidates get on the debate stage and are driving which issues candidates are discussing most prominently.

The sheer number of polls that we are about to see will be overwhelming. Some will be well-conducted, some will be shams, and many will be in between. To help, we thought we’d write this post on how to be an intelligent consumer of polls and what to look out for when reading the polls or hearing about them in the media.

  • First, and this is harder than it sounds, you have to put your own biases aside. Maybe you are a staunch conservative or liberal or maybe you are in the middle. Whatever your leaning, your political views are likely going to get in the way of you becoming a good reader of the polls. It is hard to not have a confirmation bias when viewing polls, where you tend to accept a polling result that confirms what you believe or hope will happen and question a result that doesn’t fit with your map of the world. I have found the best way to do this is to first try to view the poll from the other side. Say you are a conservative. Start by thinking about how you would view the poll if you leaned left instead.
  • Next, always, and I mean ALWAYS, discover who paid for the poll. If it is an entity that has a vested interest in the results, such as a campaign, a PAC, an industry group, or a lobbyist, go no further. Don’t even look at the poll. In fact, if the sponsor of the poll isn’t clearly identified, move on and spend your time elsewhere. Good polls always disclose who paid for them.
  • Don’t just look at who released the poll; review which organization executed it. For the most part, polls executed by major polling organizations (Gallup, Harris, ORC, Pew, etc.) will be worth reviewing, as will polls done by colleges with polling centers (Marist, Quinnipiac, Siena, etc.). But there are some excellent polling firms out there you likely have never heard of. When in doubt, remember that FiveThirtyEight gives pollsters grades based on their past performance. Despite what you may hear, polls done by major media organizations are sound. They have polling editors who understand all the nuances and have standards for how the polls are conducted. These organizations tend to partner with major polling organizations that likewise have the necessary methodological muscle.
  • Never, and I mean NEVER, trust a poll that comes from a campaign itself. At their best, campaigns will cherry pick results from well executed polls to make their candidate look better. At their worst, they will implement a biased poll intentionally. Why? Because much of the media, even established mainstream media, will cover these polls. (As an aside, if you are a researcher don’t trust the campaigns either. From my experience, you have about a 1 in 3 chance of being paid by a campaign for conducting their poll.)
  • Ignore any talk about the margin of error. The margin of error on a poll has become a meaningless statistic that is almost always misinterpreted by the media. A margin of error really only makes sense when a random or probability sample is being used. Without going into detail, there isn’t a single polling methodology in use today that can credibly claim to be using a probability sample. Regardless, being within the margin of error does not mean a race is too close to call anyway. It really just means it is too close to call with 95% certainty.
  • When reading stories on polls in the media, read beyond the headline. Remember, headlines are not written by reporters or pollsters. They are written by editors who, in many ways, have traded journalistic integrity for “click hunting.” Their job is to get you to click on the story, not necessarily to accurately summarize the poll. Headlines are bound to be more sensational than the polling results merit.

All is not lost though. There are plenty of good polls out there worth looking at. Here is the routine I use when I have a few minutes and want to discover what the polls are saying.

  • First, I start at the Polling Report. This is an independent site that compiles credible polls. It has a long history. I remember reading it in the 90’s when it was a monthly mailed newsletter. I start here because it is nothing more than raw poll results with no spin whatsoever. Their Twitter feed shows the most recently submitted polls.
  • I sometimes will also look at Real Clear Politics. They also curate polls, but they also provide analysis. I tend to just stay on their poll page and ignore the analysis.
  • FiveThirtyEight doesn’t provide polling results in great detail, but usually draws longitudinal graphs on the probability of each candidate winning the nomination and the election. Their predictions have valid science behind them and the site is non-partisan. This is usually the first site I look at to discover how others are viewing the polls.
  • For fun, I take a peek at BetFair, which is a UK online betting site that allows wagers on elections. It takes a little training to understand what the current prices mean, but in essence this site tells you which candidates people are putting their actual money on. Prediction markets fascinate me; using this site to predict who might win is fun and geeky.
  • I will often check out Pew’s politics site. Pew tends to poll more on issues than “horse race” matchups on who is winning. Pew is perhaps the most highly respected source within the research field.
  • Finally, I go to the media. I tend to start with major media sites that seem to be somewhat neutral (the BBC, NPR, USA TODAY). After reviewing these sites, I then look at Fox News and MSNBC’s website because it is interesting to see how their biases cause them to say very different things about the same polls. I stay away from the cable channels (CNN, Fox, MSNBC) just because I can’t stand hearing boomers argue back and forth for hours on end.
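As a footnote to the BetFair item above: exchange prices are usually quoted as decimal odds, and converting them into the market's implied probabilities is one line of arithmetic. The odds below are invented for illustration:

```python
# Decimal odds of d mean a winning $1 stake returns $d, so the market's
# implied probability of that outcome is 1/d. Odds here are invented.
decimal_odds = {"Candidate A": 1.8, "Candidate B": 2.6, "Candidate C": 5.0}

implied = {c: 1 / d for c, d in decimal_odds.items()}
total = sum(implied.values())   # typically a bit over 1.0 (the market's margin)
normalized = {c: p / total for c, p in implied.items()}
```

The raw implied probabilities over-count slightly because of the bookmaker's margin (the "overround"), which is why the final step normalizes them to sum to 1.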

This is, admittedly, way harder than it used to be. We used to just be able to let Peter Jennings or Walter Cronkite tell us what the polls said. Now, there is so much out there that to truly get an objective handle on what is going on takes serious work. I truly think that if you can become an intelligent, unbiased consumer of polls it will make you a better market researcher. Reading polls objectively takes a skill that applies well to data analysis and insight generation, which is what market research is all about.

Why Lori is the Best Shark in the Tank

Shark Tank is one of my favorite TV shows. It showcases aspiring entrepreneurs as they make business presentations to an investor panel, who then choose whether to invest. It is fun to play along and try to predict how the Sharks will react to a business pitch. As a small business owner/entrepreneur, it is fun to imagine how I might do in a Shark Tank presentation. And, it was interesting to watch a college teammate of mine make a successful pitch on the show.

My inner geek came out when watching a recent episode. I got to wondering how much the need to entertain might cloud how the venture capital world is portrayed on the show. How many Shark Tank pitches actually result in successful companies? Is the success rate for Shark Tank businesses any higher than any other small company looking for growth capital? Are there any biases in the way the Shark Tanks choose to invest?

This curiosity led to a wasted work day.

Venture capital, especially at early stages, involves high risk bets. Firms may invest in 100 companies knowing full well that 80 or 90 of them will fail, but that a handful of wild successes will pay off handsomely. It isn’t for the faint of heart. I found an interview with Mark Cuban where he stated he hoped that 15% of his Shark Tank investments would eventually pay off. Even that seems high. Given that he has invested about $32 million so far, that is an admission that roughly $27 million of it is expected to be wasted. Gutsy.

I also was able to discover interesting things about the show that are largely hidden from the viewer:

  1. The Sharks themselves are paid to take part. I was able to find discussions that suggested they may make as much as $100K per episode. That is a million dollars or more per season, so perhaps they are playing with house money more than they let on.
  2. Getting on Shark Tank is statistically harder than getting into an Ivy League college. It is estimated that more than 50,000 people apply for each season with less than 1% being successful. That alone should provide some realism as to the probability of success of new businesses.
  3. In the early seasons, an entrepreneur had to give up 2% of revenue or 5% of his/her company to the production firm just to appear on the show. That requirement was dropped in later seasons because Mark Cuban refused to stay on the show unless it was removed.
  4. Many of the deals you see made on the show don’t end up being consummated. Forbes conducted survey research in 2016 that indicated that 43% of Shark Tank deals fell apart in the due diligence stage and 30% of the time the deal changed substantially from what is seen on TV. The deal you see on TV only came to fruition as you saw it about 1 in 4 times (27% of the time).

This makes it challenging to assess the deals and whether or not they paid off. Shark Tank companies are almost all privately-held so their revenue data is tough to come by and we can’t really know for sure what the deal was.

Although we can’t review business outcomes as we might like, we can look closely at the deals themselves. The data we used for this includes all deals and prospective deals from the first nine seasons of the show. So, it does not include the current season, which premiered in October 2018.

In the first nine seasons, there were 803 pitches resulting in 436 closed on-air deals (54% of pitches). Applying the Forbes data would imply that of these 436 deals, about 187 likely fell apart and about 131 likely changed substantially. The net? Our projection would be that 54% of pitches result in handshakes on-air, but post-show only about 31% of all original pitches close at all and only about 15% close at the terms you see on air.
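Redoing this arithmetic from the raw counts makes the projection transparent (a sketch; rounding may differ slightly from the figures quoted in the text):

```python
# Back-of-envelope projection combining the show's on-air close rate with
# the Forbes follow-up figures cited above.
pitches = 803
on_air_deals = 436     # handshakes seen on TV
fell_apart = 0.43      # Forbes: share of on-air deals that later collapsed
as_aired = 0.27        # Forbes: share that closed at the terms seen on TV

close_on_air = on_air_deals / pitches
close_at_all = on_air_deals * (1 - fell_apart) / pitches
close_as_aired = on_air_deals * as_aired / pitches
print(f"{close_on_air:.0%} {close_at_all:.0%} {close_as_aired:.0%}")
```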

Why would Shark Tank deals fail to close? There is a due diligence stage where investors get to have their accountants review the entrepreneur’s books. I found some articles that indicated that some entrepreneurs got cold feet and refused the deal after the show. Also, some of the deals have contingencies which fail to occur.

It is interesting to look at deals by the gender of the entrepreneur as it shows that Shark Tank entrepreneurs skew heavily male:

  • Men are much more likely than women to appear as entrepreneurs on Shark Tank. Of the 803 pitches, 482 (60%) were made by men, 198 (25%) by women, and 119 (15%) by mixed teams of men and women. So, 75% of the time at least one male is involved in the pitch, and 40% of the time at least one female is involved in the pitch.
  • However, women (59% closed) are more likely than men (51%) to successfully close a deal on air.

There are data that imply that men and women negotiate differently:

  • Men initially ask for higher company valuations ($4.5 million on average) than women ($3.1 million on average).
  • Men also ask for more capital ($342K on average) than women ($238K on average).
  • Men (47%) and women (49%) receive about the same proportion of their initial valuation ask. Men (94%) and women (88%) also receive about the same proportion of cash that they initially ask for.

So, men are far more likely to appear on the show and come with bigger deals on average than women. But they receive (proportionately) about the same discount on their deals as women as they negotiate with the Sharks. If there is a difference in their negotiation skills it is that men start bigger or come to the show with more mature companies.

We can also look at individual Sharks to get a sense of how good they are as negotiators:

  • Mark is the most aggressive Shark. He has the most deals (132, despite not being on the early seasons of the show) as well as the most invested (about $32 million).
  • The cheapest (or most frugal?) Sharks are Barbara and Daymond. Barbara has put forth the least amount of money (about $10 million) and her average deal valuation is $945K. Daymond has put out the second least amount of money (about $12 million) and has an average deal size of $957K. These two Sharks have likely not put much more money into their Shark investments than they have been paid to be on the show.
  • Mr. Wonderful seems to have a “go big or stay home” mentality. He has closed the fewest deals (64) of any Shark. But, his average deal valuation of $2.7 million is the highest of any Shark.
  • Lori and Kevin (31% of pitches) are the most likely to make an offer. Barbara and Daymond (22%) are least likely to make an offer.
  • So, Kevin makes the most offers and closes the fewest deals, making him the least desirable Shark from the standpoint of the entrepreneurs.

Barbara is the most likely to invest in a female entrepreneur. She is about as likely to invest in a female entrepreneur as in a male entrepreneur, despite the fact that so many more men than women appear on the show. Kevin and Robert are the least likely to invest in a female entrepreneur. Mark and Daymond demonstrate no bias, as they invest in about the same proportions as appearances on the show.

              ALL   Barbara   Lori   Mark   Daymond   Kevin   Robert
Male          60%     44%     53%    60%     57%      67%     71%
Female        25%     42%     33%    27%     25%      19%     17%
Mixed Team    15%     14%     14%    13%     18%      14%     12%

So, who has been the most successful Shark? It can be hard to tell because data are scarce, but my vote would go for Lori. USA Today put out a list of the top 20 best-selling products featured on Shark Tank. Six of the top 10 were from investments Lori made, including the top 3. Eight of the top 10 investments by revenue were made by the two female Sharks, Lori and Barbara.

Who are the worst Sharks in terms of revenue performance? My vote here would be a tie between Mark and Daymond. Mark has just 3 of the top 20 investments and Daymond has just 2. If we can assume that the goal of venture capital is to generate big wins, it is clear that Lori and Barbara are killing it and Mark and Daymond are not.

Shark Tank is a great catalyst for entrepreneurs, but because it is entertainment and not reality it can mischaracterize entrepreneurship in the real world. Sharks may invest for the entertainment value of the show and because investing boosts their personal brand as much as the product. And, it might just be the case that the amount of money they have invested is not much larger than the amount of money they have been paid to be on the show.

Almost all successful people will tell you that learning from their failures was at least as important as their successes, yet Shark Tank never revisits failed investments and it is likely that the bulk of the deals we see on TV do not end up paying off for the investor. The show does not disclose how few of its deals actually come to fruition once the cameras are no longer rolling. Just once I’d like to see an update segment show an investment that failed miserably.

Shark Tank also seems to imply that hard work and grit always triumph, when in reality knowing when to cut losses, along with a little bit of luck, matters a lot in business success. Grit matters for sure, but not when its focus is blind and irrational, and it can be sad to watch entrepreneurs who have sacrificed so much when it is clear their business is not going to make it.

At its best, Shark Tank stimulates people to think like an entrepreneur. At its worst, it presents too rosy a picture of small business life which influences people to invent new products and launch companies that are likely to fail, at great consequence to the entrepreneur. It certainly provides great entertainment.

Will Blockchain Disrupt Marketing and Research?

The field of survey research was largely established in the 1930s and matured in the post WWII era as the US economy boomed and companies became more customer-driven. Many early polls were conducted in the most old-fashioned way possible: by going door-to-door with a clipboard and pestering people with questions. Adoption of the telephone in the US (which happened slowly – telephone penetration was less than 50% before WWII and didn’t hit 90% until 1972) made possible an efficient way to reliably gather projectable samples of consumers and the research industry grew quickly.

Then the Internet changed everything. I was fortunate to be at a firm that was leading the charge for online market research at a time when Internet penetration in the US was only about 20%. By the time I left the firm, Internet penetration had reached over 85% and online market research had pretty much supplanted telephone research. What had taken the telephone 40+ years to do to door-to-door polling had happened in less than 10 years, completely transforming an industry.

So, what is next? What nascent technology might transform the market research industry?

Keep your eyes on Blockchain.

Blockchain is best known as the technology that underpins cryptocurrencies like Bitcoin. The actual technology of Blockchain is complex and difficult for most people to understand. (I’d be lying if I said I understood the technology.) But, Blockchain is conceptually simple. It is a way to exchange value and trust between strangers in an un-hackable way and without the need for middlemen. It allows value to be exchanged and stored securely and privately. Whereas the Internet moves information, Blockchain moves value.
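The "chain" part of the concept can be illustrated with a deliberately minimal sketch. This toy code is invented for illustration — it omits consensus, proof-of-work, and signatures, and is not how Bitcoin is actually implemented — but it shows the hash links that make recorded history tamper-evident:

```python
import hashlib
import json

def block_hash(data, prev_hash):
    # The hash covers the payload AND the previous block's hash,
    # which is what chains the blocks together.
    return hashlib.sha256(json.dumps([data, prev_hash]).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain):
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk["data"], blk["prev_hash"]):
            return False  # block contents no longer match its hash
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(chain_is_valid(chain))          # True

chain[1]["data"] = "Alice pays Bob 500"   # tamper with history
print(chain_is_valid(chain))          # False
```

Because each block's hash depends on its predecessor's hash, quietly editing one record invalidates every block after it; that, plus replicating the chain across many parties, is what removes the need for a trusted middleman.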

Those interested in the potential for Blockchain technology should read The Blockchain Revolution by Don and Alex Tapscott. Or, if you’d like a shortcut, you can watch Don’s excellent Ted Talk.

If Blockchain gains steam and hits a critical mass of acceptance, it has the potential to transform everything including our financial system, our contracts, our elections, our corporate structures, and our governments. It has applicability for any aspect of life that involves an exchange of value that requires an element of trust – which is pretty much everything we do to interact as human beings.

A simple example of how it works is provided by its first widespread application – as a cryptocurrency like Bitcoin. Currently, if I buy a book online, my transaction passes through many intermediaries that are often invisible to me. My money might move from my credit card company to my bank, to another bank, to Amazon, their bank, to the bookseller, to their bank, and I suppose eventually a few crumbs make their way to the author (via their bank of course). There are markups all along the way that are taken by all the intermediaries who don’t add value beyond facilitating the transaction. And, at every step there is an opportunity for my data to be compromised and hacked. The digital shadow left behind allows all sorts of third parties to know what I am reading and even where I am reading it.

This is an imperfect system at best and one that a cryptocurrency resolves. Via Bitcoin, I can buy a book directly from an author, securely, with no opportunity for others to see what I am doing or to skim value along the way. In fact, the author and I remain strangers.

Blockchain is mostly known currently as Bitcoin’s technology, but its potential dwarfs its current use. Blockchain will clearly transform the financial services industry, and for the better. Buyers and sellers can transact anonymously and securely without a need for intermediaries. Stocks can be exchanged directly by buyers and sellers, and this could lead to a world devoid of investment banks, brokers, and hedge fund managers, or at least one where their roles become strictly advisory.

A useful way to think of the potential of Blockchain is to think of trust. Trust in an economic sense lowers transactions costs and decreases risk. Why do I need lawyers and a contract if I can fully trust my contractor to do what he/she promises? Why do I need Uber when I can contract directly with the driver? As transactions costs decline, we’ll see a much more “democratized” economy. Smaller entities will no longer be at a disadvantage. The costs of coordinating just about anything will decline, resulting in a smaller and very different role for management. If Blockchain really ignites, I’d expect to see flatter corporate structures, very little middle management, and a greater need for truly inspirational leaders.

Any industry reliant on payment systems or risk is ripe for disruption via Blockchain technology. Retail, insurance, government contracting, etc. will all be affected. But, Blockchain isn’t just about payments.  Payments are just a tangible manifestation of what Blockchain really facilitates – which is an exchange of value. Value isn’t always monetary.

Which brings me (finally!) to our field: marketing and marketing research. Marketers and market researchers are “middlemen” – and any middleman has the potential to be affected by Blockchain technology. We stand between the corporation and its customers.

Marketers should realize Blockchain may have important implications for the brand. A brand is essentially a manifestation of trust. In the current digital world, many marketers struggle to retain control of their brands. This is upsetting to those of us trained in traditional brand management. Blockchain will result in a greater focus on the brand by customers. They will seek to trust the brand more because Blockchain can enable that trust.

As a researcher I see Blockchain as making it essential that I add value to the process as opposed to being a conduit for the exchange of value. Put more simply, Blockchain will make it even more important that researchers add insight rather than merely gather data. In custom research about half of the cost of a market research project is wrapped up in data collection and that is the part that seems most ripe for disruption. There won’t be as many financial rewards for researchers for the operational aspects of projects. But, there will always be a need to help marketers make sense of the world.

When we design a survey, we are seeking information from a respondent. This information might be classification information (information about who you are), behavioral information (information about what you do), or attitudinal information (information about what you think and feel). In all cases, as a researcher, I am trusting that respondents will provide this information to me willingly and accurately.  As a respondent, you trust me to keep your identity confidential and to provide you an honorarium or incentive for your time. We are exchanging value – you are providing me with information and your time, and I am providing you with compensation and a comfort that you are helping clients better understand the needs of their customers. Blockchain has the potential to make this process more efficient and beneficial to the respondent. And that is important – our industry is suffering from a severe respondent trust problem right now. We don’t have to look much past our plummeting response rates to see that we have lost the respondent trust. Blockchain may be one way we can earn it back.

Blockchain can also authenticate the information we analyze. It can sort out fake data, such as fake postings on websites. To its core, Blockchain makes data transfers simple, secure, and efficient. It can help us more securely store personal information, which in turn will assure our respondents that they can trust us.

Blockchain can provide individuals with greater control over their “digital beings.” Currently, as we go about our lives (smartphone in pocket) we leave digital traces everywhere. This flotsam of our digital lives has value and is gathered and used by companies and governments, and has spawned new research techniques to mine value from this passive data stream. The burgeoning field of Big Data analysis is dependent on this trail we leave. Privacy concerns aside, it doesn’t seem right that consumers are creating a value they do not get to benefit from. Blockchain technology has the potential to allow individuals to retain control and to benefit from the trail of value they are leaving behind as they negotiate a digital world.

Of course as a research supplier I can also see Blockchain as a threat, as suppliers are middlemen between clients and their customers. Blockchain has the potential to replace, or at least enhance, any third-party relationship.  But, I envision Blockchain as being beneficial to smaller suppliers like Crux Research. Blockchain will require suppliers to be more value-added consultants, and less about reliable data collection. That is precisely what smaller suppliers do better than the larger firms, so I would predict that more smaller firms will be started as a result.

Blockchain is clearly in its infancy for marketers. Its potential may prove to be greater than its reality. But, just as we saw with the rise of the Internet, a technology such as this can grow up quickly, and can transform our industry.


Visit the Crux Research Website www.cruxresearch.com
