Posts Tagged 'Crux Research'

A forgotten man: rural respondents

I have attended hundreds of focus groups. These are moderated small group discussions, typically with anywhere from 4 to 12 participants. The discussions take place in a tricked-out conference room, decked with recording equipment and a one-way mirror. Researchers and clients sit behind this one-way mirror in a cushy, multi-tiered lounge. The lounge has comfortable chairs, a refrigerator with beer and wine, and an insane number of M&M’s. Experienced researchers have learned to sit as far away from the M&M’s as possible.

Focus groups are used for many purposes. Clients use them to test out new product ideas or new advertising under development. We recommend them to clients if their objectives do not seem quite ready for survey research. We also like to do focus groups after a survey research project is complete, to put some personality on our data and to have an opportunity to pursue unanswered questions.

I would estimate that at least half of all focus groups being conducted are held in just three cities: New York, Chicago, and Los Angeles. Most of the rest are held in other major cities or in travel destinations like Las Vegas or Orlando. These city choices can have little to do with the project objectives – focus groups tend to be held near the client’s offices or in cities that are easy to fly to. Clients often choose cities simply because they want to go there.

The result is that early-stage product and advertising ideas are almost always evaluated by urban participants or by suburban participants who live near a large city. Smaller city, small town, and rural consumers aren’t an afterthought in focus group research. They aren’t thought about at all.

I’ve always been conscious of this, perhaps because I grew up in a rural town and have never lived in a major metropolitan area. The people I grew up with and knew best were not being asked to provide their opinions.

This isn’t just an issue in qualitative research; it happens with surveys and polls as well. Rural and small-town America is almost always underrepresented in market research projects.

This wasn’t a large issue for quantitative market research early on, as random digit dialing (RDD) telephone samples could effectively include rural respondents. Many years ago, I started adding questions to questionnaires that would allow me to look at the differences between urban, suburban, and rural respondents. I would often find differences, but pointing them out met with little excitement from clients, who often seemed uninterested in targeting their products or marketing to a small-town audience.

Online samples do not include rural respondents as effectively as RDD telephone samples did. The rural respondents who are in online sampling databases are not necessarily representative of rural people. Weighting them upward does not magically make them representative.

In 30 years, I have not had a single client ask me to correct a sample to ensure that rural respondents are properly represented. The result is that most products and services are designed for suburbia and don’t take the specific needs of small-town folks into account.

Biases only matter if they affect what we are measuring. If rural respondents and suburban respondents feel the same way about something, this issue doesn’t matter. However, it can matter. It can matter for product research, it certainly matters to the educational market research we have conducted, and it is likely a hidden cause of some of the problems that have occurred with election polling.

How to succeed in business without planning

Crux Research is now entering its 16th year. This places us in the 1% of start-ups – as it is well-documented that most new businesses fail in the first few years. Few are lucky enough to make it to 16.

One of the reasons experts most often cite for new company failures is a lack of business planning. However, new business success is not a consequence of an ability to plan. Success is a function of a couple of basic things: having a product people want to buy and then delivering it really well. It doesn’t get much more basic than that, but too many times we get lost in our ambitions and ideas and aren’t forthright with ourselves about what we are good at and not so good at.

Business plans are neither necessary nor sufficient for business success. Small businesses that focus on them remind me a bit of when a company I was with put the entire staff through training on personal organization and time management. The people who embraced the training were mostly hyper-organized to begin with, and the people who might have benefited most from the training were resistant. Small businesses that rely heavily on business planning are probably using it more as a reflection of their natural organizational tendencies than as a true guide to running a business. There are, of course, exceptions.

Entrepreneurs who use business plans seem to fall into two categories. First, they may feel constrained by them. If they have investors, they feel pressure to stay within the confines of the plan even when the market starts telling them to pivot to a new strategy. They start to think adherence to the plan is, in itself, a definition of success. In short, they tend to be “process” people as opposed to “results” people. Entrepreneurs who are “results” people are the successful ones. If you are a “process” person, I’d suggest that you may be a better fit for a larger company.

Second, and this is more common, many entrepreneurs spend a lot of time in business planning before launching their company and then promptly file the plan away and never look at it again.

That is how it went for me in the corporate world. We would craft annual plans each spring and then never look at them again until late in the year, when it was time to create the plan for the following year. It was sort of like giving blueprints to a contractor, having them build whatever they want anyway, and then giving them a hard time for not following the plan… even if they had built a masterpiece.

We’d also create five-year plans every year. That never made a lot of sense to me, as it was an acknowledgement that we couldn’t really plan more than a year out.

I am not against business planning in concept. I agree with Winston Churchill, who said “plans are of little importance, but planning is essential.” The true value of most strategic plans is found in the thinking that goes into establishing them, not in the resulting plan.

I’d suggest that anyone who starts a company list the top reasons the company might fail and then think through how to prevent those things from happening. If the reasons for your potential failure are mostly under your control, you’ll be okay. We did this. We identified that if we were going to fail, it would most likely result from an inability to generate sales, as this wasn’t where our interest or core competence lay. So, we commissioned a company to develop leads for us, had a plan for reaching out to past clients, and planned to hire a salesperson (something that never happened). The point is, while we didn’t really have a business plan, we did think through how to head off the problem most likely to sink us.

Before launching Crux Research, we did lay out some key opportunities and threats and thought hard about how to address what we saw as the barriers to our success. Late each fall we think about goals for the coming year, how we can improve what we are doing, and where we can best spend our time in marketing and sales. That sort of thinking is important, and it is good to formalize it. But we’ve never had a formal business plan. Maybe we have succeeded in spite of that, but I tend to think we have succeeded because of it.

Critics would probably say that the issue here is that business plans just need to be better and more effective. I don’t think that is the case. The very concept of business planning for a small business can be misguided. It is important to be disciplined as an entrepreneur/small business owner. Focus is paramount to success. But I don’t think it is all that important to be “planned.” You miss too many opportunities that way.

Again, our company has never had a business plan, and I attribute much of our success to that. The company has gone in directions we would never have been able to predict in advance. We try to be opportunistic and open-minded and to stay within the confines of what we know how to do. We prefer having a high level of self-awareness of what we are good at and not so good at, and to create and respond to opportunities with that in mind. As Mike Tyson once said, “everyone has a plan until they get punched in the mouth.” I wonder how many small businesses have plans that quickly got punched.

This has led us to many crossroads where we have to choose whether to expand our company or to refuse work. We’ve almost always come down on the side of staying small and being selective in the work we take on.

In the future we are likely to become even more selective in who we take on as clients. We’ve always said that we want a few core things in a client. Projects have to be financially viable, but more importantly they have to be interesting and present a situation that will benefit from our experience and insight. We have developed a client base of really great researchers who double as really great people. It sums up to a business that is a lot of fun to be in.

I would never discourage an entrepreneur from creating a business plan, but I’d advise them to think hard about why they are doing so. You’ll have no choice if you are seeking funding, as no investor is going to give money to a business that doesn’t have a plan. If you are in a low-margin business, you have to have a tight understanding of your cash flow, and planning that out in advance is important. I’d suggest you keep any plans broad rather than detailed, conduct an honest assessment of what you are not so good at as well as good at, and prepare to respond to opportunities you can’t possibly foresee. Don’t judge your success by your adherence to the plan, and if your funders appear too insistent on keeping to the plan, find new funders.

Wow! Market research presentations have changed.

I recently led an end-of-project presentation over Zoom. During it, I couldn’t help but think how market research presentations have changed over the years. There was no single event or time period that changed the nature of research presentations, but if you teleported a researcher from the 1990’s to a modern presentation they would feel a bit uncomfortable.

I have been in hundreds of market research presentations — some led by me, some led by others, and I’ve racked up quite a few air miles getting to them. In many ways, today’s presentations are more effective than those in the past. In some other ways, quality has been lost. Below is a summary of some key differences.

Today’s presentations are:

  • Far more likely to be conducted remotely over video or audio. COVID-19 disruptions acted as an accelerant to this trend, which was happening well before 2020. This has made presentations easier to schedule because not everyone has to be available in the office. It allows clients and suppliers to take part from their homes, hotels, and even their vehicles. It seems clear that a lasting effect of the pandemic will be that research presentations will be conducted via Zoom by default. There are plusses and minuses to this. For the first time in 30 years, I find myself working with clients whom I have never met in person.
  • Much more likely to be bringing in data and perspectives from outside the immediate project. Research projects and presentations tended to be standalone events in the past, concentrating solely on the area of inquiry the study addressed. Today’s presentations are often integrated into a wider reaching strategic discussion that goes beyond the questions the research addresses.
  • More interactive. In yesteryear, the presentation typically consisted of the supplier running through the project results and implications for 45 minutes, followed by a period of Q&A. It was rare to be interrupted before the Q&A portion of the meeting. Today’s presentations are often not presentations at all. As suppliers, we feel more like emcees leading a discussion than experts presenting findings.
  • More inclusive of upper management. We used to present almost exclusively to researchers and mid-level marketers. Now, we tend to see a lot more marketing VPs and CMOs, strategy officers, and even the CEO on occasion. It used to be rare for our reports to make it to the CEO’s desk. Now, I’d say most of the time they do. This is indicative of the increasing role data and research play in business today.
  • Far more likely to integrate the client’s perspective. In the past, internal research staff rarely tried to change or influence our reports and presentations, preferring to keep some distance and then separately add their perspective. Clients have become much more active in reviewing and revising supplier reports and presentations.

Presentations from the 1990’s were:

  • A more thorough presentation of the findings of the study. They told a richer, more nuanced story. They focused a lot more on storytelling and building a case for the recommendations. Today’s presentations often feel like a race to get to the conclusions before you get interrupted.
  • More confrontational. Being challenged on the study method, data quality, and interpretations was more commonplace a few decades ago. I felt a much greater need to prepare and rehearse than I do today because I am not as in control of the flow of the meetings as I was previously. In the past I felt like I had to know the data in great detail, and it was difficult for me to present a project if I wasn’t the lead analyst on it. Today, that is much less of a concern.
  • More strategic. This refers more to the content of the studies than the presentation itself. Since far fewer studies were being done, the ones that were tended to be informing high consequence decisions. While plenty of strategic studies are still conducted, there are so many studies being done today that many of them are informing smaller, low-consequence, tactical decisions.
  • More relaxed. Timelines were more relaxed and as a result research projects were planned well in advance and the projects fed into a wider strategic process. That still happens, but a lot of today’s projects are completed quickly (often too quickly) because information is needed to make a decision that wasn’t even on the radar a few weeks prior.
  • More of a “show.” In the past we rehearsed more, were concerned about the graphical design of the slides, and worried about the layout of the room. Today, there is rarely time for that.
  • More social. Traveling in for a presentation meant spending time beforehand with clients, touring offices, and almost always going to lunch or dinner afterward. Even before the COVID/Zoom era, more recent presentations tended to be “in and out” affairs – where suppliers greet the clients, give a presentation, and leave. While there are many plusses to this, some (I’d actually say most) of the best researchers I know are introverts who were never comfortable with this forced socialization. Those types of people are going to thrive in the new presentation environment.

Client-side research was much more planned out in the past. Annually, researchers would go through a planning phase where all the projects for the year would be budgeted and placed on a timeline. The research department would then execute against that plan. More recently, our clients seem not to know what projects they will be working on in a few weeks’ time – because many of today’s projects take just days from conception to execution.

I have also noticed that while clients are commissioning more projects they seem to be using fewer suppliers than in the past. I think this is because studies are being done so quickly they don’t have time to manage more than a few supplier relationships. Bids aren’t as competitive and are more likely to be sole-sourced.

Clients are thus developing closer professional relationships with their suppliers. Suppliers are closer partners with clients than ever before, but with this comes a caution. It becomes easy to lose third-party objectivity when we get too close to the people and issues at hand and when clients have too heavy a hand in the report process. In this sense, I prefer the old days, when we provided a perspective and our clients would then add a POV. Now, we often meld the two into one presentation, and at times we lose the value that comes from a back-and-forth disagreement over what the findings mean to a business.

If I teleported my 1990’s self to today, I would be amazed at how quickly projects go from conception to final presentation. Literally, this happens in about one-third the time it used to. There are many downsides to going too fast, and clients rarely focus on or care about them: they seem to prefer getting something 90% right and done tomorrow to waiting for a perfect project.

There is even a new category of market research called “agile research” that seeks to provide real-time data. I am sure it is a category that will grow, but those employing it need to keep in mind that providing data faster than managers can act on it can actually be a disservice to the client. It is an irony of our field that more data and continuous data can actually slow down decision making.  

Today’s presentations are less stressful, more inclusive, and more strategic. The downside is there are probably too many of them – clients are conducting too many projects on minor issues, they don’t always learn thoroughly from one study before moving onto the next, and researchers are sometimes being rewarded more for getting things done than for providing insight into the business.

Oops, the polls did it again

Many people had trouble sleeping last night wondering if their candidate was going to be President. I couldn’t sleep because as the night wore on it was becoming clear that this wasn’t going to be a good night for the polls.

Four years ago, on the day after the election, I wrote about the “epic fail” of the 2016 polls. I couldn’t sleep last night because I realized I was going to have to write another post about another polling failure. While the final vote totals may not be in for some time, it is clear that the 2020 polls are going to be off on the national vote even more than the 2016 polls were.

Yesterday, on election day, I received an email from a fellow market researcher and business owner. We are involved in a project together, and he was lamenting how poor the data quality has been in his studies recently and wondering if we were having the same problems.

In 2014 we wrote a blog post cautioning our clients that we were detecting poor-quality interviews that needed to be discarded about 10% of the time. We were having to throw away about 1 in 10 of the interviews we collected.

Six years later that percentage has moved to between 33% and 45%, and we tend to be conservative in the interviews we toss. It is fair to say that for most market research studies today, between a third and a half of the interviews being collected are, for lack of a better term, junk.

It has gotten so bad that new firms have sprung up that sit between sample providers and online questionnaires in order to protect against junk interviews. They protect against bots, survey farms, duplicate interviews, etc. Just the fact that these firms and terms like “survey farms” exist should give researchers pause regarding data quality.
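What do these quality checks look like in practice? This post doesn’t spell out Crux’s own screening rules, but a minimal sketch of three common checks (speeders, straight-liners, and gibberish open-ends) might look like the following. The thresholds and field names are illustrative assumptions, not industry standards:

```python
def looks_like_junk(resp, median_seconds):
    """Flag an interview as probable junk using three common checks.
    Thresholds are illustrative; real screening rules vary by study."""
    # 1. Speeder: finished in under a third of the median completion time
    if resp["seconds"] < median_seconds / 3:
        return True
    # 2. Straight-liner: identical answer to every item in a rating grid
    grid = resp["grid_answers"]
    if len(grid) > 3 and len(set(grid)) == 1:
        return True
    # 3. Gibberish open-end: too short, or no vowels at all
    text = resp["open_end"].strip()
    if len(text) < 3 or not any(c in "aeiouAEIOU" for c in text):
        return True
    return False

# Illustrative respondent record: a speeder who also straight-lined the grid
resp = {"seconds": 95, "grid_answers": [4, 4, 4, 4, 4], "open_end": "good"}
print(looks_like_junk(resp, median_seconds=600))  # True
```

Real go-between services layer on device fingerprinting, duplicate detection, and bot traps, but the underlying logic is the same: look for response patterns an attentive human would be unlikely to produce.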

When I started in market research in the late 80s/early 90’s we had a spreadsheet program that was used to help us cost out projects. One parameter in this spreadsheet was “refusal rate” – the percent of respondents who would outright refuse to take part in a study. While the refusal rate varied by study, the beginning assumption in this program was 40%, meaning that on average we expected 60% of the time respondents would cooperate. 

According to Pew and AAPOR in 2018 the cooperation rate for telephone surveys was 6% and falling rapidly.

Cooperation rates in online surveys are much harder to calculate in a standardized way, but most estimates I have seen and my own experience suggest that typical cooperation rates are about 5%. That means for a 1,000-respondent study, at least 20,000 emails are sent, which is about four times the population of the town I live in.
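Putting this post’s numbers together shows how much outreach now sits behind a single clean study. A back-of-the-envelope calculation, using the rough rates quoted above (ballpark figures, not precise parameters):

```python
import math

target_clean = 1_000   # usable interviews needed
coop_rate = 0.05       # roughly 5% of invitations yield a completed interview
junk_rate = 0.40       # roughly a third to a half of completes get discarded

completes_needed = target_clean / (1 - junk_rate)       # completes to collect
invitations = math.ceil(completes_needed / coop_rate)   # emails to send
print(invitations)  # 33334: over 33,000 invitations for 1,000 clean interviews
```

In other words, once junk removal is factored in, the 20,000 emails above understate the real outreach required.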

This is all background to try to explain why the 2020 polls appear to be headed for a historic failure. Election polls are the public face of the market research industry. Relative to most research projects, they are very simple. The problems pollsters have faced in the last few cycles are emblematic of something those working in research know but rarely like to discuss: the quality of data collected for research and polls has been declining, and that should be alarming to researchers.

I could go on about the causes of this. We’ve tortured our respondents for a long time. Despite claims to the contrary, we haven’t been able to generate anything close to a probability sample in years. Our methodologists have gotten cocky and feel like they can weight any sampling anomalies away. Clients are forcing us to conduct projects on timelines that make it impossible to guard against poor quality data. We focus on sampling error and ignore more consequential errors. The panels we use have become inbred and gather the same respondents across sources. Suppliers are happy to cash the check and move on to the next project.

This is the research conundrum of our times: in a world where we collect more data on people’s behavior and attitudes than ever before, the quality of the insights we glean from these data is in decline.

Post-2016, the polling industry brain trust rationalized and claimed that the polls actually did a good job, convened some conferences to discuss the polls, and made modest methodological changes. Almost all of these changes related to sampling and weighting. But, as it appears that the 2020 polling miss is going to be way beyond what can be explained by sampling (last night I remarked to my wife that “I bet the p-value of this being due to sampling is about 1 in 1,000”), I feel that pollsters have addressed the wrong problem.

None of the changes pollsters made addressed the long-term problems researchers face with data quality. When you have a response rate of 5% and up to half of those are interviews you need to throw away, errors that can arise are orders of magnitude greater than the errors that are generated by sampling and weighting mistakes.

I don’t want to sound like I have the answers. Just a few days ago I posted that I thought that, on balance, there were more reasons to conclude that the polls would do a good job this time than to conclude that they would fail. When I look through my list of potential reasons the polls might fail, nothing leaps out at me as an obvious cause, so perhaps the problem is multi-faceted.

What I do know is the market research industry has not done enough to address data quality issues. And every four years the polls seem to bring that into full view.

Will the polls be right this time?

The 2016 election was damaging to the market research industry. The popular perception has been that in 2016 the pollsters missed the mark and miscalled the winner. In reality, the 2016 polls were largely predictive of the national popular vote. But 2016 was widely seen by non-researchers as disastrous. Pollsters and market researchers have a lot riding on the perceived accuracy of the 2020 polls.

The 2016 polls did a good job of predicting the national vote total, but in a large majority of cases final national polls were off in the direction of overpredicting the vote for Clinton and underpredicting the vote for Trump. That is pretty much a textbook definition of bias. Before the books are closed on the 2016 pollsters’ performance, it is important to note that the 2012 polls were off even further, mostly in the direction of overpredicting the vote for Romney and underpredicting the vote for Obama. The “bias,” although small, has swung back and forth between parties.

Election Day 2020 is in a few days and we may not know the final results for a while. It won’t be possible to truly know how the polls did for some weeks or months.

That said, there are reasons to believe that the 2020 polls will do an excellent job of predicting voter behavior and there are reasons to believe they may miss the mark.  

There are specific reasons why it is reasonable to expect that the 2020 polls will be accurate. So, what is different in 2020? 

  • There have been fewer undecided voters at all stages of the process. Most voters have had their minds made up well in advance of election Tuesday. This makes things simpler from a pollster’s perspective. A polarized and engaged electorate is one whose behavior is predictable. Figuring out how to partition undecided voters moves polling more in a direction of “art” than “science.”
  • Perhaps because of this, polls have been remarkably stable for months. In 2016, there was movement in the polls throughout and particularly over the last two weeks of the campaign. This time, the polls look about like they did weeks and even months ago.
  • Turnout will be very high. The art in polling is in predicting who will turn out and a high turnout election is much easier to forecast than a low turnout election.
  • There has been considerable early voting. There is always less error in asking about what someone has recently done than what they intend to do in the future. Later polls could ask many respondents how they voted instead of how they intended to vote.
  • There have been more polls this time. As our sample size of polls increases so does the accuracy. Of course, there are also more bad polls out there this cycle as well.
  • There have been more and better polls in the swing states this time. The true problem pollsters had in 2016 was with state-level polls. There was less attention paid to them, and because the national pollsters and media didn’t invest much in them, the state-level polling is where it all went wrong. This time, there has been more investment in swing-state polling.
  • The media invested more in polls this time. A hidden secret in polling is that election polls rarely make money for the pollster. This keeps many excellent research organizations from getting involved in them or dedicating resources to them. The ones that do tend to do so solely for reputational reasons. An increased investment this time has helped to get more researchers involved in election polling.
  • Response rates are up slightly. 2020 is the first year where we have seen the long-term decline in survey response rates stabilize and even tick up a little. This is likely a minor factor in the success of the 2020 polls, but it is in the right direction.
  • The race isn’t as close as it was in 2016. This one might only be appreciated by statisticians. Since variability is maximized in a 50/50 distribution, the further a race is from even, the more accurate a poll will be (see the short sketch after this list). This is another small factor in the direction of the polls being accurate in 2020.
  • There has not been late breaking news that could influence voter behavior. In 2016, the FBI director’s decision to announce a probe into Clinton’s emails came late in the campaign. There haven’t been any similar bombshells this time.
  • Pollsters started setting quotas and weighting on education. In the past, pollsters would balance samples on characteristics known to correlate highly with voting behavior – characteristics like age, gender, political party affiliation, race/ethnicity, and past voting behavior. In 2016, pollsters learned the hard way that educational attainment had become an additional characteristic to consider when crafting samples because voter preferences vary by education level. The good polls fixed that this go round.
  • In a similar vein, there has been tighter scrutiny of polling methodology. While the media can still be cavalier about digging into methodology, this time they were more likely to insist that pollsters outline their methods. This is the first time I can remember seeing news stories where pollsters were asked questions about methodology.
  • The notion that there are Trump supporters who intentionally lie to pollsters has largely been disproven by studies from very credible sources, such as Yale and Pew. Much more relevant is the pollster’s ability to predict turnout from both sides.
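For the statistically minded, the point about close races can be made precise. The standard error of a sample proportion follows the textbook formula below; this is generic sampling theory, not anything specific to these polls:

$$\mathrm{SE}(\hat{p}) = \sqrt{\frac{p(1-p)}{n}}$$

The product $p(1-p)$ peaks at $p = 0.5$ and shrinks as a race gets lopsided: with $n = 1{,}000$, the standard error is about 1.6 points at a 50/50 split but about 1.4 points at 70/30. All else equal, a poll of an uneven race is a little more precise.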

There are a few things going on that give the polls some potential to lay an egg.

  • The election will be decided by a small number of swing states. Swing state polls are not as accurate and are often funded by local media and universities that don’t have the funding or the expertise to do them correctly. The polls are close and less stable in these states. There is some indication that swing state polls have been tightening, and Biden’s lead in many of them isn’t much different from Clinton’s lead in 2016.
  • Biden may be making the same mistake Clinton made. This is a political and not a research-related reason, but in 2016 Clinton failed to aggressively campaign in the key states late in the campaign while Trump went all in. History could be repeating itself. Field work for final polls is largely over now, so the polls will not reflect things that happen the last few days.
  • If there is a wild-card that will affect polling accuracy in 2020, it is likely to center around how people are voting. Pollsters have been predicting election day voting for decades. In this cycle votes have been coming in for weeks and the methods and rules around early voting vary widely by state. Pollsters just don’t have past experience with early voting.
  • There is really no way for pollsters to account for potential disqualifications for mail-in votes (improper signatures, late receipts, legal challenges, etc.) that may skew to one candidate or another.
  • Similarly, any systematic voter suppression would likely cause the polls to underpredict Trump. These voters are available to poll, but may not be able to cast a valid vote.
  • There has been little mention of third-party candidates in polling results. The Libertarian candidate is on the ballot in all 50 states. The Green Party candidate is on the ballot in 31 states. Other parties have candidates on the ballot in some states but not others. These candidates aren’t expected to garner a lot of votes, but in a close election even a few percentage points could matter to the results. I have seen national polls from reputable organizations where they weren’t included.
  • While there is little credible data supporting the idea that there are “shy” Trump voters who are intentionally lying to pollsters, there still might be a social desirability bias that would undercount Trump’s support. That social desirability bias could be larger than it was in 2016, and it is still likely in the direction of underpredicting Trump’s vote count.
  • Polls (and research surveys) tend to underrepresent rural areas. Folks in rural areas are less likely to be in online panels and to cooperate on surveys. Few pollsters take this into account. (I have never seen a corporate research client correcting for this, and it has been a pet peeve of mine for years.) This is a sample coverage issue that will likely undercount the Trump vote.
  • Sampling has continued to get harder. Cell phone penetration has continued to grow, online panel quality has fallen, and our best option (ABS, or address-based sampling) is still far from random and so expensive it is beyond the reach of most polls.
  • “Herding” is a rarely discussed but very real polling problem. Herding refers to what happens when pollsters conduct a poll that doesn’t conform to what other polls are finding. These polls tend to get scrutinized and reweighted until they fit expectations, or even worse, buried and never released. Think about it – if you are a respected polling organization that conducted a recent poll showing Trump would win the popular vote, you’d review this poll intensely before releasing it, and you might choose not to release it at all because releasing a poll that looks different from the others might put your firm’s reputation at risk. The only polls I have seen that appear to be out of range are from smaller organizations that are likely willing to run the risk of predicting against the tide or that clearly have a political bias.

Once the dust settles, we will compose a post that analyzes how the 2020 polls did. For now, we feel there are more credible reasons to believe the polls will be seen as predictive than to feel that we are on the edge of a polling mistake. From a researcher’s standpoint, the biggest worry is that the polls will indeed be accurate but won’t match the vote totals because of technicalities in vote counting and legal challenges. That would reflect unfairly on the polling and research industries.

Common Misperceptions About Millennials

We’ve been researching Millennials literally since they have been old enough to fill out surveys. Over time, we have found that clients cling to common misperceptions of this generation and that the nature of these misperceptions hasn’t evolved as Millennials have come of age.

Millennials are the most studied generation in history, likely because they are such a large group (there are now more Millennials in the US than Boomers) and because they are poised to soon become a dominant force in the economy, in politics, and in our culture.

There are enduring misconceptions about Millennials. Many stem from our inability to grasp that Millennials are distinctly different from their Gen X predecessors. Perhaps the worst mistake we can make is to assume that Millennials will behave in an “X” fashion rather than view them as a separate group.

Below are some common misconceptions we see that relate to Millennials.

  • Today’s kids and teens are Millennials. This is false, as Millennials have now largely grown up. If you use the Howe/Strauss Millennial birth years, Millennials currently range from about 16 to 38 years old. If you prefer Pew’s breaks, Millennials are currently aged 23 to 38. Either way, Millennials are better thought of as being in a young adult/early career life stage than as teenagers.
  • Millennials are “digital natives” who know more about technology than other generations. This is, at best, partially true. The oldest Millennials, born in 1982, hardly grew up with today’s interactive technology. The iPhone came out in 2007, when the first Millennial was 25 years old. Millennials discovered these technologies along with the rest of us. A recent Pew study on technology ownership showed that Millennials do own more technology than Boomers and Xers, but that the gap isn’t all that large. For years we have counseled clients that parents and teachers are more technologically advanced than commonly thought. Don’t forget that the entrepreneurial creators of this technology are mainly Boomers and Xers, not Millennials.
  • Millennials are all saddled with college debt. We want to tread lightly here, as we would not want to minimize the issue of college debt, which affects many young people and constrains their lives in many ways. But we do want to put college debt in the proper perspective. The average Millennial has significant debt, but the bulk of the debt they hold is credit card debt, not college debt. College debt is just 16% of the total debt held by Millennials. According to the College Board, 29% of bachelor’s degree graduates have no college debt at all, 24% have under $20,000 in debt, 30% have between $20,000 and $30,000 in debt, and 31% have over $30,000 in college debt. The College Board also reports that a 4-year college graduate can expect to make about $25,000 per year more than a non-graduate. It is natural for people of all generations to have debt in their young adult/early professional life stage, and this isn’t unique to Millennials. What is unique is that their debt levels are high and multi-faceted. Our view is that college debt per se is not the core issue for Millennials, as most have manageable levels of college debt and college is a financially worthwhile investment for most of them. But college debt levels continue to grow, have a cascading effect, and lead to other types of debt. College debt is a problem, but mostly because it is a catalyst for other problems facing Millennials. So, this statement is true, but more nuanced than commonly perceived.
  • Millennials are fickle and not loyal to brands. This myth has held sway since before the generation was named. I cannot tell you how many market research projects I have conducted that have shown that Millennials are more brand loyal than other generations. They express positive views of products online at a rate many times greater than the level of complaints they express. Of course, they have typical young person behaviors of variety-seeking and exploration, but they live in a crazy world of information, misinformation, and choice. Brand loyalty is a defense mechanism for them.
  • Millennials are fickle and not loyal to employers. On the employer side, surveys show that Millennials seek stability in employment. They want to be continuously challenged and stay on a learning curve. We feel that issues with employer loyalty for Millennials go both ways and employers have become less paternalistic and value young employees less than in past times. That is the primary driver of Millennials switching employers. There are studies that suggest that Millennials are staying with employers longer than Gen X employees did.
  • Millennials are entrepreneurial. In reality, we expect Millennials to be perhaps the least entrepreneurial of all the modern generations. (We wrote an entire blog post on this issue.)
  • Millennials seek constant praise. This is the generation that grew up with participation trophies and gold stars on everything (provided by their Boomer parents). However, praise is not really what Millennials seek. Feedback is. They come from a world of online reviews, constant educational testing, and close supervision. The result is Millennials have a constant need to know where they stand. This is not the same as praise.
  • Millennials were poorly parented. The generation that was poorly parented was Gen X. These were the latch-key kids who were lightly supervised. Millennials have been close with their parents from birth. At college, the “typical” Millennial has contact with a parent more than 10 times per week. Upon graduation, many of them choose to live with, or near, their parents even when there is no financial need to do so. Their family ties are strong.
  • Millennials are all the same. Whenever we look at segments, we run a risk of typecasting people and assuming all segment members are alike.  The “art” of segmentation in a market research study is to balance the variability between segments with the variability within them in a way that informs marketers. Millennials are diverse. They are the most racially diverse generation in American history, they span a wide age range, they cover a range of economic backgrounds, and are represented across the political spectrum. The result is while there is value in understanding Millennials as a segment, there is no typical Millennial.

When composing this post, I typed “Millennials are …” into a Google search box. The first thing that came up to complete my query was “Millennials are lazy entitled narcissists.” When I typed “Boomers are …” the first result was “Boomers are thriving.”  When I typed “Gen X is …” the first result was “Gen X is tired.” This alone should convince you that there are serious misconceptions of all generations.

Millennials are the most educated, most connected generation ever. I believe that history will show that Millennials effectively corrected for the excesses of Boomers and set the country and the world on a better course.

Should we get rid of statistical significance?

There has been recent debate among academics and statisticians surrounding the concept of statistical significance. Some high-profile medical studies have just narrowly missed meeting the traditional statistical significance cutoff of 0.05. This has resulted in potentially life changing drugs not being approved by regulators or pursued for further development by pharma companies. These cases have led to a much-needed review and re-education as to what statistical significance means and how it should be applied.

In a 2014 blog post (Is This Study Significant?) we discussed common misunderstandings market researchers have regarding statistical significance. The recent debate suggests this misunderstanding isn’t limited to market researchers – it appears that academics and regulators have the same difficulty.

Statistical significance is a simple concept. However, it seems that the human brain just isn’t wired well to understand probability and that lies at the root of the problem.

A measure is typically classified as statistically significant if its p-value is 0.05 or less. Loosely speaking, this means that if there were truly no underlying difference, a result as large as the one observed would arise from chance or random fluctuation less than 5% of the time. (This is often read as a 19 out of 20 chance that two measures truly differ, but that is not quite what a p-value says, and that misreading is part of the problem.)
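For readers who want the formal version, the textbook definition is compact (this is standard statistics, not anything particular to market research):

$$p = P\big(\, |T| \ge |t_{\mathrm{obs}}| \;\big|\; H_0 \,\big)$$

In words: the p-value is the probability, computed under the null hypothesis $H_0$ of no true difference, of observing a test statistic $T$ at least as extreme as the one actually obtained, $t_{\mathrm{obs}}$.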

There are real problems with this approach. Foremost, there is nothing special about a 5% probability cutoff. Somewhere along the line it became a standard among academics, but it could just as easily have been 4% or 6% or some other number. The cutoff was chosen subjectively.

What are the chances that this 5% cutoff is optimal for all studies, regardless of the situation?

Regulators should look beyond statistical significance when they are reviewing a new medication. Let’s say a study was only significant at 6%, not quite meeting the 5% standard. That shouldn’t automatically disqualify a promising medication from consideration. Instead, regulators should look at the situation more holistically. What will the drug do? What are its side effects? How much pain does it alleviate? What is the risk of making mistakes in approval: in approving a drug that doesn’t work or in failing to approve a drug that does work? We could argue that the level of significance required in the study should depend on the answers to these questions and shouldn’t be the same in all cases.

The same is true in market research. Suppose you are researching a new product and the study is only significant at 10% and not the 5% that is standard. Whether you should greenlight the product for development depends on considerations beyond statistical significance. What is the market potential of the product? What is the cost of its development? What is the risk of failing to greenlight a winning idea or greenlighting a bad idea? Currently, too many product managers rely too much on a research project to give them answers when the study is just one of many inputs into these decisions.

There is another reason to rethink the concept of statistical significance in market research projects. Statistical significance assumes a random or probability sample. We can’t stress this enough – there hasn’t been a market research study conducted in at least 20 years that can credibly claim to have used a true probability sample of respondents. Some (most notably ABS samples) make a valiant attempt to do so, but they still violate the very basis for statistical significance.

Given that, why do research suppliers (Crux Research included) continue to do statistical testing on projects? Well, one reason is that clients have come to expect it. A more important reason is that statistical significance holds some meaning. On almost every study we need to draw a line and say that two data points are “different enough” to point out to clients and to draw conclusions from. Statistical significance is a useful tool for this. It just should no longer be viewed as a tool that lets us say precise things like “these two data points have a 95% chance of actually being different”.
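To make this concrete, below is a minimal sketch of the kind of test typically used to decide whether two data points are “different enough”: a two-proportion z-test, written in Python using only the standard library. The percentages and sample sizes are illustrative, and the test formally rests on the random-sampling assumption discussed above:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-tailed z-test for the difference between two sample proportions.
    Formally assumes both samples are random, which, as argued above,
    modern market research samples are not."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-tailed p-value
    return z, p_value

# Illustrative example: 48% of 500 respondents vs. 42% of 500 respondents
z, p = two_proportion_z_test(0.48, 500, 0.42, 500)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 1.91, p = 0.057
```

Note that a six-point gap across two samples of 500 narrowly misses the conventional 0.05 cutoff: exactly the kind of borderline result the preceding paragraphs argue should be judged in context rather than by a bright line.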

We’d rather use a probability approach and report to clients the chance that two data points would be different if we had been lucky enough to use a random sample. That is a much more useful way to look at data, but it probably won’t be used much until colleges start teaching it and a new generation of researchers emerges.

The current debate over the usefulness of statistical significance is a healthy one to have. Hopefully, it will cause researchers of all types to think deeper about how precise a study needs to be and we’ll move away from the current one-size-fits-all thinking that has been pervasive for decades.

Among college students, Bernie Sanders is the overwhelming choice for the Democratic nomination

Crux Research poll of college students shows Sanders at 23%, Biden at 16%, and all other candidates under 10%

ROCHESTER, NY – October 10, 2019 – Polling results released today by Crux Research show that if it were up to college students, Bernie Sanders would win the Democratic nomination for the US Presidency. Sanders is the favored candidate for the nomination among 23% of college students, compared to 16% for Joe Biden. Elizabeth Warren is favored by 8% of college students, followed by 7% support for Andrew Yang.

  • Bernie Sanders: 23%
  • Joe Biden: 16%
  • Elizabeth Warren: 8%
  • Andrew Yang: 7%
  • Kamala Harris: 6%
  • Beto O’Rourke: 5%
  • Pete Buttigieg: 4%
  • Tom Steyer: 3%
  • Cory Booker: 3%
  • Michael Bennet: 2%
  • Tulsi Gabbard: 2%
  • Amy Klobuchar: 2%
  • Julian Castro: 1%
  • None of these: 5%
  • Unsure: 10%
  • I won’t vote: 4%

The poll also presented five head-to-head match-ups. Each match-up suggests that the Democratic candidate currently has a strong edge over President Trump, with Sanders having the largest edge.

  • Sanders versus Trump: 61% Sanders; 17% Trump; 12% Someone Else; 7% Not Sure; 3% would not vote
  • Warren versus Trump: 53% Warren; 18% Trump; 15% Someone Else; 9% Not Sure; 5% would not vote
  • Biden versus Trump: 51% Biden; 18% Trump; 19% Someone Else; 8% Not Sure; 4% would not vote
  • Harris versus Trump: 48% Harris; 18% Trump; 20% Someone Else; 10% Not Sure; 4% would not vote
  • Buttigieg versus Trump: 44% Buttigieg; 18% Trump; 22% Someone Else; 11% Not Sure; 5% would not vote

The 2020 election could very well be determined by voter turnout among young people, which has traditionally been much lower than among older age groups.

###

Methodology
This poll was conducted online between October 1 and October 8, 2019. The sample size was 555 US college students (aged 18 to 29). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the US college student population.

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on a probability sample, a requirement for margin of error calculations. If this study had used probability sampling, the margin of error would be +/-4%.
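For context, the +/-4% figure matches what the standard formula gives for this sample size at 95% confidence, assuming the most conservative case of a 50/50 split. A quick check (illustrative only, since as noted the formula presumes probability sampling):

```python
from math import sqrt

n = 555     # completed interviews, from the methodology above
p = 0.5     # a 50/50 split maximizes the margin of error
z = 1.96    # multiplier for 95% confidence

moe = z * sqrt(p * (1 - p) / n)
print(f"+/-{moe * 100:.1f} points")  # about +/-4.2, reported as +/-4 above
```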

About Crux Research Inc.
Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.
Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit http://www.cruxresearch.com.

