Are our public places too noisy? Americans think so!

Crux Research recently conducted a poll for the American Speech-Language-Hearing Association. It found that many Americans are concerned about their exposure to noise when taking part in out-of-home leisure activities. Many also say that noise lessens their enjoyment of these activities and, at times, leads them to skip them altogether.

Perhaps most surprising is that Millennials were just about as likely as Boomers to be concerned about noise when taking part in leisure activities.

For more information on this poll, ASHA’s press release is here.

And, a detailed summary of the poll can be found here.

Americans value money and brains over looks

We recently posed a question on a national poll which required Americans to make an interesting choice:

If you could have one of the following, which would you choose?

  • I would have more money than I have today
  • I would be smarter than I am today
  • I would be better looking than I am today

This is a provocative cocktail party question. How would you answer it? How might your answer change depending on your life stage – would you answer it differently 15 years ago or 15 years into the future?

Across all ages (18+), 61% of Americans choose more money. It would be interesting to pose this question internationally to learn if this finding reflects American culture and capitalism or if this result reflects something universal to all people. Overall, 26% of US adults choose being smarter and 12% choose being better looking. So, it can be said that Americans value money and brains over looks.

We should note that there wasn’t a gender difference in the results. Males and females were equally likely to choose each of the three options. There were a couple of interesting racial differences. Hispanics were least likely to say they would like more money and most likely to say they would like to be smarter. Blacks were as likely as others to say “money” but were more likely than others to say “better looking” and less likely to say “smarter.”

But, by far the largest and most interesting differences in this question related to the generation of the respondent. We’ve seen the Millennial generation maligned quite a bit recently, hearing that they are entitled and a bit lazy. We’ve never quite believed that, as complaints that the youth of the day are disrespectful and lazy have been around since before the term “generation” was coined.

For instance, this is a quote from Socrates, and is about 2,400 years old:

“Children today are tyrants.  They contradict their parents, gobble their food, and tyrannize their teachers.”

Mark Twain, late in his life, had this to say about children:

“When a child turns 12 you should put him in a barrel, nail the lid down, and feed him through a knot hole… When he turns 16, plug the hole.”                                              

One of the more cynical (and unintentionally humorous) quotations about children came from Clarence Darrow, almost a century ago:

“The first half of our lives is ruined by our parents and the second half by our children.”

But, back to our poll question.  There are currently five living generations:

| Generation | First birth year | Final birth year | Current youngest member | Current oldest member |
|---|---|---|---|---|
| Silent | 1925 | 1942 | 75 | 92 |
| Boom | 1943 | 1960 | 57 | 74 |
| Gen X | 1961 | 1981 | 36 | 56 |
| Millennials | 1982 | 2004 | 13 | 35 |
| Homelanders | 2005 | 2017 | 0 | 12 |

Which one do you think would be the most apt to choose “more money” in our question? We’d presume that most people would predict it would be Millennials. But, in reality, it is Boomers who are most likely to say money:

| Generation | More Money | Smarter | Better Looking |
|---|---|---|---|
| Silent | 54% | 37% | 9% |
| Boom | 71% | 19% | 11% |
| Gen X | 65% | 26% | 10% |
| Millennials | 52% | 31% | 17% |
There are fascinating generational differences in this table.  Howe and Strauss have developed an excellent generational theory, and one aspect of it is that a generational cycle recurs through four archetypes. So, typically, the current youth generation will have a similar type and outlook to the oldest living generation. This theory is supported by the table above. It is the oldest (Silent) and youngest (Millennials) generations that are least concerned with money and relatively most concerned with being smarter.

Boomers come across as the most money-obsessed generation, which is interesting as they are in a life stage where personal net worth tends to peak. 71% of Boomers would prefer more money to being smarter or better looking. Of course, as with all generational conclusions, it could be more of a life stage issue at work – Boomers are currently between 57 and 74 years old, and perhaps the pre- and early-retirement years are particularly money-centric life stages. But we suspect that if we had tracked this question over time, Boomers would have been more focused on money than other generations at every life stage.

Finally, these results underscore a point we like to make with clients. It is challenging to fully understand a generation unless we widen the sampling frame and interview other generations as well. Had this question been asked only of Millennials, we might have concluded that money was an overriding concern for them. It is only when comparing them to other generations that we see they value intelligence more than others do.

NEW POLL SHOWS THAT IF US PRESIDENTIAL ELECTION WERE HELD AGAIN, INCREASED TURNOUT WOULD LIKELY RESULT IN A CLINTON VICTORY

Crux Research poll shows 92% of Trump voters and 91% of Clinton voters would not change their vote

ROCHESTER, NY – MARCH 12, 2017 – Polling results released today by Crux Research show that if there were a “do over” and the election were held again tomorrow, Hillary Clinton would likely win the Presidency.  But, this would not happen as a result of voters changing their vote – rather voters who didn’t turn out in the fall would provide an edge to Clinton.

In 2016, the popular vote was 48.0% for Hillary Clinton and 45.9% for Donald Trump (a gap of 2.1)[1].  This new poll shows that if the election were held again among these two candidates, the popular vote would be estimated to be 52.9% Clinton and 47.1% Trump (a gap of 5.8).

Further, few Clinton or Trump supporters would change their voting behaviors:

  • 92% of those who voted for Trump in November would vote for him again tomorrow.
  • 91% of those who voted for Clinton in November would vote for her again tomorrow.

A new election would bring out additional voters. 57% of those who did not vote in 2016 say they would vote, and their votes would split approximately 60% for Clinton and 40% for Trump. So, increased turnout would likely provide a decisive edge to Clinton.
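
As a rough illustration of how this arithmetic produces the 52.9% / 47.1% estimate, the sketch below combines the 2016 popular vote with the repeat-vote and new-voter figures above. The 60% turnout figure and the even split of 2016 third-party voters are simplifying assumptions for illustration, not findings from the poll.

```python
# Back-of-envelope sketch of the "do over" arithmetic described above.
# Assumptions (illustrative, not poll findings): roughly 60% of adults voted
# in 2016, and 2016 third-party voters split evenly between the two candidates.

turnout_2016 = 0.60          # assumed share of adults who voted in 2016
nonvoters = 1 - turnout_2016

# 2016 popular vote shares
clinton_2016, trump_2016, other_2016 = 0.480, 0.459, 0.060

# Poll findings: repeat-vote rates and new-voter behavior
clinton_repeat, trump_repeat = 0.91, 0.92
new_voter_rate = 0.57                    # 57% of 2016 non-voters would vote
new_clinton, new_trump = 0.60, 0.40      # new voters split ~60/40 for Clinton

# Votes for each candidate as a share of all adults
clinton = turnout_2016 * (clinton_2016 * clinton_repeat
                          + trump_2016 * 0.01          # 1% of Trump voters switch (from the poll)
                          + other_2016 * 0.5)          # assumed even split of 2016 third-party voters
trump = turnout_2016 * (trump_2016 * trump_repeat
                        + clinton_2016 * 0.01
                        + other_2016 * 0.5)
clinton += nonvoters * new_voter_rate * new_clinton
trump += nonvoters * new_voter_rate * new_trump

two_way = clinton + trump
print(f"Clinton {clinton / two_way:.1%}, Trump {trump / two_way:.1%}")
# Lands close to the 52.9% / 47.1% poll estimate.
```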

A closer look at swing states (the five states where the winner’s margin was 2 percentage points or less[2]) shows that Clinton would win these states by a gap of 9.3 points, likely enough to change the election result.

WHO WOULD WIN TOMORROW?
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

| | Actual 2016 Election Result | March 2017 Crux Research Poll* |
|---|---|---|
| Donald Trump | 45.9% | 47.1% |
| Hillary Clinton | 48.0% | 52.9% |
| Others | 6.0% | |

*2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.

VOTE SWITCHING BEHAVIOR
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

| | Voted for Trump in 2016 | Voted for Clinton in 2016 |
|---|---|---|
| Donald Trump | 92% | 1% |
| Hillary Clinton | 1% | 91% |
| Others | 4% | 7% |
| Wouldn’t vote | 2% | 1% |

SWING STATES RESULTS
Suppose there was a “do over” and the US presidential election were held again tomorrow. Whom would you vote for?

| | Actual 2016 Election Result in Swing States | Swing States March 2017 Crux Research Poll* |
|---|---|---|
| Donald Trump | 48.0% | 47.1% |
| Hillary Clinton | 47.2% | 52.9% |
| Others | 4.8% | |

*2017 Crux Research poll is among those who say they would vote if the election were held again tomorrow.
** Swing states are the five states where the election was decided by 2 percentage points or less (PA, MI, WI, FL, and NH).

###

Methodology

This poll was conducted online between March 6 and March 10, 2017. The sample size was 1,010 US adults (aged 18 and over). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the population.  The poll was also balanced to reflect the actual proportion of voters who voted for each candidate in the 2016 election.
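
As an illustration of how this kind of weighting works, the sketch below computes weights for a single demographic dimension so the weighted sample matches known population proportions. The age groups, population shares, and sample counts are invented for illustration and are not the actual figures used in this poll.

```python
# Minimal sketch of cell-based weighting: respondents in each demographic cell
# are weighted so the weighted sample matches known population proportions.
# All cells and percentages below are invented for illustration.
from collections import Counter

# Invented population distribution for one weighting dimension (age group)
population_share = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}

# Invented respondent age groups (n = 1,010)
respondents = ["18-34"] * 220 + ["35-54"] * 350 + ["55+"] * 440

sample_counts = Counter(respondents)
n = len(respondents)

# Weight for each cell = population share / sample share
weights = {cell: population_share[cell] / (count / n)
           for cell, count in sample_counts.items()}

for cell, w in weights.items():
    print(f"{cell}: sample {sample_counts[cell] / n:.1%}, weight {w:.2f}")
# Under-represented cells get weights above 1, over-represented cells below 1.
```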

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error.  The term “margin of error” is misleading for online polls, which are not based on a probability sample, a requirement for margin of error calculations.  If this study had used probability sampling, the margin of error would be +/-3%.
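
For reference, the +/-3% figure follows from the standard margin of error formula for a simple random sample of 1,010 at 95% confidence and the most conservative proportion (p = 0.5); a quick check:

```python
# Margin of error a probability sample of this size would carry,
# at 95% confidence and the most conservative proportion (p = 0.5).
import math

n = 1010                       # poll sample size
p = 0.5                        # worst-case proportion
z = 1.96                       # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"{moe:.1%}")            # ~3.1%, i.e. roughly +/-3%
```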

About Crux Research Inc.

Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.

Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit www.cruxresearch.com.

[1] http://uselectionatlas.org/RESULTS/index.html

[2] PA, MI, WI, FL, and NH were decided by 2 percentage points or less.

Let’s Make Research and Polling Great Again!


The day after the US Presidential election, we quickly wrote and posted about the market research industry’s failure to accurately predict the election.  Since this has been our widest-read post (by a factor of about 10!) we thought a follow-up was in order.

Some of what we predicted has come to pass. Pollsters are being defensive, claiming their polls really weren’t that far off, and are not reaching very deep to try to understand the core of why their predictions were poor. The industry has had a couple of confabs, where the major players have denied a problem exists.

We are at a watershed moment for our industry. Response rates continue to plummet, clients are losing confidence in the data we provide, and we are swimming in so much data our insights are often not able to find space to breathe. And the public has lost confidence in what we do.

Sometimes it is everyday conversations that can illuminate a problem. Recently, I was staying at an AirBnB in Florida. The host (Dan) was an ardent Trump supporter, and at one point he asked me what I did for a living. When I told him I was a market researcher the conversation quickly turned to why the polls failed to accurately predict the winner of the election. Talking with Dan, I quickly realized the implications of Election 2016 polling for our industry. He felt that we can now safely ignore all polls – on issues, approval ratings, voter preferences, etc.

I found myself getting defensive. After all, the polls weren’t off that much.  In fact, they were actually off by more in 2012 than in 2016 – the problem being that this time the polling errors resulted in an incorrect prediction. Surely we can still trust polls to give a good sense of what our citizenry thinks about the issues of the day, right?

Not according to Dan. He didn’t feel our political leaders should pay attention to the polls at all because they can’t be trusted.

I’ve even seen a new term for this bandied about:  poll denialism. It is a refusal to believe any poll results because of their past failures. Just the fact that this has been named should be scary enough for researchers.

This is unnerving not just to the market research industry, but to our democracy in general.  It is rarely stated overtly, but poll results are a key way political leaders keep in touch with the needs of the public, and they shape public policy a lot more than many think. Ignoring them is ignoring public opinion.

Market research remains closely associated with political polling. While I don’t think clients have become as mistrustful about their market research as the public has become about polling, clients likely have their doubts. Much of what we do as market researchers is much more complicated than election polling. If we can’t successfully predict who will be President, why would a client believe our market forecasts?

We are at a defining moment for our industry – a time when clients and suppliers will realize this industry has gone adrift and needs to right its course. So what can we do to make research great again? We have a few ideas.

  1. First and foremost, if you are a client, make greater demands for data quality. Nothing will stimulate the research industry more to fix itself than market forces – if clients stop paying for low quality data and information, suppliers will react.
  2. Slow down! There is a famous saying about all projects. They have three elements that clients want: a) fast, b) good, and c) cheap, and on any project you can choose two of these. In my nearly three decades in this industry I have seen this dynamic change considerably. These days, “fast” is almost always trumping the other two factors. “Good” has been pushed aside. “Cheap” has always been important, but to be honest budget considerations don’t seem to be the main issue (MR spending continues to grow slowly). Clients are insisting that studies be conducted at a breakneck pace, and data quality is suffering badly.
  3. Insist that suppliers defend their methodologies. I’ve worked for corporate clients, but also many academic researchers. I have found that a key difference between them becomes apparent during results presentations. Corporate clients are impatient and want us to go as quickly as possible over the methodology section and get right into the results.  Academics are the opposite. They dwell on the methodology and I have noticed if you can get an academic comfortable with your methods it is rare that they will doubt your findings. Corporate researchers need to understand the importance of a sound methodology and care more about it.
  4. Be honest about the limitations of your methodology. We often like to say that everything you were ever taught about statistics assumed a random sample, and we haven’t seen a study in at least 20 years that can credibly claim to have one. That doesn’t mean a study without a random sample isn’t valuable; it just means that we have to think through the biases and errors it could contain and how they might be relevant to the results we present. I think every research report should have a page after the methodology summary that lists the study’s limitations and their potential implications for the conclusions we draw.
  5. Stop treating respondents so poorly. I believe this is a direct consequence of the movement from telephone to online data collection. Back in the heyday of telephone research, if you fielded a survey that was too long or was challenging for respondents to answer, it wasn’t long until you heard from your interviewers just how bad your questionnaire was. In an online world, this feedback never gets back to the questionnaire author – and we subsequently beat up our respondents pretty badly. I have been involved in at least 2,000 studies and about 1 million respondents. If each study averages 15 minutes, that implies people have spent about 28 and a half years filling out my surveys (the back-of-envelope arithmetic is sketched after this list). It is easy to take that for granted – but we shouldn’t forget the tremendous amount of time people give to our surveys. In the end, this is a large threat to the research industry: if people won’t respond, we have nothing to sell.
  6. Stop using technology for technology’s sake. Technology has greatly changed our business. But, it doesn’t supplant the basics of what we do or allow us to ignore the laws of statistics.  We still need to reach a representative sample of people, ask them intelligent questions, and interpret what it means for our clients.  Tech has made this much easier and much harder at the same time.  We often seem to do things because we can and not because we should.
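
For what it’s worth, the back-of-envelope arithmetic behind the “28 and a half years” figure in point 5 is below; the respondent count and 15-minute average are the rough figures cited there, not precise measurements.

```python
# Rough arithmetic behind the "28 and a half years" figure in point 5.
# The respondent count and average survey length are the approximate
# figures cited there, not precise measurements.
respondents = 1_000_000          # roughly 1 million respondents across ~2,000 studies
minutes_per_survey = 15          # assumed average survey length

total_minutes = respondents * minutes_per_survey
years = total_minutes / (60 * 24 * 365)
print(f"{years:.1f} years of respondent time")   # ~28.5 years
```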

The ultimate way to combat “poll denialism” in a “post-truth” world is to do better work, make better predictions, and deliver insightful interpretations. That is what we all strive to do, and it is more important than ever.

 

Battle of the Brands is available for purchase!


How does your brand compete with others in the battle to win today’s youth?

Crux Research has conducted a syndicated study of 57 youth-oriented brands that is available for purchase on Collaborata.  We have a “data only” option for sale for $4,900 and an option including a full report and consultation/presentation for $9,500.

Brands that succeed with Millennials can enjoy their loyalty for years to come. The 13- to 24-year-olds covered in this study are often given short shrift by brands that have a more adult target. That can prove to be short-sighted thinking. Teens and young adults not only spend significant amounts of their own money, they also influence the spending of parents, siblings, and other adults in their lives. They are the adult shoppers of the future; building a relationship with them now can translate into loyalty that lasts their lifetime. This study shows you exactly how your brand fares among this critical cohort right now and what you need to do to increase young consumers’ engagement with your brand.

More information about this study can be found here.

Objectives for our “Battle of the Brands” project are as follows:

  • Compare and contrast the relative strengths across a variety of measures of 57 youth-oriented brands.
  • See how your brand is “personalized” — learn where it statistically maps across 32 brand personality dimensions.
  • Discover how the 57 brands fare on the key measures of Awareness, Brand Interaction, Brand Connection, Brand Popularity, and Motivation.
  • Take away key insights into why some brands succeed, while others struggle, with Millennial and Gen Z consumers.
  • These brands have been selected from a wide range of categories, including social causes, media and entertainment, retail, technology, and consumer packaged goods.

Become a co-sponsor of this actionable study today! Increase your brand’s youth standing tomorrow.

Happy Birthday to Us!


This month, Crux Research turns 11 years old. What started as something transitional for us as we looked for the next big thing quickly morphed into the next big thing itself.

Since our start, we have conducted 300+ projects for 65+ clients across a wide range of industries and causes. At this point, we feel we know a little bit about everything.

We’ve bucked a few trends along the way. We’ve never had a business plan and have never really looked past the next few months. We’ve resisted pressure to grow to a larger company. We don’t necessarily go where the opportunities are and instead prefer to work on projects and with clients that interest us. We’ve also eschewed the normal business week, and work nights, weekends, etc.

Our ability to collect incredible people as clients has only been surpassed by our good fortune to attract staff and helpers. A special thanks to our staff members and our “bench” who have been helping out our team throughout the years.

Onward!  Happy Holidays to all. May your response rates be high and all of your confidence intervals be +/-5%!

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning.” Their predictions were never meant to be construed as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of a bias – something systematic that affected all predictions must have happened.

Pollsters will claim that the outcome was within the margin of error. But, claiming a “margin of error” defense is statistically suspect, as margins of error only apply to random or probability samples, and none of these polls can claim to have a random sample. FiveThirtyEight also had Clinton with 302 electoral votes, way beyond any reasonable error rate.

Regardless, the end result will land barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters at all. All it means is that rather than their estimate being accurate 95% of the time, it was predicted to be accurate a little bit less: between 80% and 90% of the time for most of these polls, by my calculations.
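
One way to arrive at a figure in that range is to ask how often a nominal +/-3 point interval would cover the true result if every poll shared a systematic error of a point or two. The sketch below makes that calculation explicit; the size of the assumed shared error is illustrative, not an estimate from the 2016 polls.

```python
# How nominal 95% coverage degrades when every poll shares a systematic error.
# The 1-, 1.5-, and 2-point biases below are illustrative assumptions.
from statistics import NormalDist

moe = 3.0                     # nominal +/- margin of error, in points
se = moe / 1.96               # implied standard error (~1.53 points)

for bias in (1.0, 1.5, 2.0):  # assumed shared systematic error, in points
    dist = NormalDist(mu=bias, sigma=se)
    coverage = dist.cdf(moe) - dist.cdf(-moe)
    print(f"bias {bias:.1f} pts -> interval covers truth {coverage:.0%} of the time")
# Roughly 90%, 83%, and 74%: a shared error of a point or so is enough to pull
# coverage from the nominal 95% down into the 80-90% range.
```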

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it does not seem that the problem had anything to do with the questions themselves. In reality, most pollsters use very similar questions to gather voter preferences and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself seems to be an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred. There are many reasons this could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing, and simply not being able to reach some key types of voters all play into it. (A minimal simulation of this effect appears after this list.)
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer but it happens in online polls as well. This is when the respondent tells you what you want to hear or what he or she thinks is socially acceptable. A good example of this is if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll.  People tend to answer how they think they are supposed to, especially when responding to an interviewer.   In this case, let’s take the response bias away.  Suppose pollsters reached every single voter who actually showed up in a poll. If we presume “Trump” was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll.  There is evidence this could have happened, as polls with live interviewers had a wider Clinton to Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson’s support is going to end up being about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote from Johnson in the last few weeks might have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before and they often had widely divergent results. But, if you look closely you will see that as the election neared, polling results started to converge. The reason could be that if a pollster had a poll that looked like an outlier, they probably took a closer look at it, toyed with how the sample was weighted, or decided to bury the poll altogether. It is possible that there were some accurate polls out there that declared a Trump victory, but the pollsters didn’t release them.
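
To make the first point concrete, here is a minimal simulation of non-response bias; the candidate support level and response rates are invented purely for illustration.

```python
# Minimal illustration of non-response bias: if one candidate's supporters are
# even slightly less likely to answer a poll, the poll misses, no matter how
# large the sample. All numbers below are invented for illustration.
import random

random.seed(7)

TRUE_SUPPORT_A = 0.48                    # true share supporting candidate A
RESPONSE_RATE = {"A": 0.08, "B": 0.07}   # B's supporters respond slightly less often

population = ["A" if random.random() < TRUE_SUPPORT_A else "B"
              for _ in range(1_000_000)]

respondents = [v for v in population if random.random() < RESPONSE_RATE[v]]

poll_estimate = sum(v == "A" for v in respondents) / len(respondents)
print(f"true support for A: {TRUE_SUPPORT_A:.1%}")
print(f"poll estimate:      {poll_estimate:.1%}")   # overstates A by roughly 3 points
```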

I’d also submit that the reasons for the polling failure are likely not completely specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidency, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At a minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But, this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult – and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about.  However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality, potential errors and biases, and that they will value quality more. Those conversations will go a long way to putting a great industry back on the right path.

Will Young People Vote?


Once again we are in an election cycle where the results could hinge on a simple question:  will young people vote? Galvanizing youth turnout is a key strategy for all candidates. It is perhaps not an exaggeration to say that Millennial voters hold the key to the future political leadership of the country.

But, this is nothing specific to Millennials or to this election. Young voters have effectively been the “swing vote” since the election of Kennedy in 1960. Yet, young voter turnout is consistently low relative to other age groups.

The 26th Amendment was ratified in 1971, lowering the voting age from 21 to 18 and giving 18- to 20-year-olds the right to vote for the first time. This means that anyone born in 1953 or later has never been of age at a time when they could not vote in a Presidential election. So, only those who are currently 64 or older (approximately) will have turned 18 at a time when they were not enfranchised.

This right did not come easily. The debate about lowering the voting age started in earnest during World War II, as many soldiers under 21 (especially those drafted into the armed forces) didn’t understand how they could be expected to sacrifice so much for a country if they did not have a say in how it was governed. The movement gained steam during the cultural revolution of the 1960s and culminated in the passage of the 26th Amendment.

Young people celebrated their newfound right to vote, and then promptly failed to take advantage of it. The chart below shows 18-24 year old voter turnout compared to total voter turnout for all Presidential election years since the 26th Amendment was ratified.

[Chart: voter turnout among 18-24 year olds vs. total voter turnout, Presidential election years since the 26th Amendment was ratified]

Much was made of Obama’s success in galvanizing the young vote in 2008. However, there was only a 2 percentage point increase in young voter turnout in 2008 versus 2004. As the chart shows, there was a big falloff in young voter participation in 1996 and 2000, which were the last elections before Millennials comprised the bulk of the 18-24 age group.

The fact remains that young voters are far less likely to vote than older adults, and that trend is likely to continue.