Posts Tagged 'Crux Research'

Oops, the polls did it again

Many people had trouble sleeping last night wondering if their candidate was going to be President. I couldn’t sleep because, as the night wore on, it became clear that this wasn’t going to be a good night for the polls.

Four years ago on the day after the election I wrote about the “epic fail” of the 2016 polls. I couldn’t sleep last night because I realized I was going to have to write another post about another polling failure. While the final vote totals may not be in for some time, it is clear that the 2020 polls are going to be off on the national vote even more than the 2016 polls were.

Yesterday, on election day, I received an email from a fellow market researcher and business owner. We are involved in a project together, and he was lamenting how poor the data quality had been in his recent studies and wondering whether we were having the same problems.

In 2014 we wrote a blog post cautioning our clients that we were detecting poor-quality interviews that needed to be discarded about 10% of the time: we were having to throw away about 1 in 10 of the interviews we collected.

Six years later that percentage has risen to between 33% and 45%, and we tend to be conservative in the interviews we toss. It is fair to say that in most market research studies today, between a third and a half of the interviews being collected are, for lack of a better term, junk.

It has gotten so bad that new firms have sprung up to sit between sample providers and online questionnaires and protect against junk interviews. They guard against bots, survey farms, duplicate interviews, and the like. The mere fact that these firms, and terms like “survey farms,” exist should give researchers pause regarding data quality.
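
To make this tangible, here is a minimal sketch of the kinds of checks these go-between firms and research teams run. The thresholds and field names are illustrative assumptions for the sketch, not any particular vendor’s rules:

```python
# Illustrative quality checks for online interviews. Thresholds and
# field names are assumptions for this sketch, not any vendor's rules.

def junk_flags(iv, median_seconds):
    """Return the reasons an interview looks like junk (empty = keep)."""
    flags = []
    if iv["seconds"] < 0.4 * median_seconds:        # speeder
        flags.append("speeder")
    if len(iv["grid"]) >= 5 and len(set(iv["grid"])) == 1:
        flags.append("straight-liner")              # same answer down a grid
    text = iv["open_end"].strip().lower()
    if len(text) < 3 or not any(c in "aeiou" for c in text):
        flags.append("gibberish open-end")          # empty or mashed keys
    return flags

interviews = [
    {"seconds": 610, "grid": [4, 2, 5, 3, 4], "open_end": "Liked the taste."},
    {"seconds": 95,  "grid": [3, 3, 3, 3, 3], "open_end": "dfgdfg"},
]
kept = [iv for iv in interviews if not junk_flags(iv, median_seconds=600)]
print(f"kept {len(kept)} of {len(interviews)}")     # kept 1 of 2
```

Real screening adds duplicate and bot detection (device fingerprints, IP checks, trap questions), but the flavor is the same: simple rules that catch the worst offenders.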

When I started in market research in the late ’80s/early ’90s, we had a spreadsheet program that was used to help us cost out projects. One parameter in this spreadsheet was “refusal rate”: the percentage of respondents who would outright refuse to take part in a study. While the refusal rate varied by study, the default assumption in this program was 40%, meaning that on average we expected respondents to cooperate 60% of the time.

According to Pew and AAPOR, by 2018 the cooperation rate for telephone surveys was 6% and falling rapidly.

Cooperation rates in online surveys are much harder to calculate in a standardized way, but most estimates I have seen and my own experience suggest that typical cooperation rates are about 5%. That means for a 1,000-respondent study, at least 20,000 emails are sent, which is about four times the population of the town I live in.

This is all background to explain why the 2020 polls appear to be headed for a historic failure. Election polls are the public face of the market research industry. Relative to most research projects, they are very simple. The problems pollsters have faced in the last few cycles are emblematic of something those working in research know but rarely like to discuss: the quality of data collected for research and polls has been declining, and that should alarm researchers.

I could go on about the causes of this. We’ve tortured our respondents for a long time. Despite claims to the contrary, we haven’t been able to generate anything close to a probability sample in years. Our methodologists have gotten cocky and feel like they can weight any sampling anomalies away. Clients are forcing us to conduct projects on timelines that make it impossible to guard against poor quality data. We focus on sampling error and ignore more consequential errors. The panels we use have become inbred and gather the same respondents across sources. Suppliers are happy to cash the check and move on to the next project.

This is the research conundrum of our times: in a world where we collect more data on people’s behavior and attitudes than ever before, the quality of the insights we glean from these data is in decline.

Post-2016, the polling industry brain trust rationalized, claimed that the polls actually did a good job, convened some conferences to discuss the polls, and made modest methodological changes. Almost all of these changes related to sampling and weighting. But, as the 2020 polling miss appears to be well beyond what can be explained by sampling (last night I remarked to my wife that “I bet the p-value of this being due to sampling is about 1 in 1,000”), I feel that pollsters have addressed the wrong problem.

None of the changes pollsters made addressed the long-term problems researchers face with data quality. When you have a response rate of 5% and have to throw away up to half of the interviews you do get, the errors that can arise are orders of magnitude greater than the errors generated by sampling and weighting mistakes.
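
The funnel arithmetic is worth spelling out. A quick sketch using the rates above:

```python
# Invitations needed for 1,000 usable interviews when 5% of invitees
# respond and only half of the completes survive quality checks.
target, response_rate, keep_rate = 1_000, 0.05, 0.50
invitations = target / (response_rate * keep_rate)
print(f"{invitations:,.0f} invitations")   # 40,000 invitations
```

Forty thousand invitations to produce a thousand interviews we can actually use. That funnel is the backdrop for everything that follows.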

I don’t want to sound like I have the answers. Just a few days ago I posted that I thought that, on balance, there were more reasons to conclude that the polls would do a good job this time than to conclude that they would fail. When I look through my list of potential reasons the polls might fail, nothing leaps out at me as an obvious cause, so perhaps the problem is multi-faceted.

What I do know is the market research industry has not done enough to address data quality issues. And every four years the polls seem to bring that into full view.

Will the polls be right this time?

The 2016 election was damaging to the market research industry. The popular perception has been that in 2016 the pollsters missed the mark and miscalled the winner. In reality, the 2016 polls were largely predictive of the national popular vote, but non-researchers widely saw 2016 as disastrous. Pollsters and market researchers have a lot riding on the perceived accuracy of the 2020 polls.

The 2016 polls did a good job of predicting the national vote total, but in a large majority of cases the final national polls were off in the direction of overpredicting the vote for Clinton and underpredicting the vote for Trump. That is pretty much a textbook definition of bias. Before the books are closed on the 2016 pollsters’ performance, it is important to note that the 2012 polls were off even further, mostly in the direction of overpredicting the vote for Romney and underpredicting the vote for Obama. The “bias,” although small, has swung back and forth between parties.

Election Day 2020 is in a few days and we may not know the final results for a while. It won’t be possible to truly know how the polls did for some weeks or months.

That said, there are reasons to believe that the 2020 polls will do an excellent job of predicting voter behavior and there are reasons to believe they may miss the mark.  

There are specific reasons why it is reasonable to expect that the 2020 polls will be accurate. So, what is different in 2020? 

  • There have been fewer undecided voters at all stages of the process. Most voters have had their minds made up well in advance of election Tuesday. This makes things simpler from a pollster’s perspective. A polarized and engaged electorate is one whose behavior is predictable. Figuring out how to partition undecided voters moves polling more in the direction of “art” than “science.”
  • Perhaps because of this, polls have been remarkably stable for months. In 2016, there was movement in the polls throughout and particularly over the last two weeks of the campaign. This time, the polls look about like they did weeks and even months ago.
  • Turnout will be very high. The art in polling is in predicting who will turn out and a high turnout election is much easier to forecast than a low turnout election.
  • There has been considerable early voting. There is always less error in asking about what someone has recently done than what they intend to do in the future. Later polls could ask many respondents how they voted instead of how they intended to vote.
  • There have been more polls this time. As our sample size of polls increases so does the accuracy. Of course, there are also more bad polls out there this cycle as well.
  • There have been more and better polls in the swing states this time. The true problem pollsters had in 2016 was with state-level polls. There was less attention paid to them, and because the national pollsters and media didn’t invest much in them, the state-level polling is where it all went wrong. This time, there has been more investment in swing-state polling.
  • The media invested more in polls this time. A hidden secret in polling is that election polls rarely make money for the pollster. This keeps many excellent research organizations from getting involved in them or dedicating resources to them. The ones that do tend to do so solely for reputational reasons. An increased investment this time has helped to get more researchers involved in election polling.
  • Response rates are up slightly. 2020 is the first year in which we have seen the long-term decline in survey response rates stabilize and even tick up a little. This is likely a minor factor in the success of the 2020 polls, but it is in the right direction.
  • The race isn’t as close as it was in 2016. This one might only be appreciated by statisticians. Since variability is maximized in a 50/50 distribution, the further a race is from even, the more accurately a poll can measure it (see the short sketch after this list). This is another small factor in the direction of the polls being accurate in 2020.
  • There has not been late breaking news that could influence voter behavior. In 2016, the FBI director’s decision to announce a probe into Clinton’s emails came late in the campaign. There haven’t been any similar bombshells this time.
  • Pollsters started setting quotas and weighting on education. In the past, pollsters would balance samples on characteristics known to correlate highly with voting behavior – characteristics like age, gender, political party affiliation, race/ethnicity, and past voting behavior. In 2016, pollsters learned the hard way that educational attainment had become an additional characteristic to consider when crafting samples because voter preferences vary by education level. The good polls fixed that this go round.
  • In a similar vein, there has been tighter scrutiny of polling methodology. While the media can still be cavalier about digging into methodology, this time they were more likely to insist that pollsters outline their methods. This is the first time I can remember seeing news stories in which pollsters were asked questions about methodology.
  • The notion that there are Trump supporters who intentionally lie to pollsters has largely been disproven by studies from very credible sources, such as Yale and Pew. Much more relevant is the pollster’s ability to predict turnout from both sides.
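
Here is the short sketch promised above. The standard error of an estimated proportion is sqrt(p(1-p)/n), which peaks at p = 0.5, so the same-size poll measures a lopsided race a bit more precisely than an even one (illustrative numbers):

```python
import math

def std_err(p, n):
    """Standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

n = 1_000
for p in (0.50, 0.54, 0.60):
    print(f"p = {p:.2f}: se = {std_err(p, n):.4f}")
# p = 0.50: se = 0.0158
# p = 0.54: se = 0.0158  (a hair smaller before rounding)
# p = 0.60: se = 0.0155
```

The effect is real but small, which is why it is listed as a minor factor.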

There are a few things going on that give the polls some potential to lay an egg.

  • The election will be decided by a small number of swing states. Swing state polls are not as accurate and are often funded by local media and universities that don’t have the funding or the expertise to do them correctly. The polls are close and less stable in these states. There is some indication that swing state polls have been tightening, and Biden’s lead in many of them isn’t much different from Clinton’s lead in 2016.
  • Biden may be making the same mistake Clinton made. This is a political and not a research-related reason, but in 2016 Clinton failed to aggressively campaign in the key states late in the campaign while Trump went all in. History could be repeating itself. Field work for final polls is largely over now, so the polls will not reflect things that happen the last few days.
  • If there is a wild-card that will affect polling accuracy in 2020, it is likely to center around how people are voting. Pollsters have been predicting election day voting for decades. In this cycle votes have been coming in for weeks and the methods and rules around early voting vary widely by state. Pollsters just don’t have past experience with early voting.
  • There is really no way for pollsters to account for potential disqualifications for mail-in votes (improper signatures, late receipts, legal challenges, etc.) that may skew to one candidate or another.
  • Similarly, any systematic voter suppression would likely cause the polls to underpredict Trump. These voters are available to poll, but may not be able to cast a valid vote.
  • There has been little mention of third-party candidates in polling results. The Libertarian candidate is on the ballot in all 50 states. The Green Party candidate is on the ballot in 31 states. Other parties have candidates on the ballot in some states but not others. These candidates aren’t expected to garner a lot of votes, but in a close election even a few percentage points could matter to the results. I have seen national polls from reputable organizations where they weren’t included.
  • While there is little credible data supporting the idea that there are “shy” Trump voters intentionally lying to pollsters, there still might be a social desirability bias that undercounts Trump’s support. That social desirability bias could be larger than it was in 2016, and it is still likely in the direction of underpredicting Trump’s vote count.
  • Polls (and research surveys) tend to underrepresent rural areas. Folks in rural areas are less likely to be in online panels and to cooperate on surveys. Few pollsters take this into account. (I have never seen a corporate research client correcting for this, and it has been a pet peeve of mine for years.) This is a sample coverage issue that will likely undercount the Trump vote.
  • Sampling has continued to get harder. Cell phone penetration has continued to grow, online panel quality has fallen, and our best option (ABS sampling) is still far from random and so expensive it is beyond the reach of most polls.
  • “Herding” is a rarely discussed but very real polling problem. Herding refers to what happens when pollsters conduct a poll that doesn’t conform to what other polls are finding. These polls tend to get scrutinized and reweighted until they fit expectations or, even worse, get buried and never released. Think about it: if you are a respected polling organization whose recent poll showed Trump winning the popular vote, you would review that poll intensely before releasing it, and you might choose not to release it at all, because releasing a poll that looks different from the others could put your firm’s reputation at risk. The only polls I have seen that appear to be out of range are from smaller organizations that are likely willing to run the risk of predicting against the tide or that clearly have a political bias to them.

Once the dust settles, we will compose a post that analyzes how the 2020 polls did. For now, we feel there are more credible reasons to believe the polls will be seen as predictive than to feel that we are on the edge of a polling mistake. From a researcher’s standpoint, the biggest worry is that the polls will indeed be accurate but won’t match the vote totals because of technicalities in vote counting and legal challenges. That would reflect unfairly on the polling and research industries.

Common Misperceptions About Millennials

We’ve been researching Millennials literally since they have been old enough to fill out surveys. Over time, we have found that clients cling to common misperceptions of this generation and that the nature of these misperceptions hasn’t evolved as Millennials have come of age.

Millennials are the most studied generation in history, likely because they are such a large group (there are now more Millennials in the US than Boomers) and because they are poised to soon become a dominant force in the economy, in politics, and in our culture.

There are enduring misconceptions about Millennials. Many stem from our inability to grasp that Millennials are distinctly different from their Gen X predecessors. Perhaps the worst mistake we can make is to assume that Millennials will behave in an “X” fashion rather than view them as a separate group.

Below are some common misconceptions we see that relate to Millennials.

  • Today’s kids and teens are Millennials. This is false, as Millennials have now largely grown up. If you use the Howe/Strauss Millennial birth years, Millennials currently range from about 16 to 38 years old. If you prefer Pew’s breaks, Millennials are currently aged 23 to 38. Either way, Millennials are better thought of as being in a young adult/early career life stage than as teenagers.
  • Millennials are “digital natives” who know more about technology than other generations. This is, at best, partially true. The first half of the generation, born beginning in 1982, hardly grew up with today’s interactive technology. The iPhone came out in 2007, when the first Millennial was 25 years old. Millennials discovered these technologies along with the rest of us. A recent Pew study on technology ownership showed that Millennials do own more technology than Boomers and Xers, but that the gap isn’t all that large. For years we have counseled clients that parents and teachers are more technologically advanced than commonly thought. Don’t forget that the entrepreneurial creators of this technology are mainly Boomers and Xers, not Millennials.
  • Millennials are all saddled with college debt. We want to tread lightly here, as we would not want to minimize the issue of college debt, which affects many young people and constrains their lives in many ways. But we do want to put college debt in the proper perspective. The average Millennial has significant debt, but the bulk of the debt they hold is credit card debt, not college debt. College debt is just 16% of the total debt held by Millennials. According to the College Board, 29% of bachelor’s degree graduates have no college debt at all, 24% have under $20,000 in debt, 30% have between $20,000 and $30,000 in debt, and 31% have over $30,000 in college debt. The College Board also reports that a 4-year college graduate can expect to make about $25,000 per year more than a non-graduate. It is natural for people of all generations to have debt in their young adult/early professional life stage; this isn’t unique to Millennials. What is unique is that their debt levels are high and multi-faceted. Our view is that college debt per se is not the core issue for Millennials, as most have manageable levels of college debt and college is a financially worthwhile investment for most of them. But college debt levels continue to grow, have a cascading effect, and lead to other types of debt. College debt is a problem, but mostly because it is a catalyst for other problems facing Millennials. So, this statement is true, but it is more nuanced than is commonly perceived.
  • Millennials are fickle and not loyal to brands. This myth has held sway since before the generation was named. I cannot tell you how many market research projects I have conducted that have shown that Millennials are more brand loyal than other generations. They express positive views of products online at a rate many times greater than the level of complaints they express. Of course, they have typical young person behaviors of variety-seeking and exploration, but they live in a crazy world of information, misinformation, and choice. Brand loyalty is a defense mechanism for them.
  • Millennials are fickle and not loyal to employers. On the employer side, surveys show that Millennials seek stability in employment. They want to be continuously challenged and stay on a learning curve. We feel that issues with employer loyalty for Millennials go both ways and employers have become less paternalistic and value young employees less than in past times. That is the primary driver of Millennials switching employers. There are studies that suggest that Millennials are staying with employers longer than Gen X employees did.
  • Millennials are entrepreneurial. In reality, we expect Millennials to be perhaps the least entrepreneurial of all the modern generations. (We wrote an entire blog post on this issue.)
  • Millennials seek constant praise. This is the generation that grew up with participation trophies and gold stars on everything (provided by their Boomer parents). However, praise is not really what Millennials seek. Feedback is. They come from a world of online reviews, constant educational testing, and close supervision. The result is Millennials have a constant need to know where they stand. This is not the same as praise.
  • Millennials were poorly parented. The generation that was poorly parented was Gen X. These were the latch-key kids who were lightly supervised. Millennials have been close with their parents from birth. At college, the “typical” Millennial has contact with a parent more than 10 times per week. Upon graduation, many of them choose to live with, or near, their parents even when there is no financial need to do so. Their family ties are strong.
  • Millennials are all the same. Whenever we look at segments, we run a risk of typecasting people and assuming all segment members are alike.  The “art” of segmentation in a market research study is to balance the variability between segments with the variability within them in a way that informs marketers. Millennials are diverse. They are the most racially diverse generation in American history, they span a wide age range, they cover a range of economic backgrounds, and are represented across the political spectrum. The result is while there is value in understanding Millennials as a segment, there is no typical Millennial.

When composing this post, I typed “Millennials are …” into a Google search box. The first thing that came up to complete my query was “Millennials are lazy entitled narcissists.” When I typed “Boomers are …” the first result was “Boomers are thriving.”  When I typed “Gen X is …” the first result was “Gen X is tired.” This alone should convince you that there are serious misconceptions of all generations.

Millennials are the most educated, most connected generation ever. I believe that history will show that Millennials effectively corrected for the excesses of Boomers and set the country and the world on a better course.

Should we get rid of statistical significance?

There has been recent debate among academics and statisticians surrounding the concept of statistical significance. Some high-profile medical studies have just narrowly missed meeting the traditional statistical significance cutoff of 0.05. This has resulted in potentially life changing drugs not being approved by regulators or pursued for further development by pharma companies. These cases have led to a much-needed review and re-education as to what statistical significance means and how it should be applied.

In a 2014 blog post (Is This Study Significant?) we discussed common misunderstandings market researchers have regarding statistical significance. The recent debate suggests this misunderstanding isn’t limited to market researchers – it appears that academics and regulators have the same difficulty.

Statistical significance is a simple concept. However, it seems that the human brain just isn’t wired well to understand probability and that lies at the root of the problem.

A measure is typically classified as statistically significant if its p-value is 0.05 or less. Strictly speaking, this means that if there were truly no underlying difference, a result this large would arise from chance or random fluctuation less than 5% of the time. Put loosely, two measures are deemed statistically different when chance alone would produce a gap this big less than 1 time in 20.
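
For the curious, here is roughly where such a p-value comes from in a typical survey comparison: the standard two-proportion z-test, sketched with made-up numbers:

```python
import math

# Did awareness differ between two survey waves? (Invented numbers.)
p1, n1 = 0.42, 500   # wave 1: 42% aware
p2, n2 = 0.36, 500   # wave 2: 36% aware

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")          # z = 1.95, p = 0.052
```

Note the punchline: this difference just misses the 0.05 cutoff, so under the conventional rule it would be declared “not significant,” which is exactly the kind of edge case this post is about.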

There are real problems with this approach. Foremost, the 5% probability cutoff is arbitrary. It traces back to early-20th-century statisticians (R.A. Fisher most notably) and somewhere along the line hardened into a standard among academics. It could just as easily have been 4% or 6% or some other number. This cutoff was chosen subjectively.

What are the chances that this 5% cutoff is optimal for all studies, regardless of the situation?

Regulators should look beyond statistical significance when they are reviewing a new medication. Let’s say a study was only significant at 6%, not quite meeting the 5% standard. That shouldn’t automatically disqualify a promising medication from consideration. Instead, regulators should look at the situation more holistically. What will the drug do? What are its side effects? How much pain does it alleviate? What is the risk of making mistakes in approval: in approving a drug that doesn’t work or in failing to approve a drug that does work? We could argue that the level of significance required in the study should depend on the answers to these questions and shouldn’t be the same in all cases.

The same is true in market research. Suppose you are researching a new product and the study is only significant at 10% and not the 5% that is standard. Whether you should greenlight the product for development depends on considerations beyond statistical significance. What is the market potential of the product? What is the cost of its development? What is the risk of failing to greenlight a winning idea or greenlighting a bad idea? Currently, too many product managers rely too much on a research project to give them answers when the study is just one of many inputs into these decisions.

There is another reason to rethink the concept of statistical significance in market research projects. Statistical significance assumes a random or a probability sample. We can’t stress this enough – there hasn’t been a market research study conducted in at least 20 years that can credibly claim to have used a true probability sample of respondents. Some (most notably ABS samples) make a valiant attempt to do so but they still violate the very basis for statistical significance.

Given that, why do research suppliers (Crux Research included) continue to do statistical testing on projects? Well, one reason is that clients have come to expect it. A more important reason is that statistical significance holds some meaning. On almost every study we need to draw a line and say that two data points are “different enough” to point out to clients and to draw conclusions from. Statistical significance is a useful tool for this. It just should no longer be viewed as a tool that lets us say precise things like “these two data points have a 95% chance of actually being different.”

We’d rather use a probability approach and report to clients the chance that two data points would be different if we had been lucky enough to use a random sample. That is a much more useful way to look at data, but it probably won’t be used much until colleges start teaching it and a new generation of researchers emerges.

The current debate over the usefulness of statistical significance is a healthy one to have. Hopefully, it will cause researchers of all types to think more deeply about how precise a study needs to be, and we’ll move away from the one-size-fits-all thinking that has been pervasive for decades.

Among college students, Bernie Sanders is the overwhelming choice for the Democratic nomination

Crux Research poll of college students shows Sanders at 23%, Biden at 16%, and all other candidates under 10%

ROCHESTER, NY – October 10, 2019 – Polling results released today by Crux Research show that if it were up to college students, Bernie Sanders would win the Democratic nomination for the US Presidency. Sanders is the favored candidate for the nomination among 23% of college students, compared to 16% for Joe Biden. Elizabeth Warren is favored by 8% of college students, followed by Andrew Yang at 7%.

  • Bernie Sanders: 23%
  • Joe Biden: 16%
  • Elizabeth Warren: 8%
  • Andrew Yang: 7%
  • Kamala Harris: 6%
  • Beto O’Rourke: 5%
  • Pete Buttigieg: 4%
  • Tom Steyer: 3%
  • Cory Booker: 3%
  • Michael Bennet: 2%
  • Tulsi Gabbard: 2%
  • Amy Klobuchar: 2%
  • Julian Castro: 1%
  • None of these: 5%
  • Unsure: 10%
  • I won’t vote: 4%

The poll also presented five head-to-head match-ups. Each match-up suggests that the Democratic candidate currently has a strong edge over President Trump, with Sanders having the largest edge.

  • Sanders versus Trump: 61% Sanders; 17% Trump; 12% Someone Else; 7% Not Sure; 3% would not vote
  • Warren versus Trump: 53% Warren; 18% Trump; 15% Someone Else; 9% Not Sure; 5% would not vote
  • Biden versus Trump: 51% Biden; 18% Trump; 19% Someone Else; 8% Not Sure; 4% would not vote
  • Harris versus Trump: 48% Harris; 18% Trump; 20% Someone Else; 10% Not Sure; 4% would not vote
  • Buttigieg versus Trump: 44% Buttigieg; 18% Trump; 22% Someone Else; 11% Not Sure; 5% would not vote

The 2020 election could very well be determined by voter turnout among young people, which has traditionally been much lower than among older age groups.

###

Methodology
This poll was conducted online between October 1 and October 8, 2019. The sample size was 555 US college students (aged 18 to 29). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, and region matched their actual proportions in the US college student population.
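
For readers unfamiliar with the mechanics, the simplest form of weighting is cell weighting: each respondent is weighted by the ratio of their group’s population share to its share of the sample. A sketch with invented shares, not this poll’s actual figures:

```python
# Cell weighting: weight = population share / sample share.
# Shares below are invented for illustration, not this poll's data.
population_share = {"male": 0.44, "female": 0.56}   # assumed student mix
sample_share     = {"male": 0.50, "female": 0.50}   # who actually responded

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)   # {'male': 0.88, 'female': 1.12}
```

Weighting across several characteristics at once (age, sex, race/ethnicity, region) is typically done with an iterative version of the same idea, often called raking.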

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on the probability samples that margin-of-error calculations require. If this study had used probability sampling, the margin of error would be +/-4%.
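
The +/-4% figure is the textbook calculation for a simple random sample of this size. A sketch of that arithmetic:

```python
import math

n, p, z = 555, 0.5, 1.96   # worst-case p = 0.5 at 95% confidence
moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe:.1%}")    # +/- 4.2%
```

This is a “what if” number: it describes the precision a true probability sample of 555 would have had, and it rounds to the +/-4% stated above.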

About Crux Research Inc.
Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.
Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior level researchers who set the standard for customer service from a survey research and polling consultant.

To learn more about Crux Research, visit http://www.cruxresearch.com.

How to be an intelligent consumer of political polls

As the days get shorter and the air gets cooler, we are on the edge of a cool, colorful season. We are not talking about autumn — instead, “polling season” is upon us! As the US Presidential race heats up, one thing we can count on is being inundated with polls and pundits spinning polling results.

Most market researchers are interested in polls. Political polling pre-dates the modern market research industry and most market research techniques used today have antecedents from the polling world. And, as we have stated in a previous post, polls can be as important as the election itself.

The polls themselves influence voting behavior which should place polling organizations in an ethical quandary. Our view is that polls, when properly done, are an important facet of modern democracy. Polls can inform our leaders as to what the electorate cares about and keep them accountable. This season, polls are determining which candidates get on the debate stage and are driving which issues candidates are discussing most prominently.

The sheer number of polls that we are about to see will be overwhelming. Some will be well conducted, some will be shams, and many will be in between. To help, we thought we’d write this post on how to be an intelligent consumer of polls and what to watch out for when reading them or hearing about them in the media.

  • First, and this is harder than it sounds, you have to put your own biases aside. Maybe you are a staunch conservative or liberal or maybe you are in the middle. Whatever your leaning, your political views are likely going to get in the way of you becoming a good reader of the polls. It is hard to not have a confirmation bias when viewing polls, where you tend to accept a polling result that confirms what you believe or hope will happen and question a result that doesn’t fit with your map of the world. I have found the best way to do this is to first try to view the poll from the other side. Say you are a conservative. Start by thinking about how you would view the poll if you leaned left instead.
  • Next, always, and I mean ALWAYS, discover who paid for the poll. If it is an entity that has a vested interest in the results, such as a campaign, a PAC, an industry group, or a lobbyist, go no further. Don’t even look at the poll. In fact, if the sponsor of the poll isn’t clearly identified, move on and spend your time elsewhere. Good polls always disclose who paid for them.
  • Don’t just look at who released the poll; review which organization executed it. For the most part, polls executed by major polling organizations (Gallup, Harris, ORC, Pew, etc.) will be worth reviewing, as will polls done by colleges with polling centers (Marist, Quinnipiac, Siena, etc.). But there are some excellent polling firms out there you likely have never heard of. When in doubt, remember that FiveThirtyEight gives pollsters grades based on their past performance. Despite what you may hear, polls done by major media organizations are sound. They have polling editors who understand all the nuances and have standards for how the polls are conducted. These organizations tend to partner with major polling organizations that likewise have the necessary methodological muscle.
  • Never, and I mean NEVER, trust a poll that comes from a campaign itself. At their best, campaigns will cherry pick results from well executed polls to make their candidate look better. At their worst, they will implement a biased poll intentionally. Why? Because much of the media, even established mainstream media, will cover these polls. (As an aside, if you are a researcher don’t trust the campaigns either. From my experience, you have about a 1 in 3 chance of being paid by a campaign for conducting their poll.)
  • Ignore any talk about the margin of error. The margin of error on a poll has become a meaningless statistic that is almost always misinterpreted by the media. A margin of error really only makes sense when a random or probability sample is being used, and without going into detail, there isn’t a single polling methodology in use today that can credibly claim to be using a probability sample. Regardless, being within the margin of error does not mean a race is too close to call anyway. It really just means the race is too close to call with 95% certainty (a worked example follows this list).
  • When reading stories on polls in the media, read beyond the headline. Remember, headlines are not written by reporters or pollsters. They are written by editors who, in many ways, have had their journalistic integrity questioned and have become “click hunters.” Their job is to get you to click on the story, not necessarily to accurately summarize the poll. Headlines are bound to be more sensational than the polling results merit.
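
Here is the worked example promised above. Take a 52-48 race with 1,000 respondents, a lead journalists would call “within the margin of error.” Taking the poll at face value as a random sample (which, as noted, it never truly is), the leader is still about 90% likely to actually be ahead:

```python
import math

# 52-48 with n = 1,000: the 4-point lead is "within the margin of
# error," yet the leader is probably really ahead. Face-value math.
p_hat, n = 0.52, 1_000
se = math.sqrt(p_hat * (1 - p_hat) / n)
prob_ahead = 0.5 * (1 + math.erf((p_hat - 0.50) / (se * math.sqrt(2))))
print(f"P(leader is really ahead) = {prob_ahead:.0%}")   # 90%
```

“Too close to call with 95% certainty” and “a coin flip” are very different statements, and headlines routinely conflate them.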

All is not lost though. There are plenty of good polls out there worth looking at. Here is the routine I use when I have a few minutes and want to discover what the polls are saying.

  • First, I start at the Polling Report. This is an independent site that compiles credible polls, and it has a long history. I remember reading it in the ’90s, when it was a monthly mailed newsletter. I start here because it is nothing more than raw poll results with no spin whatsoever. Their Twitter feed shows the most recently submitted polls.
  • I sometimes look at Real Clear Politics as well. They also curate polls, but they add analysis. I tend to stay on their poll page and ignore the analysis.
  • FiveThirtyEight doesn’t provide polling results in great detail, but usually draws longitudinal graphs on the probability of each candidate winning the nomination and the election. Their predictions have valid science behind them and the site is non-partisan. This is usually the first site I look at to discover how others are viewing the polls.
  • For fun, I take a peek at BetFair, a UK online betting site that allows wagers on elections. It takes a little training to understand what the current prices mean, but in essence this site tells you which candidates people are putting their actual money on. Prediction markets fascinate me; using this site to predict who might win is fun and geeky.
  • I will often check out Pew’s politics site. Pew tends to poll more on issues than “horse race” matchups on who is winning. Pew is perhaps the most highly respected source within the research field.
  • Finally, I go to the media. I tend to start with major media sites that seem to be somewhat neutral (the BBC, NPR, USA TODAY). After reviewing these sites, I then look at Fox News’s and MSNBC’s websites because it is interesting to see how their biases cause them to say very different things about the same polls. I stay away from the cable channels (CNN, Fox, MSNBC) just because I can’t stand hearing Boomers argue back and forth for hours on end.

This is, admittedly, way harder than it used to be. We used to just be able to let Peter Jennings or Walter Cronkite tell us what the polls said. Now, there is so much out there that to truly get an objective handle on what is going on takes serious work. I truly think that if you can become an intelligent, unbiased consumer of polls it will make you a better market researcher. Reading polls objectively takes a skill that applies well to data analysis and insight generation, which is what market research is all about.

Why Your Child Hates Sports

It surprises many to learn that on most measures of well-being today’s youth are the healthiest generation in history. Violent crime against and by young people is historically low. Teen pregnancy and birth rates continue to decline. Most measures of drug and alcohol use among teens and young adults show significant declines from a generation ago. Tobacco use is at a low point. In short, most problems that are a result of choices young people make have shown marked improvement since information on Millennials entered the data sets.

But an important measure of well-being has trended significantly worse during the Millennial and post-Millennial era: childhood obesity. According to the CDC, the prevalence of obesity has roughly tripled in the past 40 years. This is a frightful statistic.

This is not news, as many books, documentaries, and scholars have presented possible reasons for the spike in youth obesity. Beyond genetics, there are two likely determinants of obesity: 1) nutrition and 2) physical activity. Discussions of obesity’s “nutritional” causes are fraught with controversy. The food industry involves a lot of interests and money, nutritional science is rarely definitive, and seemingly everyone has their own opinions on what is healthy or unhealthy to eat. The nutritional roots of obesity (while likely very significant) are far from settled.

However, the “physical activity” side of the discussion tends to not be so heated. Nearly everyone agrees that today’s youth aren’t as physically active as they should be. There are likely many causes for this as well, but I believe the way youth sports operate merit some discussion.

When I was young, sports were every bit as important to my life as they became to my Millennial children. The difference is my sports experiences as a child were mostly kid-directed. Almost daily, we gathered in the largest yard in the neighborhood and played whichever sport was in season. It took up an hour or two on most days and sometimes the entire weekend. The biggest difference to today’s youth sports environment is there wasn’t an adult in sight. There were arguments, injuries, and conflicts, all of which got resolved without adult mediation.

Contrast this with today’s youth sports environment. Today’s kids specialize in one sport year-round and, from a very young age, join travel and elite leagues organized by adults. There is a general dearth of unstructured play time. Correlation and causation are never the same thing, but the rise in youth obesity has correlated closely with the rise in youth sports leagues organized by adults. Once adults started making the decisions about sports, our kids got fatter.

As a matter of personal perspective, I have two adult children and I can count six sports (baseball, soccer, ice hockey, track, skiing, cross country) that they played in an adult-organized fashion while growing up. We encountered situations where I had a child who was one of the least talented kids on a team, others where I had a child that was the star of the team, and many others where my child was somewhere in the middle. Between them, my kids were on teams that dominated their leagues and went undefeated, they were on some that lost almost every game, and they were on some teams that both won and lost. I coached for a while and my wife was “team mom” for most teams they were on.

Along the way I noticed that kids seemed to have the most fun when they won just a few more games than they lost. The kids didn’t seem to think it was as fun to dominate the competition and it was even less fun to be constantly on the losing end. 

I remember once, in the car after a hockey game, asking my son what he wanted to happen when he had the puck. He said, “I want to score.” I asked him, “Suppose you scored every single time you touched the puck. Would that be any fun?” At 10 years old, he didn’t have to think long to say that wouldn’t be very fun at all. But that is what most hockey dads are hoping will happen.

There seems to be a natural force kids apply to sports equality when adults get out of the way. Left to their own devices, the first things kids will do when choosing up teams is to try to get the teams to be evenly matched. Then, if the game starts to get too one-sided the next thing they will do is swap some players around to balance it out. This seems to be ingrained – nobody teaches kids to do this, but left on their own this is what they tend to do. They will also change the rules of the game to make it more fun.

I’ve encountered many parents who are delusional when it comes to the athletic capabilities of their children. I don’t think I have ever met a dad (including myself) who didn’t think their child was better than he/she really was. We want our kids to succeed of course. But we have to have the right definition of success. Are they having fun? Are they improving? Learning how to work as a team and treat competition with respect? Making friendships? That is what is going to matter down the line.

Far too many parents look to the future too much and don’t let their kids enjoy the moment. They will spend thousands and sacrifice nearly every weekend to send their kid to a camp that might get them noticed by college recruiters. The reality is, their child probably won’t get an athletic scholarship, and if he/she does it probably won’t come close to offsetting the money spent getting him/her to all of the camps and travel league games. Parents also don’t realize that most kids don’t find participating in college sports to be as fun as participating in them was in high school.

When I coached Little League baseball, I used to tell the kids to play catch with their mom or dad every day. I remember a mom once asking me why I was pushing them to do this so much. I told her that playing catch with a baseball in the backyard with your kid is one of the great moments in parenthood. It forces you to talk and listen to your kid. I told her that her son would remember that time with his mom or dad far more than playing on our team.

There are debates over rewards for participation in sports. In my day, you had to win to get the trophy and sometimes you didn’t even get that. Now, kids get trophies for showing up. That is not necessarily a bad thing. As Woody Allen says, “80% of success is showing up.” So, why not reward it?

My youngest son was fortunate to run cross country for a coach most would classify as a local legend. He has coached the team for 30+ years, has had many state championship teams and individuals, and is widely respected. My favorite memory of him was something I observed when he didn’t know I was looking, and it had nothing to do with championships or developing elite athletes. For the first race of a new season, he took the varsity teams to an out-of-state invitational. The girls team was quite good, and for his 7th (and slowest) runner he brought a freshman who was inexperienced and running her very first race. She didn’t do very well and came in about 120th place. I saw the coach come up to her right afterward with a beaming smile on his face. The first thing he said to her was “Was that FUN or what?” as he gave her a hug. She smiled, hugged him back, and ended up staying on the team for all four years of high school. Last weekend (8 years later) I saw her jogging in a local park. She didn’t excel at running in high school, but the coach sparked a lifelong interest in fitness in her.

To me, that signified not just what sports should all be about, but what adults’ role in sports should be all about. We have a real problem with childhood obesity. The cure is to make sports and physical activity more fun, and many times that means getting the adults out of the way.

Will adding a citizenship question to the Census harm the Market Research Industry?

The US Supreme Court appears likely to allow the Department of Commerce to reinstate a citizenship question on the 2020 Census. This is largely viewed as a political controversy at the moment. The inclusion of a citizenship question has proven to dampen response rates among non-citizens, who tend to be people of color. The result will be gains in representation for Republicans at the expense of Democrats (political district lines are redrawn every 10 years as a result of the Census). Federal funding will likely decrease for states with large immigrant populations.

It should be noted that the Census Bureau itself has come out against this change, arguing that it will result in an undercount of about 6.5 million people. Yet the administration has pressed forward and has not committed the funds the Census Bureau needs to fully research the implications. The concern isn’t just about non-response from non-citizens. In tests done by the Census Bureau, non-citizens are also more likely than citizens to answer this question inaccurately, meaning the resulting data will be unreliable.

Clearly this is a hot-button political issue. However, there is not much talk of how this change may affect research. Census data are used to calibrate most research studies in the US, including academic research, social surveys, and consumer market research. Changes to the Census may have profound effects on data quality.

The Census serves as a hidden backbone for most research studies whether researchers or clients realize it or not. Census information helps us make our data representative. In a business climate that is becoming more and more data-driven the implications of an inaccurate Census are potentially dire.

We should be primarily concerned that the Census is accurate, regardless of the political implications. Adding questions that temper response will not help accuracy. Errors in the Census have a tendency to become magnified in research. For example, in new product research it is common to project study data from about a thousand respondents to a universe of millions of potential consumers. Even a small error in the Census numbers can lead businesses to make erroneous investments. These errors create inefficiencies that reverberate throughout the economy. Political concerns aside, US businesses undoubtedly suffer from a flawed Census, and marketing becomes less efficient.
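
To see the magnification, consider a standard volumetric projection. All numbers here are invented for illustration:

```python
# Projecting a purchase-intent study to a consumer universe.
# Every number below is invented for illustration.
universe_true   = 50_000_000   # actual consumers in the segment
universe_census = 48_500_000   # what a 3% undercount would report
buy_rate = 0.04                # study estimate: 4% would buy

shortfall = (universe_true - universe_census) * buy_rate
print(f"buyers missing from the forecast: {shortfall:,.0f}")   # 60,000
```

A 3% error in the denominator quietly becomes 60,000 missing buyers in the forecast, and every downstream decision (production, media spend, distribution) inherits it.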

All is not lost, though. We can make a strong case that there are better, less costly ways to conduct the Census. Methodologists have long suggested that a sampling approach would be more accurate than the current attempt at enumeration. This may never happen for the decennial Census, because the enumeration requirement is written into the US Constitution and it might take an amendment to change it.

So, what will happen if this change is made? I suspect that market research firms will switch to using data that come from the Census’ survey programs, such as the American Community Survey (ACS). Researchers will rely less on the actual decennial census. In fact, many research firms already use the ACS rather than the decennial census (and the ACS currently contains the citizenship question).

The Census Bureau will find ways to correct for the resulting error, and to be honest, this may not be too difficult from a methodological standpoint. Business will adjust because there will be economic benefits to learning how to deal with a flawed Census, but this change will take some time for the research industry to address. Figuring things like this out is what good researchers do. While it is unfortunate that this change looks likely to be made, its implications are likely more consequential politically than they will be for the research field.

