Archive for the 'Uncategorized' Category

Happy Birthday to Us!


This month, Crux Research turns 11 years old. What started as something transitional for us as we looked for the next big thing quickly morphed into the next big thing itself.

Since our start, we have conducted 300+ projects for 65+ clients across a wide range of industries and causes. At this point, we feel we know a little bit about everything.

We’ve bucked a few trends along the way. We’ve never had a business plan and have never really looked past the next few months. We’ve resisted pressure to grow into a larger company. We don’t necessarily go where the opportunities are, preferring instead to work on projects and with clients that interest us. We’ve also eschewed the normal business week, working nights and weekends as the work demands.

Our good fortune in attracting incredible clients has been surpassed only by our good fortune in attracting staff and helpers. A special thanks to our staff members and our “bench” who have helped our team throughout the years.

Onward!  Happy Holidays to all. May your response rates be high and all of your confidence intervals be +/-5%!

An Epic Fail: How Can Pollsters Get It So Wrong?


Perhaps the only bigger loser than Hillary Clinton in yesterday’s election was the polling industry itself. Those of us who conduct surveys for a living should be asking: if we can’t even get something as simple as a Presidential election right, why should our clients have confidence in any data we provide?

First, a recap of how poorly the polls and pundits performed:

  • FiveThirtyEight’s model had Clinton’s likelihood of winning at 72%.
  • Betfair (a prediction market) had Clinton trading at an 83% chance of winning.
  • A quick scan of Real Clear Politics on Monday night showed 25 final national polls. 22 of these 25 polls had Clinton as the winner, and the most reputable ones almost all had her winning the popular vote by 3 to 5 points. (It should be noted that Clinton seems likely to win the popular vote.)

There will be claims that FiveThirtyEight “didn’t say her chances were 100%” or that Betfair had Trump with a “17% chance of winning.” Their predictions were never meant to be construed as certain. No prediction is ever 100% certain, but this is a case where almost all forecasters got it wrong. That is pretty close to the definition of bias: something systematic must have affected nearly all of the predictions.

The pollsters will claim that the outcome was within the margin of error. But the “margin of error” defense is statistically suspect: margins of error apply only to random (probability) samples, and none of these polls can claim to have one. FiveThirtyEight also had Clinton with 302 electoral votes, well beyond any reasonable error rate.

Regardless, the end result will probably land barely within the margin of error that most of these polls erroneously use anyway. That is not a free pass for the pollsters. All it means is that rather than their estimates being accurate 95% of the time, they were predicted to be accurate a bit less often: between 80% and 90% of the time for most of these polls, by my calculations.
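
To make that arithmetic concrete, here is a minimal sketch, in Python, of how the margin of error for a proportion maps to an implied confidence level. The poll size (n = 1,000) and the size of the miss are hypothetical illustrations, not figures from the polls above, and the formula assumes a simple random sample with the usual normal approximation, which, as noted, these polls cannot really claim:

```python
# Margin-of-error arithmetic for a polled proportion, under the
# standard (and here questionable) simple-random-sample assumption.
from math import sqrt, erf

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the conventional 95% interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

def implied_confidence(miss: float, n: int, p: float = 0.5) -> float:
    """Confidence level at which an observed miss (e.g. 0.02 = 2 points)
    sits exactly at the edge of the interval for a poll of size n."""
    z = miss / sqrt(p * (1 - p) / n)
    return erf(z / sqrt(2))  # two-sided normal probability P(|Z| <= z)

print(f"+/-{margin_of_error(1000):.1%}")         # ~ +/-3.1 points at 95%
print(f"{implied_confidence(0.023, 1000):.0%}")  # a 2.3-point miss ~ 85%
```

In this hypothetical, a miss of a little over two points on a 1,000-person poll sits at roughly the 85% confidence level, consistent with the 80% to 90% range described above.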

Lightning can strike for sure. But this is a case of it hitting the same tree numerous times.

So, what happened? I am sure this will be the subject of many post mortems by the media and conferences from the research industry itself, but let me provide an initial perspective.

First, it seems unlikely that the failure had anything to do with the questions themselves. Most pollsters use very similar questions to gather voter preferences, and many of these questions have been in use for a long time. Asking whom you will vote for is pretty simple. The question itself is an unlikely culprit.

I think the mistakes the pollsters made come down to some fairly basic things.

  1. Non-response bias. This has to be a major reason why the polls were wrong. In short, non-response bias means that the sample of people who took the time to answer the poll did not adequately represent the people who actually voted. Clearly this must have occurred, and there are many reasons it could happen. Poor response rates are likely a key one, but poor selection of sampling frames, researchers getting too aggressive with weighting and balancing (a weighting sketch follows this list), and a simple inability to reach some key types of voters all play into it.
  2. Social desirability bias. This tends to be more present in telephone and in-person polls that involve an interviewer, but it happens in online polls as well. It occurs when respondents tell you what you want to hear or what they think is socially acceptable. A good example: if you conduct a telephone poll and an online poll at the same time, more people will say they believe in God in the telephone poll. People tend to answer how they think they are supposed to, especially when responding to an interviewer. In this case, set non-response bias aside and suppose pollsters had reached every single voter who actually showed up. If “Trump” was a socially unacceptable answer in the poll, he would do better in the actual election than in the poll. There is evidence this could have happened, as polls with live interviewers showed a wider Clinton-to-Trump gap than those that were self-administered.
  3. Third parties. It looks like Gary Johnson’s support will end up at about half of what the pollsters predicted. If this erosion benefited Trump, it could very well have made a difference. Those who switched their vote away from Johnson in the last few weeks may have been more likely to switch to Trump than to Clinton.
  4. Herding. This season had more polls than ever before, and they often had widely divergent results. But if you look closely, you will see that polling results started to converge as the election neared. The reason could be that when a pollster had a poll that looked like an outlier, they took a closer look at it, toyed with how the sample was weighted, or buried the poll altogether. It is possible that there were accurate polls out there that pointed to a Trump victory, but the pollsters didn’t release them.
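
To make point 1 concrete, here is the weighting sketch referenced above: a minimal illustration of post-stratification weighting, a standard corrective for non-response bias. Every number is invented; real pollsters weight on many more dimensions (age, education, region, past vote). Note how the adjustment flips the apparent leader, which is also why overly aggressive weighting is itself a risk:

```python
# Post-stratification weighting with invented data: college graduates
# are overrepresented among respondents, a common non-response pattern.
from collections import Counter

# Hypothetical population shares, e.g. from census data.
population_share = {"college": 0.35, "no_college": 0.65}

# Hypothetical respondents: (education group, preferred candidate).
respondents = ([("college", "A")] * 320 + [("college", "B")] * 230 +
               [("no_college", "A")] * 200 + [("no_college", "B")] * 250)

n = len(respondents)
sample_share = Counter(group for group, _ in respondents)

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / (count / n)
           for g, count in sample_share.items()}

weighted = Counter()
for group, candidate in respondents:
    weighted[candidate] += weights[group]

for candidate in sorted(weighted):
    print(candidate, f"{weighted[candidate] / n:.1%}")
```

Unweighted, candidate A leads 52% to 48%; after weighting the underrepresented non-college group up, B leads roughly 51% to 49%. The same machinery that corrects a skewed sample can also manufacture a lead when the analyst’s assumptions about the electorate are wrong.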

I’d also submit that the reasons for the polling failure are likely not specific to the US and this election. We can’t forget that pollsters also missed the recent Brexit vote, the Mexican Presidential election, and David Cameron’s original election in the UK.

So, what should the pollsters do? Well, they owe it to the industry to convene, share data, and attempt to figure it out. That will certainly be done via the trade organizations pollsters belong to, but I have been to a few of these events and they devolve pretty quickly into posturing, defensiveness, and salesmanship. Academics will take a look, but they move so slowly that the implications they draw will likely be outdated by the time they are published.  This doesn’t seem to be an industry that is poised to fix itself.

At minimum, I’d like to see the polling organizations re-contact all respondents from their final polls. That would shed a lot of light on any issues relating to social desirability or other subtle biases.

This is not the first time pollsters have gotten it wrong. President Hillary Clinton will be remembered in history along with President Thomas Dewey and President Alf Landon. But this time seems different. There is so much information out there that separating the signal from the noise is just plain difficult, and there are lessons in that for Big Data analyses and research departments everywhere.

We are left with an election result that half the country is ecstatic about and half is worried about. However, everyone in the research industry should be deeply concerned. I am hopeful that this will cause more market research clients to ask questions about data quality and potential errors and biases, and to value quality more. Those conversations will go a long way toward putting a great industry back on the right path.

Cause Change. Be Changed.

Congratulations to Causewave Community Partners on their successful annual celebration last week.  It was a sellout!

This video does a great job of capturing what the organization is all about and the value of volunteering. It also includes a cameo from Lisa on our staff!

 

Polls Can Be as Influential as the Election

Many of us on the supplier side of the market research industry had our original interest in this field kindled by political polling. The market research industry was largely established as a by-product of polling: it didn’t take the founding fathers of election polling long to realize that, during the massive expansion of the US economy in the post-WWII era, there was money to be made polling for companies and brands.

In some ways, polling has become more important than the election itself. In 2000, Elizabeth Dole was touted by many as a potential Republican candidate. While many knew her only as the wife of Bob Dole, she seemed to have a lot going for her. She had been Secretary of Labor, had headed the Red Cross, was well-spoken, and seemed poised to become perhaps the first woman with a realistic shot at the White House. She was seen as a viable candidate by most pundits.

But, polls conducted before any primaries had been contested indicated that her support level was low, largely because she was unknown. As a consequence of a poor showing in early polls, she stumbled in fundraising and pulled out of the race without a voter ever having a chance to vote for or against her. Had the initial polls never been taken, she likely would have had enough fundraising support to enter the initial primaries. As she was an excellent communicator, who knows where it might have gone from there.

This made me wonder about the value of early polling. It certainly seems to limit the viability of lesser-known candidates. I doubt Bill Clinton would have had the chance to emerge as a contender in 1992 if the polling environment then had been what it is today.

As we turn to the current race, on the Republican side there soon could be as many as a dozen declared candidates, and some are predicting up to 20. Fundraising success will become the first screen to winnow the field, and early poll results will directly affect candidates’ ability to raise funds. I believe this is why Jeb Bush has been late to declare his candidacy. He has had an incredible level of success raising money, and once he declares, the pollsters will start assessing his viability. He is best off continuing to fundraise without becoming a declared candidate, as declaring carries real risk for him.

Further, both Fox and CNN have recently announced that they will only include the top 10 candidates in the first Republican debates. How will they winnow the field? By looking at polling data.

Should we worry that the polling industry has too much say in who gets support? I once asked this question of a well-respected pollster, and he said the issue is really how well the polls are done. If we do our jobs well, we keep politicians abreast of popular opinion and are thus a valuable contributor to democracy. There is nothing wrong with accurately measuring the truth and communicating it.

Of course, when polls are done poorly, the opposite is true. The media has an insatiable appetite for polls. As a consequence, there are many poorly-designed polls released and reported upon. There are even more polls that are really just shilling for the parties and Super PACs in disguise. The media has been either unable or unwilling to differentiate the credible from the bad, and with a continuous news cycle we’ll see more poor quality polls reported upon.

It doesn’t help that even the major pollsters struggle to get it right. In the recent UK elections, pretty much every pollster missed badly. Even FiveThirtyEight, Nate Silver’s site, which tends to be highly critical of polling and has become a self-appointed arbiter of good and bad polls, had to issue a mea culpa when its own predictions rang hollow.

As long as the media is running 24/7 and starved for content, the polls will continue.  The challenge is to sort out the good from the bad and the signal from the noise.  It isn’t easy but it is important – literally who gets elected as the next US President can depend upon it.

 

New ASHA Survey of U.S. Parents: Significant Percentages Report That Very Young Children Are Using Technology

We are proud to have worked with ASHA on this poll. It is truly surprising how much technology young children have access to, and parents need to monitor their children’s exposure to noise, as it can affect hearing and speech development at critical ages.

Click here for the full release.

Does Class Size Matter?

Reducing class sizes is a commonly discussed goal in education. Yet there may be no more consequential educational issue where the available academic research is a poorer match to the anecdotal evidence than class size.

Ask any teacher, administrator, or parent you know what they would prefer, and almost all will say that smaller class sizes are more conducive to learning than larger ones. Peruse any higher education website and you will find that most trumpet their low student-to-faculty ratio. And, intuitively, it just makes sense that students will learn better if there are fewer of them in a class.

But, there is actually very little academic evidence that class size matters. Our review of the literature indicates that there is some evidence (gathered long ago) that smaller class sizes have an effect at the youngest grade levels, but little or inconclusive evidence that smaller class sizes matter among older students.

Yet a debate rages regarding class sizes. Teacher unions are understandably in favor of lowering class sizes, as this makes the job of the teacher easier and increases the number of teachers who need to be hired. Administrators also seem to favor lowering class sizes, but are wary of doing so without evidence that it will improve academic achievement. Politicians favor it as well, as reducing class sizes certainly sounds like an admirable goal to pursue.

What is undebatable is that there are significant costs involved in decreasing class sizes: building more classrooms, maintaining larger facilities, and hiring more teachers. The potential costs are large, which is why it is surprising the issue doesn’t have more academic study and thought behind it.

We feel the issue has been oversimplified. Like most things we study, there are likely decreasing returns as class size is reduced; in other words, there is likely an ideal class size. A class can probably be too small, as tiny classes don’t allow for student-to-student learning, collaboration, or small group projects. As class size increases, it likely passes through an ideal point, where the learning efficiency of the classroom is maximized. And, invariably, a class can grow too large, where supervision of students is compromised.

It is possible that the academic studies that are available have not investigated a wide enough range of class sizes and therefore have not been able to spot this ideal point. Since no school district could (by law) change its average class size by more than a few students, academic researchers are likely concentrating on class size differences that are not large enough to show much of an effect.
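
As a toy illustration of this point, here is a minimal sketch, with entirely invented data, of fitting a concave curve to achievement versus class size and locating its peak, the hypothesized ideal point:

```python
# Invented data: achievement rises as tiny classes grow, peaks in the
# middle, and falls off as classes become too large to supervise.
import numpy as np

class_size  = np.array([4, 8, 12, 16, 20, 24, 28, 32, 36])
achievement = np.array([70, 78, 84, 88, 89, 87, 83, 77, 70])

# Fit achievement = a*size^2 + b*size + c; a < 0 means a concave curve
# with a single peak (the ideal class size).
a, b, c = np.polyfit(class_size, achievement, deg=2)
ideal = -b / (2 * a)  # vertex of the fitted parabola
print(f"estimated ideal class size: {ideal:.1f}")
```

With this made-up curve, a study that only observes class sizes between, say, 20 and 26 students would see a nearly flat line and conclude, as much of the literature does, that class size barely matters.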

However, in the debate over class sizes, there is an important issue we have never seen discussed. It is that the ideal class size is likely not the same for all situations. Even within a school, the ideal class size likely varies by the subject taught, the academic capabilities of the students, the grade level, and importantly, the particular strengths and weaknesses of the teacher.

For example, why do we presume that the same class size is needed for English as is needed for Math, or Foreign Languages? Why do we presume that 7th graders need the same class sizes as 12th graders? Or that a first-year teacher will be most efficient teaching the same class size as a proven teacher with 20 years of experience? Or that every student benefits most from the same class sizes?

We ignore the variability that is inherent in the process, and we don’t give our school managers (Principals) much leeway in how they can manage their resources to take this variability into account.

We’d like to see Principals given a lot more latitude over how to best utilize their staff. In any organization whose success depends on the capabilities and productivity of its workers, the main tasks of a manager are to understand his/her staff’s capabilities and to deploy those human resources properly.

Currently, Principals are given almost no latitude regarding class sizes. The Principal is forced to take a cookie-cutter approach, with all teachers being assigned virtually the same number of students. A teacher is largely given the same responsibility on his/her first day on the job as on his/her last, regardless of subject, experience level, talents, teaching style, or grade level. The teaching staff is the most important asset a Principal has for achieving academic excellence, and it is time to give Principals more responsibility in this area.

Class size absolutely matters. Just not in the same way and same level for every school, teacher, and student.

Whose Job is it to Close the Gap?


There have been many studies released, from very credible sources, that indicate that a college education clearly pays back. A May 2014 New York Times article indicates that the pay gap between college graduates and non-graduates is widening, even as more students attend college. The College Board has indicated that both individuals and society as a whole benefit from increased levels of education. Pew Research has shown that although the pay gap is increasing, Americans are beginning to question the value of higher education and its affordability.

Today’s colleges face many challenges in helping prepare students for the workforce. As more students attend college and costs continue to rise, higher education institutions will be under increasing pressure to prepare students for the workforce. Gaps in workforce preparedness contribute negatively to employers’ views of graduates, the reputation of colleges, and the well-being of young adults. There is a sense that college curricula are struggling to keep pace with the changing needs of the workforce.

Crux Research recently conducted a study for Chegg which focused on workforce preparedness. We surveyed large samples of students, college faculty, and employers to explore beliefs around accountability and ownership in creating a hirable, attractive, ready-to-work population from U.S. colleges and universities.

This study sheds new light on issues of workforce preparedness, the unique perspectives of faculty and employers, and the need for a new approach to the way faculty and employers work together.

A summary of results of the project can be found at Chegg’s website here.