How to succeed in business without planning

Crux Research is now entering its 16th year. This places us in the 1% of start-ups – as it is well-documented that most new businesses fail in the first few years. Few are lucky enough to make it to 16.

One of the reasons experts tend to say new companies fail is a lack of business planning. However, new business successes are not a consequence of an ability to plan. Success is a function of a couple of basic things:  having a product people want to buy and then delivering it really well. It doesn’t get much more basic than that, but too many times we get lost in our ambitions and ideas and aren’t forthright with ourselves on what we are good at and not so good at.

Business plans are neither necessary nor sufficient to business success. Small businesses that focus on them remind me a bit of when a company I was with put the entire staff through training on personal organization and time management. The people that embraced the training were mostly hyper-organized to begin with and the people who might have most benefited from the training were resistant. Small businesses that rely heavily on business planning probably are using it more as reflection of their natural organizational tendencies than as a true guide to running a business. There are, of course, exceptions.

Entrepreneurs that use business plans seem to fall into two categories. First, they may feel constrained by them. If they have investors, they feel a pressure to stay within the confines of the plan even when the market starts telling them to pivot to a new strategy. They start to think adherence to the plan is, in itself, a definition of success. In short, they tend to be “process” people as opposed to “results” people.  Entrepreneurs that are “results” people are the successful ones. If you are a “process” person I’d suggest that you may be a better fit for a larger company.

Second, and this is more common, many entrepreneurs spend a lot of time in business planning before launching their company and then promptly file the plan away and never look at it again.

That is how it went for me in the corporate world. We would craft annual plans each spring and then never look at them again until late in the year when it was time to create the plan for the following year. It was sort of like giving blueprints to a contractor and then having them build whatever they want to anyway but then giving them a hard time for not following the plan… even if they had built a masterpiece.

We’d also create five-year plans every year. That never made a lot of sense to me, as it was an acknowledgement that we couldn’t really plan more than a year out.

I am not against business planning in concept. I agree with Winston Churchill who said “plans are of little importance, but planning is essential.” The true value of most strategic plans is found in the thought process that is gone through to establish them and not in the resulting plan.

I’d suggest that anyone who starts a company list out the top reasons the company might fail and then think through how to prevent those things from happening. If the reasons for your potential failure are mostly under your control, you’ll be okay. We did this. We identified that if we were going to fail, it was most likely going to result from an inability to generate sales, as this wasn’t where our interest or core competence lay. So, we commissioned a company to develop leads for us, had a plan for reaching out to past clients, and planned to hire a salesperson (something that never happened). The point is, while we didn’t really have a business plan, we did think through how to head off the problem most likely to sink us.

Before launching Crux Research, we did lay out some key opportunities and threats and thought hard about how to address what we saw as the barriers to our success. Late each fall we think about goals for the coming year, how we can improve what we are doing, and where we can best spend our time in marketing and sales. That sort of thinking is important, and it is good to formalize it. But we’ve never had a formal business plan. Maybe we have succeeded in spite of that, but I tend to think we have succeeded because of it.

Critics would probably say that the issue here is that business plans just need to be better and more effective. I don’t think that is the case. The very concept of business planning for a small business can be misguided. It is important to be disciplined as an entrepreneur/small business owner. Focus is paramount to success. But I don’t think it is all that important to be “planned.” You miss too many opportunities that way.

Again, our company has never had a financial plan. I attribute much of our success to that. The company has gone in directions we would never have been able to predict in advance. We try to be opportunistic and open-minded and to stay within the confines of what we know how to do. We prefer having a high level of self-awareness of what we are good at and not so good at and to create and respond to opportunities keeping that in mind. As Mike Tyson once said, “everyone has a plan until they get punched in the mouth.” I wonder how many small businesses have plans that quickly got punched.

This has led us to many crossroads where we have to choose whether to expand our company or to refuse work. We’ve almost always come down on the side of staying small and being selective in the work we take on.

In the future we are likely to become even more selective in who we take on as clients. We’ve always said that we want a few core things in a client. Projects have to be financially viable, but more importantly they have to be interesting and the kind of situation that will benefit from our experience and insight. We have developed a client base of really great researchers who double as really great people. It sums up to a business that is a lot of fun to be in.

I would never discourage an entrepreneur from creating a business plan, but I’d advise them to think hard about why they are doing so. You’ll have no choice if you are seeking funding, as no investor is going to give money to a business that doesn’t have a plan. If you are in a low margin business, you have to have a tight understanding of your cash flow and planning that out in advance is important. I’d suggest you keep any plans broad and not detailed, conduct an honest assessment of what you are not so good at as well as good at, and prepare to respond to opportunities you can’t possibly foresee. Don’t judge your success by your adherence to the plan and if your funders appear too insistent on keeping to the plan, find new funders.

Wow! Market research presentations have changed.

I recently led an end-of-project presentation over Zoom. During it, I couldn’t help but think how market research presentations have changed over the years. There was no single event or time period that changed the nature of research presentations, but if you teleported a researcher from the 1990’s to a modern presentation they would feel a bit uncomfortable.

I have been in hundreds of market research presentations — some led by me, some led by others, and I’ve racked up quite a few air miles getting to them. In many ways, today’s presentations are more effective than those in the past. In some other ways, quality has been lost. Below is a summary of some key differences.

Today’s presentations are:

  • Far more likely to be conducted remotely over video or audio. COVID-19 disruptions acted as an accelerant to this trend, which was happening well before 2020. This has made presentations easier to schedule because not everyone has to be available in the office. It allows clients and suppliers to take part from their homes, hotels, and even their vehicles. It seems clear that a lasting effect of the pandemic will be that research presentations will be conducted via Zoom by default. There are pluses and minuses to this. For the first time in 30 years, I find myself working with clients whom I have never met in person.
  • Much more likely to be bringing in data and perspectives from outside the immediate project. Research projects and presentations tended to be standalone events in the past, concentrating solely on the area of inquiry the study addressed. Today’s presentations are often integrated into a wider reaching strategic discussion that goes beyond the questions the research addresses.
  • More interactive. In yesteryear, the presentation typically consisted of the supplier running through the project results and implications for 45 minutes, followed by a period of Q&A. It was rare to be interrupted before the Q&A portion of the meeting. Today’s presentations are often not presentations at all. As the supplier, we often feel more like emcees leading a discussion than experts presenting findings.
  • More inclusive of upper management. We used to present almost exclusively to researchers and mid-level marketers. Now, we tend to see a lot more marketing VPs and CMOs, strategy officers, and even the CEO on occasion. It used to be rare that our reports would make it to the CEO’s desk. Now, I’d say most of the time they do. This is indicative of the increasing role data and research play in business today.
  • Far more likely to integrate the client’s perspective. In the past, internal research staff rarely tried to change or influence our reports and presentations, preferring to keep some distance and then separately add their perspective. Clients have become much more active in reviewing and revising supplier reports and presentations.

Presentations from the 1990’s were:

  • More thorough in presenting the findings of the study. They told a richer, more nuanced story. They focused a lot more on storytelling and building a case for the recommendations. Today’s presentations often feel like a race to get to the conclusions before you get interrupted.
  • More confrontational. Being challenged on the study method, data quality, and interpretations was more commonplace a few decades ago. I felt a much greater need to prepare and rehearse than I do today because I am not as in control of the flow of the meetings as I was previously. In the past I felt like I had to know the data in great detail, and it was difficult for me to present a project if I wasn’t the lead analyst on it. Today, that is much less of a concern.
  • More strategic. This refers more to the content of the studies than the presentation itself. Since far fewer studies were being done, the ones that were tended to be informing high consequence decisions. While plenty of strategic studies are still conducted, there are so many studies being done today that many of them are informing smaller, low-consequence, tactical decisions.
  • More relaxed. Timelines were more relaxed and as a result research projects were planned well in advance and the projects fed into a wider strategic process. That still happens, but a lot of today’s projects are completed quickly (often too quickly) because information is needed to make a decision that wasn’t even on the radar a few weeks prior.
  • More of a “show.” In the past we rehearsed more, were concerned about the graphical design of the slides, and worried about the layout of the room. Today, there is rarely time for that.
  • More social. Traveling in for a presentation meant spending time beforehand with clients, touring offices, and almost always going to lunch or dinner afterward. Even before the COVID/Zoom era, more recent presentations tended to be “in and out” affairs – where suppliers greet the clients, give a presentation, and leave. While there are many pluses to this, some (I’d actually say most) of the best researchers I know are introverts who were never comfortable with this forced socialization. Those types of people are going to thrive in the new presentation environment.

Client-side researchers were much more planned out in the past. Annually, they would go through a planning phase where all the projects for the year would be budgeted and placed in a timeline. The research department would then execute against that plan. More recently, our clients seem like they don’t really know what projects they will be working on in a few weeks’ time – because many of today’s projects take just days from conception to execution.

I have also noticed that while clients are commissioning more projects they seem to be using fewer suppliers than in the past. I think this is because studies are being done so quickly they don’t have time to manage more than a few supplier relationships. Bids aren’t as competitive and are more likely to be sole-sourced.

Clients are thus developing closer professional relationships with their suppliers. Suppliers are closer partners with clients than ever before, but with this comes a caution. It becomes easy to lose third-party objectivity when we get too close to the people and issues at hand and when clients have too heavy a hand in the report process. In this sense, I prefer the old days, where we provided a perspective and our clients would then add a POV. Now, we often meld the two into one presentation, and at times we lose the value that comes from a back-and-forth disagreement over what the findings mean to a business.

If I teleported my 1990’s self to today I would be amazed at how quickly projects go from conception to final presentation. Literally, this happens in about one-third the time it used to. There are many downsides to going too fast, but clients rarely focus on them; they seem to prefer getting something 90% right tomorrow to waiting for a perfect project.

There is even a new category of market research called “agile research” that seeks to provide real-time data. I am sure it is a category that will grow, but those employing it need to keep in mind that providing data faster than managers can act on it can actually be a disservice to the client. It is an irony of our field that more data and continuous data can actually slow down decision making.  

Today’s presentations are less stressful, more inclusive, and more strategic. The downside is there are probably too many of them – clients are conducting too many projects on minor issues, they don’t always learn thoroughly from one study before moving onto the next, and researchers are sometimes being rewarded more for getting things done than for providing insight into the business.

Oops, the polls did it again

Many people had trouble sleeping last night wondering if their candidate was going to be President. I couldn’t sleep because as the night wore on it was becoming clear that this wasn’t going to be a good night for the polls.

Four years ago on the day after the election I wrote about the “epic fail” of the 2016 polls. I couldn’t sleep last night because I realized I was going to have to write another post about another polling failure. While the final vote totals may not be in for some time, it is clear that the 2020 polls are going to be off on the national vote even more than the 2016 polls were.

Yesterday, on election day I received an email from a fellow market researcher and business owner. We are involved in a project together and he was lamenting how poor the data quality has been in his studies recently and was wondering if we were having the same problems.

In 2014 we wrote a blog post that cautioned our clients that we were detecting poor quality interviews that needed to be discarded about 10% of the time. We were having to throw away about 1 in 10 of the interviews we collected.

Six years later that percentage has moved to between 33% and 45%, and we tend to be conservative in the interviews we toss. It is fair to say that for most market research studies today, between a third and a half of the interviews being collected are, for lack of a better term, junk.

It has gotten so bad that new firms have sprung up that serve as a go-between from sample providers and online questionnaires in order to protect against junk interviews. They protect against bots, survey farms, duplicate interviews, etc. Just the fact that these firms and terms like “survey farms” exist should give researchers pause regarding data quality.

When I started in market research in the late 80s/early 90’s we had a spreadsheet program that was used to help us cost out projects. One parameter in this spreadsheet was “refusal rate” – the percent of respondents who would outright refuse to take part in a study. While the refusal rate varied by study, the beginning assumption in this program was 40%, meaning that on average we expected 60% of the time respondents would cooperate. 

According to Pew and AAPOR in 2018 the cooperation rate for telephone surveys was 6% and falling rapidly.

Cooperation rates in online surveys are much harder to calculate in a standardized way, but most estimates I have seen and my own experience suggest that typical cooperation rates are about 5%. That means for a 1,000-respondent study, at least 20,000 emails are sent, which is about four times the population of the town I live in.
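
To make the arithmetic concrete, here is a minimal sketch. The 5% cooperation rate comes from the paragraph above; the 40% junk rate is an illustrative assumption drawn from the range cited earlier, not a figure from any specific study.

```python
# Back-of-envelope sketch: invitations needed for a 1,000-complete online study.
# The 5% cooperation rate and 40% junk rate are illustrative assumptions.
target_completes = 1_000
cooperation_rate = 0.05
junk_rate = 0.40                 # share of completes discarded as poor quality

invitations = target_completes / cooperation_rate
print(f"{invitations:,.0f} invitations just to field the study")   # 20,000

# To net 1,000 usable interviews after discarding junk, you need even more:
raw_completes = target_completes / (1 - junk_rate)
print(f"{raw_completes / cooperation_rate:,.0f} invitations for 1,000 usable interviews")  # ~33,333
```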

This is all background to try to explain why the 2020 polls appear to be headed to a historic failure. Election polls are the public face of the market research industry. Relative to most research projects, they are very simple. The problems pollsters have faced in the last few cycles are emblematic of something those working in research know but rarely like to discuss: the quality of data collected for research and polls has been declining, and that should alarm researchers.

I could go on about the causes of this. We’ve tortured our respondents for a long time. Despite claims to the contrary, we haven’t been able to generate anything close to a probability sample in years. Our methodologists have gotten cocky and feel like they can weight any sampling anomalies away. Clients are forcing us to conduct projects on timelines that make it impossible to guard against poor quality data. We focus on sampling error and ignore more consequential errors. The panels we use have become inbred and gather the same respondents across sources. Suppliers are happy to cash the check and move on to the next project.

This is the research conundrum of our times: in a world where we collect more data on people’s behavior and attitudes than ever before, the quality of the insights we glean from these data is in decline.

Post-2016, the polling industry brain trust rationalized and claimed that the polls actually did a good job, convened some conferences to discuss the polls, and made modest methodological changes. Almost all of these changes related to sampling and weighting. But, as it appears that the 2020 polling miss is going to be way beyond what can be explained by sampling (last night I remarked to my wife that “I bet the p-value of this being due to sampling is about 1 in 1,000”), I feel that pollsters have addressed the wrong problem.

None of the changes pollsters made addressed the long-term problems researchers face with data quality. When you have a response rate of 5% and up to half of those are interviews you need to throw away, errors that can arise are orders of magnitude greater than the errors that are generated by sampling and weighting mistakes.
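
A minimal worked example, with entirely hypothetical numbers, of why nonresponse can dwarf sampling error:

```python
# Illustrative only: nonresponse bias vs. sampling error.
# Hypothetical assumption: respondents and nonrespondents differ by 10 points
# on the measure being polled, and only 5% of those invited respond.
response_rate = 0.05
support_among_respondents = 0.52      # hypothetical
support_among_nonrespondents = 0.42   # hypothetical

true_population_support = (response_rate * support_among_respondents
                           + (1 - response_rate) * support_among_nonrespondents)
poll_estimate = support_among_respondents   # a poll only sees the respondents
bias = poll_estimate - true_population_support
print(f"nonresponse bias: {100 * bias:.1f} points")   # 9.5 points
# Compare with the roughly +/- 3 point sampling margin of error on n = 1,000.
```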

I don’t want to sound like I have the answers. Just a few days ago I posted that I thought that, on balance, there were more reasons to conclude that the polls would do a good job this time than to conclude that they would fail. When I look through my list of potential reasons the polls might fail, nothing leaps out at me as an obvious cause, so perhaps the problem is multi-faceted.

What I do know is the market research industry has not done enough to address data quality issues. And every four years the polls seem to bring that into full view.

Will the polls be right this time?

The 2016 election was damaging to the market research industry. The popular perception has been that in 2016 the pollsters missed the mark and miscalled the winner. In reality, the 2016 polls were largely predictive of the national popular vote. But 2016 was widely seen by non-researchers as disastrous. Pollsters and market researchers have a lot riding on the perceived accuracy of the 2020 polls.

The 2016 polls did a good job of predicting the national vote total, but in a large majority of cases final national polls were off in the direction of overpredicting the vote for Clinton and underpredicting the vote for Trump. That is pretty much a textbook definition of bias. Before the books are closed on the 2016 pollsters’ performance, it is important to note that the 2012 polls were off even further, and mostly in the direction of overpredicting the vote for Romney and underpredicting the vote for Obama. The “bias,” although small, has swung back and forth between parties.

Election Day 2020 is in a few days and we may not know the final results for a while. It won’t be possible to truly know how the polls did for some weeks or months.

That said, there are reasons to believe that the 2020 polls will do an excellent job of predicting voter behavior and there are reasons to believe they may miss the mark.  

There are specific reasons why it is reasonable to expect that the 2020 polls will be accurate. So, what is different in 2020? 

  • There have been fewer undecided voters at all stages of the process. Most voters have had their minds made up well in advance of election Tuesday. This makes things simpler from a pollster’s perspective. A polarized and engaged electorate is one whose behavior is predictable. Figuring out how to partition undecided voters moves polling more in a direction of “art” than “science.”
  • Perhaps because of this, polls have been remarkably stable for months. In 2016, there was movement in the polls throughout and particularly over the last two weeks of the campaign. This time, the polls look about like they did weeks and even months ago.
  • Turnout will be very high. The art in polling is in predicting who will turn out and a high turnout election is much easier to forecast than a low turnout election.
  • There has been considerable early voting. There is always less error in asking about what someone has recently done than what they intend to do in the future. Later polls could ask many respondents how they voted instead of how they intended to vote.
  • There have been more polls this time. As our sample size of polls increases so does the accuracy. Of course, there are also more bad polls out there this cycle as well.
  • There have been more and better polls in the swing states this time. The true problem pollsters had in 2016 was with state-level polls. There was less attention paid to them, and because the national pollsters and media didn’t invest much in them, the state-level polling is where it all went wrong. This time, there has been more investment in swing-state polling.
  • The media invested more in polls this time. A hidden secret in polling is that election polls rarely make money for the pollster. This keeps many excellent research organizations from getting involved in them or dedicating resources to them. The ones that do tend to do so solely for reputational reasons. An increased investment this time has helped to get more researchers involved in election polling.
  • Response rates are up slightly. 2020 is the first year in which we have seen the long-term decline in survey response rates stabilize and even tick up a little. This is likely a minor factor in the success of the 2020 polls, but it is in the right direction.
  • The race isn’t as close as it was in 2016. This one might only be appreciated by statisticians. Since variability is maximized in a 50/50 distribution, the further a race is from even, the more accurate a poll will be (a short formula after this list shows why). This is another small factor in the direction of the polls being accurate in 2020.
  • There has not been late breaking news that could influence voter behavior. In 2016, the FBI director’s decision to announce a probe into Clinton’s emails came late in the campaign. There haven’t been any similar bombshells this time.
  • Pollsters started setting quotas and weighting on education. In the past, pollsters would balance samples on characteristics known to correlate highly with voting behavior – characteristics like age, gender, political party affiliation, race/ethnicity, and past voting behavior. In 2016, pollsters learned the hard way that educational attainment had become an additional characteristic to consider when crafting samples because voter preferences vary by education level. The good polls fixed that this go round.
  • In a similar vein, there has been tighter scrutiny of polling methodology. While the media can still be cavalier about digging into methodology, this time they were more likely to insist that pollsters outline their methods. This is the first time I can remember seeing news stories where pollsters were asked questions about methodology.
  • The notion that there are Trump supporters who intentionally lie to pollsters has largely been disproven by studies from very credible sources, such as Yale and Pew. Much more relevant is the pollster’s ability to predict turnout from both sides.
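
For the statistically inclined, the 50/50 point flagged in the list above follows from the textbook sampling-error formula for a proportion (assuming, for illustration, a simple random sample):

```latex
\mathrm{SE}(p) \;=\; \sqrt{\frac{p\,(1-p)}{n}},
\qquad p\,(1-p)\ \text{is maximized at}\ p = \tfrac{1}{2}
```

So, all else equal, the further the leading candidate’s share is from 50%, the smaller the sampling error.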

There are a few things going on that give the polls some potential to lay an egg.

  • The election will be decided by a small number of swing states. Swing state polls are not as accurate and are often funded by local media and universities that don’t have the funding or the expertise to do them correctly. The polls are close and less stable in these states. There is some indication that swing state polls have been tightening, and Biden’s lead in many of them isn’t much different than Clinton’s lead was in 2016.
  • Biden may be making the same mistake Clinton made. This is a political and not a research-related reason, but in 2016 Clinton failed to aggressively campaign in the key states late in the campaign while Trump went all in. History could be repeating itself. Field work for final polls is largely over now, so the polls will not reflect things that happen the last few days.
  • If there is a wild-card that will affect polling accuracy in 2020, it is likely to center around how people are voting. Pollsters have been predicting election day voting for decades. In this cycle votes have been coming in for weeks and the methods and rules around early voting vary widely by state. Pollsters just don’t have past experience with early voting.
  • There is really no way for pollsters to account for potential disqualifications for mail-in votes (improper signatures, late receipts, legal challenges, etc.) that may skew to one candidate or another.
  • Similarly, any systematic voter suppression would likely cause the polls to underpredict Trump. These voters are available to poll, but may not be able to cast a valid vote.
  • There has been little mention of third-party candidates in polling results. The Libertarian candidate is on the ballot in all 50 states. The Green Party candidate is on the ballot in 31 states. Other parties have candidates on the ballot in some states but not others. These candidates aren’t expected to garner a lot of votes, but in a close election even a few percentage points could matter to the results. I have seen national polls from reputable organizations where they weren’t included.
  • While there is little credible data supporting that there are “shy” Trump voters that are intentionally lying to pollsters, there still might be a social desirability bias that would undercount Trump’s support. That social desirability bias could be larger than it was in 2016, and it is still likely in the direction of under predicting Trump’s vote count.
  • Polls (and research surveys) tend to underrepresent rural areas. Folks in rural areas are less likely to be in online panels and to cooperate on surveys. Few pollsters take this into account. (I have never seen a corporate research client correcting for this, and it has been a pet peeve of mine for years.) This is a sample coverage issue that will likely undercount the Trump vote.
  • Sampling has continued to get harder. Cell phone penetration has continued to grow, online panel quality has fallen, and our best option (ABS sampling) is still far from random and so expensive it is beyond the reach of most polls.
  • “Herding” is a rarely discussed, but very real polling problem. Herding refers to pollsters who conduct a poll that doesn’t conform to what other polls are finding. These polls tend to get scrutinized and reweighted until they fit to expectations, or even worse, buried and never released. Think about it – if you are a respected polling organization that conducted a recent poll that showed Trump would win the popular vote, you’d review this poll intensely before releasing it and you might choose not to release it at all because it might put your firm’s reputation at risk to release a poll that looks different than the others. The only polls I have seen that appear to be out of range are ones from smaller organizations who are likely willing to run the risk of being viewed as predicting against the tide or who clearly have a political bias to them.

Once the dust settles, we will compose a post that analyzes how the 2020 polls did. For now, we feel there are more credible reasons to believe the polls will be seen as predictive than to feel that we are on the edge of a polling mistake. From a researcher’s standpoint, the biggest worry is that the polls will indeed be accurate, but won’t match the vote totals because of technicalities in vote counting and legal challenges. That would reflect unfairly on the polling and research industries.

Researchers should be mindful of “regression toward the mean”

There is a concept in statistics known as regression toward the mean that is important for researchers to consider as we look at how the COVID-19 pandemic might change future consumer behavior. This concept is as challenging to understand as it is interesting.

Regression toward the mean implies that an extreme example in a data set tends to be followed by an example that is less extreme and closer to the “average” value of the population. A common example is height: if two parents who are above average in height have a child, that child is demonstrably more likely to be closer to average height than to the “extreme” height of their parents.

This is an important concept to keep in mind in the design of experiments and when analyzing market research data. I did a study once where we interviewed the “best” customers of a quick service restaurant, defined as those that had visited the restaurant 10 or more times in the past month. We gave each of them a coupon and interviewed them a month later to determine the effect of the coupon. We found that they actually went to the restaurant less often the month after receiving the coupon than the month before.

It would have been easy to conclude that the coupon caused customers to visit less frequently and that there was something wrong with it (which is what we initially thought). What really happened was a regression toward the mean. Surveying customers who had visited a large number of times in one month made it likely that these same customers would visit a more “average” amount in a following month whether they had a coupon or not. This was a poor research design because we couldn’t really assess the impact of the coupon which was our goal.

Personally, I’ve always had a hard time understanding and explaining regression toward the mean because the concept seems to be counter to another concept known as “independent trials”. You have a 50% chance of flipping a fair coin and having it come up heads regardless of what has happened in previous flips. You can’t guess whether the roulette wheel will come up red or black based on what has happened in previous spins. So, why would we expect a restaurant’s best customers to visit less in the future?

This happens when we begin with a skewed population. The most frequent customers are not “average” and have room to regress toward the mean in the future. Had we surveyed customers across the full range of patronage, the group would not have been selected on an extreme month, and we could have done a better job of isolating the effect of the coupon.
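
A minimal simulation sketch (with made-up visit rates, not the actual restaurant data) shows how selecting on an extreme month produces an apparent decline even when nothing changes:

```python
# Illustrative simulation of regression toward the mean. Each customer has a
# stable underlying visit rate; observed monthly visits vary randomly around it.
import numpy as np

rng = np.random.default_rng(42)
rates = rng.uniform(1, 8, size=100_000)   # hypothetical true average visits per month
month1 = rng.poisson(rates)               # observed visits in month 1
month2 = rng.poisson(rates)               # observed visits in month 2 (no coupon effect at all)

best = month1 >= 10                       # "best" customers, selected on month 1 alone
print(f"month 1 average among selected customers: {month1[best].mean():.1f}")
print(f"month 2 average among the same customers: {month2[best].mean():.1f}")
# Month 2 comes out noticeably lower even though behavior never changed:
# customers who hit 10+ visits in month 1 were partly there by luck, and that
# luck does not repeat in month 2.
```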

Here is another example of regression toward the mean. Suppose the Buffalo Bills quarterback, Josh Allen, has a monster game when they play the New England Patriots. Allen, who has been averaging about 220 yards passing per game in his career goes off and burns the Patriots for 450 yards. After we are done celebrating and breaking tables in western NY, what would be our best prediction for the yards Allen will throw for the second time the Bills play the Patriots?

Well, you could say the best prediction is 450 yards as that is what he did the first time. But, regression toward the mean would imply that he’s more likely to throw close to his historic average of 220 yards the second time around. So, when he throws for 220 yards the second game it is important to not give undue credit to Bill Belichick for figuring out how to stop Allen.

Here is another sports example. I have played (poorly) in a fantasy baseball league for almost 30 years. In 2004, Derek Jeter entered the season as a career .317 hitter. After the first 100 games or so he was hitting under .200. The person in my league that owned him was frustrated so I traded for him. Jeter went on to hit well over .300 the rest of the season. This was predictable because there wasn’t any underlying reason (like injury) for his slump. His underlying average was much better than his current performance and because of the concept of regression toward the mean it was likely he would have a great second half of the season, which he did.

There are interesting HR examples of regression toward the mean. Say you have an employee that does a stellar job on an assignment – over and above what she normally does. You praise her and give her a bonus. Then, you notice that on the next assignment she doesn’t perform on the same level. It would be easy to conclude that the praise and bonus caused the poor performance when in reality her performance was just regressing back toward the mean. I know sales managers who have had this exact problem – they reward their highest performers with elaborate bonuses and trips and then notice that the following year they don’t perform as well. They then conclude that their incentives aren’t working.

The concept is hard at work in other settings. Mutual funds that outperform the market tend to fall back in line the next year. You tend to feel better the day after you go to the doctor. Companies profiled in “Good to Great” tend to have hard times later on.

Regression toward the mean is important to consider when designing sampling plans. If you are sampling an extreme portion of a population it can be a relevant consideration. Sample size is also important. When you have just a few cases of something, mathematically an extreme response can skew your mean.

The issue to be wary of is that when we fail to consider regression toward the mean, we tend to overstate the importance of correlation between two things. We think our mutual fund manager is a genius when he just got lucky, that our coupon isn’t working, or that Josh Allen is becoming the next Drew Brees. All of these could be true, but be careful in how you interpret data that result from extreme or small sample sizes.

How does this relate to COVID? Well, at the moment, I’d say we are still in an “inflated expectations” portion of a hype curve when we think of what permanent changes may take place resulting from the pandemic. There are a lot of examples. We hear that commercial real estate is dead because businesses will keep employees working from home. Higher education will move entirely online. In-person qualitative market research will never happen again. Business travel is gone forever. We will never again work in an office setting. Shaking hands is a thing of the past.

I’m not saying there won’t be a new normal that results from COVID, but if we believe in regression toward the mean and the hype curve, we’d predict that the future will look more like the past than how it is currently being portrayed. Behavior will naturally regress back toward the past rather than toward a more extreme version of the present. The “mean” being regressed to has likely changed, but not as much as the current, extreme situation implies.

“Margin of error” sort of explained (+/-5%)

It is now September of an election year. Get ready for a two-month deluge of polls and commentary on them. One thing you can count on is reporters and pundits misinterpreting the meaning behind “margin of error.” This post is meant to simplify the concept.

Margin of error refers to sampling error and is present on every poll or market research survey. It can be mathematically calculated. All polls seek to figure out what everybody thinks by asking a small sample of people. There is always some degree of error in this.

The formula for margin of error is fairly simple and depends mostly on two things: how many people are surveyed and their variability of response. The more people you interview, the lower (better) the margin of error. The more the people you interview give the same response (lower variability), the better the margin of error. If a poll interviews a lot of people and they all seem to be saying the same thing, the margin of error of the poll is low. If the poll interviews a small number of people and they disagree a lot, the margin of error is high.

Most reporters understand that a poll with a lot of respondents is better than one with fewer respondents. But most don’t understand the variability component.

There is another assumption used in the calculation for sampling error as well: the confidence level desired. Almost every pollster will use a 95% confidence level, so for this explanation we don’t have to worry too much about that.
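
For concreteness, here is a minimal sketch of the textbook formula for a simple random sample at 95% confidence. Real polls use weights and design effects that change these numbers, so treat them as illustrative.

```python
# Illustrative 95% margin of error for a proportion (simple random sample).
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# More interviews -> smaller margin of error
for n in (250, 500, 1_000, 2_000):
    print(f"n = {n:>5,}: +/- {100 * margin_of_error(0.50, n):.1f} points")
# n =   250: +/- 6.2 points
# n =   500: +/- 4.4 points
# n = 1,000: +/- 3.1 points
# n = 2,000: +/- 2.2 points

# More agreement (p further from 50%) -> slightly smaller margin of error
print(f"+/- {100 * margin_of_error(0.70, 1_000):.1f} points")   # ~2.8 vs. 3.1 at 50/50
```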

What does it mean for a difference to be outside the margin of error on a poll? It simply means that the two percentages being compared can be deemed different from one another with 95% confidence. Put another way, if the poll was repeated a zillion times, we’d expect that at least 19 out of 20 times the two numbers would be different.

If Biden is leading Trump in a poll by 8 points and the margin of error is 5 points, we can be confident he is really ahead because this lead is outside the margin of error. Not perfectly confident, but more than 95% confident.

Here is where reporters and pundits mess it up.  Say they are reporting on a poll with a 5-point margin of error and Biden is leading Trump by 4 points. Because this lead is within the margin of error, they will often call it a “statistical dead heat” or say something that implies that the race is tied.

Neither is true. The only way for a poll to have a statistical dead heat is for the exact same number of people to choose each candidate. In this example the race isn’t tied at all, we just have a less than 95% confidence that Biden is leading. In this example, we might be 90% sure that Biden is leading Trump. So, why would anyone call that a statistical dead heat? It would be way better to be reporting the level of confidence that we have that Biden is winning, or the p-value of the result. I have never seen a reporter do that, but some of the election prediction websites do.
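
As a rough sketch of what reporting a confidence level could look like: the snippet below uses a normal approximation and the common simplification that the reported margin of error applies directly to the lead. Treating the margin per candidate (and widening it for a difference of two shares) would give a lower figure, so the output is illustrative rather than definitive.

```python
# Rough sketch: confidence that the leader is really ahead, given the lead and
# the poll's reported margin of error. Normal approximation; assumes the
# reported +/- applies directly to the lead, which overstates confidence a bit.
import math

def confidence_leader_ahead(lead_pts: float, moe_pts: float, z95: float = 1.96) -> float:
    se = moe_pts / z95                                            # implied standard error
    return 0.5 * (1 + math.erf((lead_pts / se) / math.sqrt(2)))   # normal CDF

print(f"{confidence_leader_ahead(4, 5):.1%}")   # ~94%: likely ahead, hardly a "dead heat"
print(f"{confidence_leader_ahead(8, 5):.1%}")   # ~99.9%: lead outside the margin of error
```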

Pollsters themselves will misinterpret the concept. They will deem their poll “accurate” as long as the election result is within the margin of error. In close elections this isn’t helpful, as what really matters is making a correct prediction of what will happen.

Most of the 2016 final polls were accurate if you define being accurate as coming within the margin of error. But, since almost all of them predicted the wrong winner, I don’t think we will see future textbooks holding 2016 out there as a zenith of polling accuracy.

Another mistake reporters (and researchers) make is not recognizing that the margin of error refers only to sampling error, which is just one of many errors that can occur in a poll. The poor performance of the 2016 presidential polls really had nothing to do with sampling error at all.

I’ve always questioned why there is so much emphasis on sampling error for a couple of reasons. First, the calculation of sampling error assumes you are working with a random sample which in today’s polling world is almost never the case. Second, there are many other types of errors in survey research that are likely more relevant to a poll’s accuracy than sampling error. The focus on sampling error is driven largely because it is the easiest error to mathematically calculate. Margin of error is useful to consider, but needs to be put in context of all the other types of errors that can happen in a poll.

Online education will need to change before it rules higher education

We recently conducted a poll of college students around the world about their experiences with online education this spring that resulted from the pandemic. The short answer is students didn’t fare well and are highly critical of the ability of online education to engage them and to deliver instruction. This isn’t a subtle, nuanced finding. A large majority of college students worldwide thought the online education they received this spring was ineffective and unengaging.

I held out hope that the pandemic would be the event that finally kickstarted online education. Our poll results have me doubting it will, which is a shame as online education holds enormous potential. It is a new technology that is, for some reason, being held back. If you think about it, we have had all the technology needed to take education online for at least 10 years, yet for the most part the traditional university system has remained as it was a generation ago.

I’ve always been interested in new “media” technologies because I’ve noticed a pattern in their emergence. Almost always, they begin as a nifty new delivery system for content that was developed with the “old” media. The earliest radio shows largely consisted of people reading the newspapers aloud and playing music. Early television mostly adapted content from radio – serialized dramas, variety shows, baseball games, etc. The Internet 1.0 largely just electronically expectorated content that existed in other forms.

After a bit of a gestation period, “new” media eventually thrive as they take advantage of their technological uniqueness and content evolves along with the new distribution system. The result is something really special and not just a new way to deliver old things.

There are many examples. Radio moved to become central to family entertainment and ritual in a way the newspaper could not. Television developed the Saturday morning lineup, the situation comedy, talk shows, etc., none of which could have worked as well on radio. And, the Internet evolved and became interactive, with user-created content, product reviews, with a melding of content and commerce that isn’t possible in other media. In all cases, the “new” media gestated awhile by mimicking the old but once they found their way their value grew exponentially. The old media didn’t go away, but got repositioned to a narrower niche.

This hasn’t happened in higher education. Streaming your lecture on Zoom might be necessary during a pandemic but it is not what online education should be about. Students consistently tell us it doesn’t work for them. Parents and students don’t feel it provides the value they expect from college, which is why we are starting to see lawsuits where students are demanding tuition refunds from colleges that moved education online this spring.

We composed a post a little while ago that posited that the reason digital textbooks really haven’t made much of a difference in colleges is because textbook publishers have prevented this synergy from happening. Most digital textbooks today are simply a regurgitation of a printed textbook that you can read on a computer. Our surveys show that the number one way a digital textbook is read remains by viewing a PDF. That is hardly taking advantage of what today’s technology has to offer.

The potential for the digital textbook is much greater. In fact, it wouldn’t be a textbook at all. Instead, there could be a digital nexus of all that is going on in a course, conducted, coached, and curated by the instructor. Imagine a “book” that could take you on an interactive tour. It could link you to lectures by world-renowned people. It could show practitioners applying the knowledge they gained in the course. It could contain formative assessments that determine how you are progressing and then adapt to focus you where you need individualized help. A tutor would be a link away. Other students could comment and help you.

Your instructor could become a coach rather than a sage. This wouldn’t be a textbook at all, but a melding of course materials and instruction and collaborative tools.

This technology exists today, yet publishers and colleges have too much self-interest in the status quo to innovate. Education is suffering because of it.

This spring most college instructors had one or two weeks to figure out how to move their instruction online with little help from textbook publishers or technology companies. They had no choice but to adapt their existing course to a new delivery system. So, they pointed a camera at themselves and called it online education.

It is no wonder that online education largely failed our students. Every poll I have seen, including a few Crux has conducted, has shown that students found online education to be vastly inferior to traditional instruction this spring.

But, did you know this isn’t new? College students have long been critical of online education. I’ve asked college students questions about online education for almost 20 years. While many appreciate the convenience of an online course and that it can cost less, a very large majority of those taking online courses say they aren’t an effective way to learn. Almost all say that they would have learned better in a traditional course. It is a rare student who chooses an online course because it is an effective way to learn; most choose one because it fits better into their life situation.

Why? Because online course providers really haven’t taken advantage of a “new” medium. They are still adapting traditional education and placing it online rather than embracing the uniqueness that online education can provide. They are firmly ensconced in Internet 1.0 a decade or two after all other industries have moved on. Compared to a decade ago we shop completely differently. We watch entertainment completely differently. We communicate with others completely differently. Yet, our children attend college the same way their parents and grandparents did.

Course management systems do exist, but to date they haven’t fundamentally changed the nature of a college course. We ask about course management systems on surveys as well, and college students find them to be moderately helpful, but hardly game changing.

One of Crux’s largest clients is a supplemental education company that provides resources to college students who don’t feel they are getting the support they need from their college or their professors. This company has been one of the best performing companies in the US since COVID-19 hit and so many courses moved online. This client is well-managed and has a great vision and brilliant employees. But, if educators had fully figured out how to effectively educate online, I don’t think they could be as successful as they have been because students wouldn’t have such a pressing need for outside help. Because of higher education’s unwillingness or inability to adapt, I expect this client to thrive for a long time.

It is sad to think that our colleges and universities, which should be at the forefront of technology and innovation, are lagging in adapting course materials and instruction to the Internet – especially when you consider that these are the same institutions that largely invented the Internet.

Living near Rochester NY, it is easy to see a parallel to the Eastman Kodak company. Kodak had one of the strongest brands in the world, was tightly identified with imaging and photography, and had invented almost all of the core technologies needed for digital photography. All of this at a time when the number of images consumers capture was about to explode by a factor of perhaps 10,000, maybe 100,000. But, because of an inability to break out of an old way of thinking and an inertia that kept it hanging on too long to an “old” medium, one of America’s great companies was essentially reduced to a business school case in how to grab defeat from the jaws of opportunity.

Is this a cautionary tale for colleges and universities? Sure. I suspect that elite college brands will continue to do well as they cater to a wealthy demographic that has done quite well during the pandemic. But, for the rest of us, who send students to non-elite institutions, I expect to see colleges face enormous financial pressures and to see many college brands go the way of Kodak over the next decade. Their ticket to a better path is to more effectively use technology.

Online education has the potential to cure some of what ails the US higher education system. It can adapt quickly to market demand for workers. It can provide much wider access to the best and brightest teachers. It can aggregate a mass of students who might be interested in a highly specialized field, and thus become more targeted. And, it may be what finally fixes the high cost of higher education.

Will online education thrive in the US? Not until it changes to take advantage of what an interconnected world has to offer. The time is right for colleges to truly tap into the power of what online education can be. This is really the only way colleges will be able to keep charging the tuition levels they have become accustomed to; until online education becomes synonymous with quality education, many colleges will struggle.

This is taking far too long but I am hopeful that kickstarting this process will be one silver lining to come out of the upheaval to education that has been caused by the pandemic.

I have more LinkedIn contacts named “Steve” than contacts who are Black

There have been increasing calls for inclusiveness and fairness across America and the world. The issues presented by the MeToo and Black Lives Matter movements affect all sectors of society and the business world. Market research is no exception. Recent events have spurred me to reflect on my experiences and to think about whether the market research field is diverse enough and ready to make meaningful changes. Does market research have structural, systemic barriers preventing women and minorities from succeeding?

My recollections are anecdotal – just one person’s experiences when working in market research for more than 30 years. What follows isn’t based on an industry study or necessarily representative of all researchers’ experiences.

Women in Market Research

When it comes to gender equity in the market research field, my gut reaction is to think that research is a good field for women and one that I would recommend. I reviewed Crux Research’s client base and client contacts. In 15 years, we have worked with about 150 individual research clients across 70 organizations. 110 (73%) of those 150 clients are female. This dovetails with my recollection of my time at a major research supplier. Most of my direct clients there were women.

Crux’s client base is largely mid-career professionals – I’d say our typical client is a research manager or director in his/her 30’s or 40’s. I’d conclude that in my experience, women are well represented at this level.

But, when I look through our list of 70 clients and catalog who the “top” research manager is at these organizations, I find that 42 (60%) of the 70 research VPs and directors are male. And, when I catalog who these research VPs report into, typically a CMO, I find that 60 (86%) of the 70 individuals are male. To recap, among our client base, 73% of the research managers are female, 40% of the research VPs are female, and 14% of the CMOs are female.

This meshes with my experience working at a large supplier. While I was there, women were well-represented in our research director and VP roles, but there were almost no women in the C-suite or among those who report to them. There seems to be a firm glass ceiling in place at market research suppliers and clients alike.

Minorities in Market Research

My experience paints a bleaker picture when I think of ethnic minority representation in market research. Of our 150 individual research clients, just 25 (17%) have been non-white and just 3 (2%) have been black. Moving up the corporate ladder, in only 5 (13%) of our 70 clients is the top researcher in the organization non-white and in only 4 (6%) of the 70 companies is the CMO non-white, and none of the CMOs are black. Undoubtedly, we have a long way to go.

A lack of staff diversity in research suppliers and market research corporate staffs is a problem worth resolving for a very important reason: market researchers and pollsters are the folks providing the information to the rest of the world on diversity issues. Our field can’t possibly provide an appropriate perspective to decision makers if we aren’t more diverse. Our lack of diversity affects the conversation because we provide the data the conversation is based upon.  

Non-profits seem to be a notable exception when it comes to ethnic diversity. I have had large non-profit clients that have wonderfully diverse employee bases, to the point where it is not uncommon to attend meetings and Zoom calls where I am the only white male in the session. These non-profits make an effort to recruit and train diverse staffs and their work benefits greatly from the diversity of perspectives this brings. There is a palpable openness of ideas in these organizations. Research clients and suppliers would do well to learn from their example.  

I can’t think of explicit structural barriers that limit the progression of minorities through the market research ranks, but that just illustrates the problem: the barriers aren’t explicit, they are more subtle and implicit. Which is what makes them so intractable.

We have to make a commitment to develop more diverse employee bases. I worked directly for the CEO of a major supplier for a number of years. One thing I respected about him was he was confident enough in himself that he was not afraid to hire people who were smarter than him or didn’t think like him or came from an entirely different background. It made him unique. In my experience, most hiring managers unintentionally hire “mini-me’s” – younger variants of themselves whom they naturally like in a job interview. Well, if the hiring managers are mostly white males and they are predisposed to hire a lot of “mini-me’s” over time this perpetuates a privilege and is an example of an unintentional, but nonetheless structural bias that limits the progress of women and minorities.

If you don’t think managers tend to hire in their own image, consider a recent Economist article that states “In 2018 there were more men called Steve than there were women among the chief executives of FTSE 100 companies.” I wouldn’t be surprised if there are more market researchers in the US named Steve than there are black market researchers.

To further illustrate that we naturally seek people like ourselves, I reviewed my own LinkedIn contact list. This list is made up of former colleagues, clients, people I have met along the way, etc. It is a good representation of the professional circle I exist within. It turns out that my LinkedIn contact list is 60% female and has 25% non-whites. But, just 3% of my LinkedIn contacts are black. And, yes, I have more LinkedIn contacts named Steve than I have contacts who are black.

This is a problem because as researchers we need to do our best to cast aside our biases and provide an objective analysis of the data we collect. We cannot do that well if we do not have a diverse array of people working on our projects.

Many managers will tell you that they would like to hire a minority for a position but they just don’t get quality candidates applying. This is not taking ownership of the issue. What are you doing to generate candidates in the first place?

It is all too easy to point the finger backwards at colleges and universities and say that we aren’t getting enough qualified candidates of color. And that might be true. MBA programs continue to enroll many more men than women and many more whites than non-whites. They should be taken to task for this. As employers we also need to be making more demands on them to recruit women and minorities to their programs in the first place.

I like that many research firms have come out with supportive statements and financial contributions to relevant causes recently. This is just a first step and needs to be the catalyst to more long-lasting cultural changes in organizations.

We need to share best practices, and our industry associations need to step up and lead this process. Let’s establish relationships with HBCUs and other institutions to train the next generation of black researchers.

The need to be diverse is also important in the studies we conduct. We need to call more attention to similarities and differences in our analyses – and sample enough minorities in the first place so that we can do this. Most researchers do this already when we have a reason to believe before we launch the study that there might be important differences by race/ethnicity. However, we need to do this more as a matter of course, and become more attuned to highlighting the nuances in our data sets that are driven by race.

Our sample suppliers need to do a better job of recruiting minorities to our studies, and of ensuring that the minorities we sample are representative of a wider population. As their clients, we suppliers need to make more demands about the quality of the minority samples we buy.

We need an advocacy group for minorities in market research. There is an excellent group, Women in Research (https://www.womeninresearch.org/), advocating for women. We need an analogous organization for minorities.

Since I am in research, I naturally think that measurement is key to the solution. I’ve long thought that organizations only change what they can measure. Does your organization’s management team have a formal reporting process that informs them of the diversity of their staff, of their new hires, of the candidates they bring in for interviews? If they do not, your organization is not poised to fix the problem. If your head of HR cannot readily tell you what proportion of your staff is made up of minorities, your firm is likely not paying enough attention.

Researchers will need to realize that their organizations will become better and more profitable when they recruit and develop a more diverse employee base. Even though it is the right thing to do, we need to view resolving these issues not solely as altruism. It is in our own self-interest to work on this problem. It is truly the case that if we aren’t part of the solution, we are likely part of the problem. And again, because we are the ones who inform everyone else about public opinion on these issues, we need to lead the way.

My belief is that this issue will be resolved by Millennials once they are more senior in organizations. Millennials are a generation that is intolerant of unfairness of this sort and notices the subtle biases that add up. They are the most diverse generation in US history. The oldest Millennials are currently in their mid-30’s. In 10-20 years’ time they will be in powerful positions in business, non-profits, education, and government.

Optimistically, I believe Millennials will make a big difference. Pessimistically, I wonder if real change will happen before they are the ones managing suppliers and clients, as thus far the older generations have not shown that they are up to the task.

