Archive for November, 2022

Why the Media Cried (Red) Wolf

Journalists are puzzled as to why a predicted “red wave” (a Republican resurgence) did not materialize in the 2022 midterm elections. Yet the signals that the red wave would fail to form were clear. Journalists failed to foresee the success of Democratic candidates because they could not discern the good polls from the bad.

Established, media- and college-branded polls performed historically well in this cycle. They provided all the data necessary to foresee that a red wave would not emerge.

So why was there such a widespread view that the Republicans would have a big night?

The answer is that journalists have become indiscriminate in their polling coverage. Conservative-leaning pollsters released a flood of poor-quality polls in the last two weeks before the election. These polls pointed to a brewing red tsunami, and the media covered them with little, if any, due diligence.

I have had conversations with long-time pollsters who, through rolled eyes, tell me they think some of these pollsters are simply making up their numbers. In this cycle, pollsters obtained cross-tabulations from a Trafalgar poll indicating that almost two-thirds of Gen Z voters would vote for a MAGA candidate in Georgia (when one-third would have represented a historic swing). Yet respected journalists widely reported the results of this very same poll.

Trafalgar’s 2022 polls were demonstrably inaccurate. Trafalgar released 19 statewide polls in the week preceding the election and chose the correct winner in just 11 of them. Just seven were within their margin of error, and Trafalgar’s mean polling error is likely to end up being more than double that of “name-brand” pollsters.
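For readers curious what a “mean polling error” measures, here is a minimal sketch of the calculation. The races and margins are invented purely for illustration; they are not Trafalgar’s actual polls or the real results.

```python
# Sketch of scoring a pollster: compare each final poll margin to the actual
# result margin. All numbers below are hypothetical placeholders.
polls = [
    # (race, final poll margin, actual margin); positive = Republican lead, negative = Democratic lead
    ("State A Senate",   +4.0, -1.0),
    ("State B Governor", +7.0, +2.5),
    ("State C Senate",   -2.0, -3.5),
]

errors = [abs(predicted - actual) for _, predicted, actual in polls]
mean_error = sum(errors) / len(errors)
correct_calls = sum(1 for _, predicted, actual in polls if (predicted > 0) == (actual > 0))

print(f"Mean absolute polling error: {mean_error:.1f} points")
print(f"Correct winners called: {correct_calls} of {len(polls)}")
```

Comparing that average across pollsters is one simple way to see whose final polls landed closer to the actual results.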

It is understandable that right-leaning media are interested in these polls, as they provide a hopeful, confirmatory message their audience wants to hear. Since reputable polls have erred in a liberal direction in the past few cycles, there is a sense that we cannot trust them anymore.

Journalists ignored that polling errors have always fluctuated between the liberal and conservative directions. Because polls missed in a liberal direction in the past two presidential elections, journalists assumed a liberal bias was here to stay. In 2022, this proved to be incorrect.

It isn’t just the media that provide oxygen to these polls. Poll aggregators (particularly RealClearPolitics) had a horrible cycle because they were indiscriminate about which polls they included in their averages. Predictive modelers (such as FiveThirtyEight) had a solid night that could have been tremendous had they gotten past the mentality that every poll has something of value to contribute to their models.

Reporting on polls with suspect methods is simply bad journalism. Trusted journalists would never release a story without considerable fact-checking of their sources. Yet, they continue to cover polls that are not transparent, have poor track records, have no defensible methodology, and are shunned by the polling establishment.  

This is journalistic malpractice, and the result can be dire. When the election results do not match expectations set by the polls, an environment is fostered where election denialism thrives. January 6th happened partly because the partisan polls the protesters focused on had Donald Trump winning the election, and good journalists fueled this mentality by reporting on these polls. They provided these polls with a legitimacy they did not deserve.

Statistical laws imply that we cannot know in advance which polls will be correct in any given election. But we know which ones meet industry standards for methodology and disclosure and that, in the long term, have been proven to get it right far more often than they get it wrong.

It is no secret that pollsters face technological headwinds, but their occasional misses are not for lack of trying. After each election, pollsters convene, share findings, and discuss how to improve polls for the next election. In this sense, polling is one of the most honest professions.

Do you know who is missing from these conversations and not contributing to this honesty? The conservative-leaning pollsters.

My advice to journalists is this: stick to credible polls and stop giving every poll a voice. Rely more on the pollsters themselves for editorial decisions on what goes into the polls and how their results are interpreted. Stop creating the news by being too involved in the content of polls and return to doing what you do best: reporting on poll findings and providing context.

Above all, fact-check the polls like you would any other source.

Polling’s Winners and Losers from the Midterms

The pollsters did well last night.

Right now (the morning after the election), it is hard to know if 2022 will go down as a watershed moment when pollsters once again found their footing or if it will merely be a stay of execution. The 2018 midterms were also quite good for pollsters, yet the 2020 election was not.

To be clear, there are still many votes to count, so it is unfair to judge the polls too quickly. In POLL-ARIZED, I criticize media members who do. Nonetheless, below is a list of what I see as some winners and losers and some that seem like they are in the middle.

The Winners

  • Pre-election polling in general. For the most part, the polls did a good job of pointing out the close races, and exit polls suggest that they did an excellent job of highlighting the issues that concern voters most. I suspect the polling error rate will be far below the historical average of five+ points for midterm elections.
  • The “good” pollsters. The better-known polling brands, especially those with media partnerships, and some college polling centers had good results.
  • John King’s brain. Say what you want about CNN, but watching someone who knows the name of every county in America, the candidates in every election district, and the results of past elections perform without a net and stick the landing is impressive.
  • The CNN magic wall. I know other networks have them, but I can’t be the only data geek who marvels at the database systems and APIs behind CNN’s screen. It must have cost millions and involved dozens of people.
  • The Iowa Poll’s response rate. Their methodology statement says they contacted 1,118 Iowa residents for a final sample size of 801, with a response rate of 72%. This reminds me of the good old days. I would like to see pollsters spend more time benchmarking what Selzer & Co. are doing right with this poll.

The Losers

  • The partisan pollsters, particularly Trafalgar. These pollsters were way off this cycle, and they have been way off in most cycles. I hope that non-partisan media outlets will stop covering them. They provide a story that outlets and viewers looking for confirmation of their views enjoy, but objective media should leave them behind for good.
  • The media outlets that failed to notice how many less-reputable conservative polls were released over the final two weeks before the election. Most were hoodwinked and ran a narrative that a red storm was brewing.
  • Response rates. I delved into the methodology of many final polls this cycle; most had net response rates of less than 2%. That is about half of what response rates were just two years ago. That the pollsters did so well with such low response is a testament to the brilliance of methodologists, but the data they have to work with is getting worse each cycle. They will not be able to keep pulling rabbits out of their hats.
  • The prediction markets. I have long hoped that the betting markets could emerge as a plausible alternative to polls for predicting elections, so that polls could focus on issues rather than horse races. These markets did not have a good night.
  • FiveThirtyEight’s pollster ratings. It is too early to make a definitive statement, but some of their highly rated pollsters had poor results, while many with middling grades did well. These ratings are helpful when they are accurate and have a defensible method behind them. When they are inaccurate, they ruin reputations and businesses, so FiveThirtyEight must embrace that producing objective and accurate ratings is a serious responsibility.

The “So-So”

  • The Iowa Poll. Even with the high response rate, this poll seemed to overstate the Republican vote this time, although it did get all the winners correct. This poll has a strong history of success, so it might be fair to chalk the slight miss up to normal sampling fluctuation; it isn’t statistically possible to get it right every single time. I must admit I have a bias toward rooting for this poll.
  • The modelers, such as FiveThirtyEight and the Economist. On the one hand, the concept of a probabilistic forecast is spot on. On the other, it is not particularly informative in coin-toss races (see the sketch after this list). In this cycle, their forecasts for Senate and House seats weren’t much different from what could have been produced by tossing a coin in the contested races. Their median predictions for House and Senate seats overstated where the Republicans will end up, possibly because they also fell prey to the release of so many conservative-leaning polls in the campaign’s final stages.
  • Polling error direction. In the past few cycles, the polling error has been in the direction of overstating Democratic support. In 2022, this error seemed to move in the other direction. Historically, these errors have been uncorrelated from election to election, so I must admit that I probably jumped the gun by suggesting in POLL-ARIZED that the pro-Democratic error direction was structural and here to stay.
  • The media’s coverage of the polls on election day. In 2016 and 2020, the press reveled in bashing the pollsters. This time, they hardly talked about them at all. That seemed a bit unfair – if pollsters are going to be criticized when they do poorly, they should be celebrated when they do well.
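To illustrate the point about probabilistic forecasts and coin-toss races, here is a minimal sketch of how such a forecast turns per-race win probabilities into a seat projection. The probabilities are invented for illustration, and the races are treated as independent, a simplification that models like FiveThirtyEight’s do not make.

```python
# Sketch of a probabilistic seat forecast over ten hypothetical toss-up races.
import random

random.seed(0)

# Invented Republican win probabilities, all close to 50%.
win_probabilities = [0.55, 0.48, 0.52, 0.50, 0.47, 0.53, 0.51, 0.49, 0.54, 0.46]

def simulate_seat_totals(probs, n_sims=10_000):
    """Simulate each race independently; return the seat total from each simulation."""
    return [sum(1 for p in probs if random.random() < p) for _ in range(n_sims)]

totals = sorted(simulate_seat_totals(win_probabilities))
n = len(totals)
median_seats = totals[n // 2]
low, high = totals[n // 20], totals[n - 1 - n // 20]  # rough 90% interval

print(f"Median seats won: {median_seats} of {len(win_probabilities)}")
print(f"Roughly 90% of simulations fall between {low} and {high} seats")
```

When every probability sits this close to 50%, the simulated totals spread widely around the median, which is why such a forecast tells you little more than a coin flip would in those races.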

All in all, a good night for the pollsters. But I don’t want to rush to the conclusion that the polls are now fixed because, in reality, the pollsters didn’t change much in their methods from 2020. I hope the industry will study what went right, as we tend to re-examine our methods when they fail, not when they succeed.

The value of looking at data from more than one perspective

About 20 years ago, I flew to the Midwest to present the findings from an extensive project. My audience included the head of marketing, my direct research client, and the firm’s CEO. We constructed an insightful study that profiled the market my client played in, their position, and their competitive strengths and weaknesses.

I spent about an hour presenting the study findings and fielding questions. It went great. It was one of those meetings where I knew our work would affect this company, and the CEO seemed to buy into taking action based on our recommendations.

Then, with about five minutes to go in the meeting, I asked if there were any follow-up analyses they would like us to do. The CEO said, “Yes, there is one thing …”

He then instructed me to take a couple of weeks to do a new analysis and then to fly back out and present it to him. I was at first taken aback, as I thought the project was over, and I was ready to declare victory and move on to other things.

The analysis he requested? He told me to imagine that his largest competitor would call me tomorrow. I could use everything I knew about my client and the information gathered in our study. If this competitor called me, what would I tell them about how to position against my client? What are the implications of our research from his competitor’s point of view?

This was a brilliant idea. I have always believed that although research can be quite insightful, what matters most is what clients do with our data. This CEO knew full well that his competitor probably had its own research firm doing a project similar to the one I had just presented. He wanted to view the world from his competitor’s perspective.

It worked. I returned in a couple of weeks and gave a role-play presentation in which I treated my audience as if they were their own largest competitor. This led to a game-theory discussion of how their competition would likely react to initiatives they were considering, how they could address their weaknesses, and where their strengths mattered.

Since then, I have proposed similar analyses to many clients. I have been surprised at how few have taken me up on the offer. So, late in presentations, I often slip in a few slides showing what I would tell their competition based on the study findings if I worked for them.

If I were a client-side researcher, I’d ask my researchers to do this regularly. It forces us to do a better job of checking our biases because, like it or not, we want our data to show our clients are succeeding. We know how much work they put in, and it isn’t easy to tell them where their weaknesses are. Looking at the data from another angle gives us the space to be more agnostic in our conclusions, provides better insight to clients, and makes us less likely to tell them only what they want to hear.

The request from this CEO made me a better, more empathetic researcher. We worked with his firm for about 15 years; he recently retired. He will always be in my “client hall of fame” because of his willingness to view research results objectively and his insistence that we consider all perspectives.

Clients hire us so they can learn from us, but often they don’t realize how much we learn from them.


