Pre-Election Polling and Baseball Have a Lot in Common

The goal of a pre-election poll is to predict which candidate will win an election and by how much. Pollsters work towards this goal by 1) obtaining a representative sample of respondents, 2) determining which candidate a respondent will vote for, and 3) predicting the chances each respondent will take the time to vote.

All three of these steps involve error. It is the first, obtaining a representative sample of respondents, that has changed the most in the past decade or so.

It is the third step that separates pre-election polling from other forms of polling and survey research. Statisticians must predict how likely each person they interview will be to vote. This is called their “Likely Voter Model.”

As I state in POLL-ARIZED, this is perhaps the most subjective part of the polling process. The biggest irony in polling is that it becomes an art when we hand the data to the scientists (methodologists) to apply a Likely Voter Model.

It is challenging to understand what pollsters do in their Likely Voter Models and perhaps even more challenging to explain.  

An example from baseball might provide a sense of what pollsters are trying to do with these models.

Suppose Mike Trout (arguably the most underappreciated sports megastar in history) is stepping up to the plate. Your job is to predict Trout’s chances of getting a hit. What is your best guess?

You could take a random guess between 0 and 100%. But, since guessing a whole percentage at random would give you only about a 1% chance of being correct, there must be a better way.

A helpful approach comes from a subset of statistical theory called Bayesian statistics. This theory says we can start with a baseline of Trout’s hit probability based on past data.

For instance, we might see that so far this year, the overall major league batting average is .242. So, we might guess that Trout’s probability of getting a hit is 24%.

This is better than a random guess. But, we can do better, as Mike Trout is no ordinary hitter.

We might notice there is even better information out there. Year-to-date, Trout is batting .291. So, our guess for his chances might be 29%. Even better.

Or, we might see that Trout’s lifetime average is .301 and that he hit .333 last year. Since we believe in a concept called regression to the mean, that would lead us to think that his batting average should be better for the rest of the season than it is currently. So, we revise our estimate upward to 31%.

There is still more information we can use. The opposing pitcher is Justin Verlander. Verlander is a rare pitcher who has owned Trout in the past – Trout’s average is just .116 against Verlander. This causes us to revise our estimate downward a bit. Perhaps we take it to about 25%.

We can find even more information. The bases are loaded. Trout is a clutch hitter, and his career average with men on base is about 10 points higher than when the bases are empty. So, we move our estimate back up to about 28%.

But it is August. Trout has a history of batting well early and late in the season, but he tends to cool off during the dog days of summer. So, we decide to end this and settle on a probability of 25%.
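The chain of adjustments above can be sketched in a few lines of code. This is only an illustration of the informal updating described here, not a formal Bayesian calculation; the size of each adjustment is a judgment call, and the increments below were simply chosen to match the percentages in the example.

```python
# Illustrative sketch of the sequential adjustments described above.
# The adjustment sizes are judgment calls, not a computed posterior.

def adjust(prob, delta, reason):
    prob = prob + delta
    print(f"{reason}: {prob:.0%}")
    return prob

p = 0.242  # start from the league-wide batting average (the "prior")
print(f"League average baseline: {p:.0%}")

p = adjust(p, +0.05, "Trout's year-to-date average (.291)")
p = adjust(p, +0.02, "Lifetime .301 / regression to the mean")
p = adjust(p, -0.06, "Facing Verlander (.116 career vs. him)")
p = adjust(p, +0.03, "Bases loaded, strong with men on base")
p = adjust(p, -0.03, "August slump tendency")
```

Notice that after five adjustments, the estimate lands at 25%, barely above the 24% league baseline we started with.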

This sort of analysis could go on forever. Every bit of information we gather about Trout can conceivably help make a better prediction for his chances. Is it raining? What is the score? What did he have for breakfast? Is he in his home ballpark? Did he shave this morning? How has Verlander pitched so far in this game? What is his pitch count?

There are pre-election polling analogies in this baseball example, particularly if you follow the probabilistic election models created by organizations like FiveThirtyEight and The Economist.

Just as we might use Trout’s lifetime average as our “prior” probability, these models will start with macro variables for their election predictions. They will look at the past implications of things like incumbency, approval ratings, past turnout, and economic indicators like inflation, unemployment, etc. In theory, these can adjust our assumptions of who will win the election before we even include polling data.

Of course, using Trout’s lifetime average or these macro variables in polling will only be helpful to the extent that the future behaves like the past. And therein lies the rub – overreliance on past experience makes these models inaccurate during dynamic times.

Part of why pollsters missed badly in 2020 is that unique things were going on – a global pandemic, changed methods of voting, increased turnout, etc. In baseball, perhaps this is a year with a juiced baseball, or Trout is dealing with an injury.

The point is that while unprecedented things are unpredictable, they happen with predictable regularity. There is always something unique about an election cycle or a Mike Trout at bat.

The most common question I am getting from readers of POLL-ARIZED is, “will the pollsters get it right in 2024?” My answer is that since pollsters apply past assumptions in their models, they will get it right to the extent that the world of 2024 looks like the world of 2020, and I would not put my own money on that.

I make a point in POLL-ARIZED that pollsters’ models have become too complex. While, in theory, the predictive value of a model never gets worse as you add more variables, in practice this complexity has made the models uninterpretable. Pollsters include so many variables in their likely voter models that many of their adjustments cancel each other out. They are left with a model with no discernible underlying theory.

If you look closely, we started with a probability of 24% for Trout. Even after looking at a lot of other information and making reasonable adjustments, we still ended up with a prediction of 25%. The election models are the same way. They include so many variables that they can cancel out each other’s effects and end up with a prediction that looks much like the raw data did before the methodologists applied their wizardry.

This effort would be better spent improving the input to the models: investing in the trust needed to increase the response rates we get to our surveys and polls. Improving the quality of our data input will do more for the predictive quality of the polls than coming up with more complicated ways to weight the data.

Of course, in the end, one candidate wins, and the other loses, and Mike Trout either gets a hit, or he doesn’t, so the actual probability moves to 0% or 100%. Trout cannot get 25% of a hit, and a candidate cannot win 79% of an election.

As I write this, I looked up the last time Trout faced Verlander. It turns out Verlander struck him out!

Things That Surprised Me When Writing a Book

I recently published a book outlining the challenges election pollsters face and the implications of those challenges for survey researchers.

This book was improbable. I am neither an author nor a pollster, yet I wrote a book on polling. It is the result of a curiosity that got away from me.

Because I am a new author, I thought it might be interesting to list unexpected things that happened along the way. I had a lot of surprises:

  • How quickly I wrote the first draft. Many authors toil for years on a manuscript. The bulk of POLL-ARIZED was composed in about three weeks, working a couple of hours daily. The book covers topics central to my career, and it was a matter of getting my thoughts typed and organized. I completed the entire first draft before telling my wife I had started it.
  • How long it took to turn that first draft into a final draft. After I had all my thoughts organized, I felt a need to review everything I could find on the topic. I read about 20 books on polling and dozens of academic papers, listened to many hours of podcasts, interviewed polling experts, and spent weeks researching online. I convinced a few fellow researchers to read the draft and incorporated their feedback. The result was a refinement of my initial draft and arguments and the inclusion of other material. This took almost a year!
  • How long it took to get the book from a final draft until it was published. I thought I was done at this point. Instead, it took another five months to get it in shape to publish – to select a title, get it edited, commission cover art, set it up on Amazon and other outlets, etc. I used Scribe Media, which was expensive, but this process would have taken me a year or more if I had done it without them.
  • That going for a long walk is the most productive writing tactic ever. Every good idea in the book came to me when I trekked in nature. Little of value came to me when sitting in front of a computer. I would go for long hikes, work out arguments in my head, and brew a strong cup of coffee. For some reason, ideas flowed from my caffeinated state of mind.
  • That writing a book is not a way to make money. I suspected this going in, but it became clear early on that this would be a money-losing project. POLL-ARIZED has exceeded my sales expectations, but it cost more to publish than it will ever make back in royalties. I suspect publishing this book will pay back in our research work, as it establishes credibility for us and may lead to some projects.
  • Marketing a book is as challenging as writing one. I guide large organizations on their marketing strategy, yet I found I didn’t have the first clue about how to promote this book. I would estimate that the top 10% of non-fiction books make up 90% of the sales, and the other 90% of books are fighting for the remaining 10%. Because the commission on a book is a few dollars per copy, it proved challenging to find marketing tactics that pay back. For instance, I thought about doing sponsored ads on LinkedIn. It turns out that the per-click charge for those ads was more than the book’s list price. The best money I spent to promote the book was sponsored Amazon searches. But even those failed to break even.
  • Deciding to keep the book at a low price proved wise. So many people told me I was nuts to hold the eBook at 99 cents for so long or keep the paperback affordable. I did this because it was more important to me to get as many people to read it as possible than to generate revenue. Plus, a few college professors have been interested in adopting the book for their survey research courses. I have been studying the impact of book prices on college students for about 20 years, and I thought it was right not to contribute to the problem.
  • BookBub is incredible if you are lucky enough to be selected. BookBub is a community of voracious readers. I highly recommend joining if you read a lot. Once a week, they email their community about new releases they have vetted and like. They curate a handful of titles out of thousands of submissions. I was fortunate that my book got selected. Some authors angle for a BookBub deal for years and never get chosen. The sales volume for POLL-ARIZED went up by a factor of 10 in one day after the promotion ran.
  • Most conferences and some podcasts are “pay to play.” Not all of them, but many conferences and podcasts will not support you unless you agree to a sponsorship deal. When you see a research supplier speaking at an event or hear them on a podcast, they may have paid the hosts something for the privilege. This bothers me. I understand why they do this, as they need financial support. Yet, I find it disingenuous that they do not disclose this – it is on the edge of being unethical. It harms their product. If a guest has to pay to give a conference presentation or talk on a podcast, it pressures them to promote their business rather than have an honest discussion of the issues. I will never view these events or podcasts the same. (If you see me at an event or hear me on a podcast, be assured that I did not pay anything to do so.)
  • That the industry associations didn’t want to give the book attention. If you have read POLL-ARIZED, you will know that it is critical (I believe appropriately and constructively) of the polling and survey research fields. The three most important associations rejected my proposals to present and discuss the book at their events. This floored me, as I cannot think of any topics more essential to this industry’s future than those I raise in the book. Even insights professionals who have read the book and disagree with my arguments have told me that I am bringing up points that merit discussion. This cold shoulder from the associations made me feel better about writing that “this is an industry that doesn’t seem poised to fix itself.”
  • That clients have loved the book. The most heartwarming part of the process is that it has reconnected me with former colleagues and clients from a long research career. Everyone I have spoken to who is on the client-side of the survey research field has appreciated the book. Many clients have bought it for their entire staff. I have had client-side research directors I have never worked with tell me they loved the book.
  • That some of my fellow suppliers want to kill me. The book lays our industry bare, and not everyone is happy about that. I had a competitor ask me, “Why are you telling clients to ask us what our response rates are?” I stand behind that!
  • How much I learned along the way. There is something about getting your thoughts on paper that creates a lot of learning. There is a saying that the best way to learn a subject is to teach it. I would add that trying to write a book about something can teach you what you don’t know. That was a thrill for me. But then again, I was the type of person who would attend lectures for classes I wasn’t even taking while in college. I started writing this book to educate myself, and it has been a great success in that sense.
  • How tough it was for me to decide to publish it. There was not a single point in the process when I did not consider not publishing this book. I found I wanted to write it a lot more than publish it. I suffered from typical author fears that it wouldn’t be good enough, that my peers would find my arguments weak, or that it would bring unwanted attention to me rather than the issues the book presents. I don’t regret publishing it, but it would never have happened without encouragement from the few people who read it in advance.
  • The respect I gained for non-fiction authors. I have always been a big reader. I now realize how much work goes into this process, with no guarantee of success. I have always told people that long-form journalism is the profession I respect the most. Add “non-fiction” writers to that now!

Almost everyone who has contacted me about the book has asked me if I will write another one. If I do, it will likely be on a different topic. If I learned anything, this process requires selecting an issue you care about passionately. Journalists are people who can write good books about almost anything. The rest of us mortals must choose a topic we are super interested in, or our books will be awful.

I’ve got a few ideas dancing around in my head, so who knows, maybe you’ll see another book in the future.

For now, it is time to get back to concentrating on our research business!

The Insight that Insights Technology is Missing

The market research insights industry has long been characterized by a resistance to change. This likely results from the academic nature of what we do. We don’t like to adopt new ways of doing things until they have been proven and studied.

I would posit that the insights industry has not seen much change since the transition from telephone to online research occurred in the early 2000s. And even that transition created discord within the industry, with many traditional firms resistant to moving on from telephone studies because online data collection had not been thoroughly studied and vetted.

In the past few years, the insights industry has seen an influx of capital, mostly from private equity and venture capital firms. The conditions for this cash infusion have been ripe: a strong and growing demand for insights, a conservative industry that is slow to adapt, and new technologies arising that automate many parts of a research project have all come together simultaneously.

Investing organizations see this enormous business opportunity. Research revenues are growing, and new technologies are lowering costs and shortening project timeframes. It is a combustible business situation that needs a capital accelerant.

Old school researchers, such as myself, are becoming nervous. We worry that automation will harm our businesses and that the trend toward DIY projects will result in poor-quality studies. Technology is threatening the business models under which we operate.

The trends toward investment in automation in the insights industry are clear. Insights professionals need to embrace this and not fight it.

However, although the movement toward automation will result in faster and cheaper studies, this investment ignores the threats that declining data quality creates. In the long run, this automation will accelerate the decline in data quality rather than improve it.

It is great that we are finding ways to automate time-consuming research tasks, such as questionnaire authoring, sampling, weighting, and reporting. This frees up researchers to concentrate on drawing insights out of the data. But we can apply all the automation in the world to the process; if we do not do something about data quality, it will not increase the value clients receive.

I argue in POLL-ARIZED that the elephant in the research room is that very few people want to take our surveys anymore. When I began in this industry, I routinely fielded telephone projects with 70-80% response rates. Currently, telephone and online response rates are between 3% and 4% for most projects.

Response rates are not everything. You can make a compelling argument that they do not matter at all: as long as the 3-4% who respond are representative, there is no problem. I would rather have a representative 3% answer a study than a biased 50%.

But, the fundamental problem is that this 3-4% is not representative. Only about 10% of the US population is currently willing to take surveys. What is happening is that this same 10% is being surveyed repeatedly. In the most recent project Crux fielded, respondents had taken an average of 8 surveys in the past two weeks. So, we have about 10% of the population taking surveys every other day, and our challenge is to make them represent the rest of the population.
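The weighting challenge this creates can be made concrete with a toy example. Below is a minimal sketch of cell weighting, the simplest form of post-stratification: each respondent is weighted by their group's population share divided by its sample share. The groups and percentages here are hypothetical, chosen only to illustrate the mechanic.

```python
# Minimal sketch of cell weighting: each respondent gets a weight equal
# to their group's population share divided by its share of the sample.
# Groups and percentages are hypothetical, for illustration only.

population_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
sample_share     = {"18-34": 0.12, "35-54": 0.28, "55+": 0.60}  # skewed sample

weights = {g: population_share[g] / sample_share[g] for g in population_share}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

Note the 2.5 weight on the underrepresented group: whoever does show up in that cell counts two and a half times over, which is exactly why weighting cannot rescue a sample drawn from an unrepresentative 10% of the population.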

Automate all you want, but the data that are the backbone of the insights we are producing quickly and cheaply are of historically low quality.

The new investment flooding into research technology will contribute to this problem. More studies will be done that are poorly designed, with long, tortuous questionnaires. Many more surveys will be conducted, fewer people will be willing to take them, and response rates will continue to fall.

There are plenty of methodologists working on these problems. But, for the most part, they are working on new ways to weight the data we can obtain rather than on ways to compel more response. They are improving data quality, but only slightly, and the insights field continues to ignore the most fundamental problem we have: people do not want to take our surveys.

For the long-term health of our field, that is where the investment should go.

In POLL-ARIZED, I list ten potential solutions to this problem. I am not optimistic that any of them will be able to stem the trend toward poor data quality. But, I am continually frustrated that our industry has not come together to work towards expanding respondent trust and the base of people willing to take part in our projects.

The trend towards research technology and automation is inevitable. It will be profitable. But, unless we address data quality issues, it will ultimately hasten the decline of this field.

POLL-ARIZED available on May 10

I’m excited to announce that my book, POLL-ARIZED, will be available on May 10.
 
After the last two presidential elections, I was fearful my clients would ask a question I didn’t know how to answer: “If pollsters can’t predict something as simple as an election, why should I believe my market research surveys are accurate?”
 
POLL-ARIZED results from a year-long rabbit hole that question led me down! In the process, I learned a lot about why polls matter, how today’s pollsters are struggling, and what the insights industry should do to improve data quality.
 
I am looking for a few more people to read an advance copy of the book and write an Amazon review on May 10. If you are interested, please send me a message at poll-arized@cruxresearch.com.

Questions You Are Not Asking Your Market Research Supplier That You Should Be Asking

It is no secret that providing representative samples for market research projects has become challenging. While clients are always focused on obtaining respondents quickly and efficiently, it is also important that they are concerned with the quality of their data. The reality is that quality is slipping.

While there are many causes of this, one that is not discussed much is that clients rarely ask their suppliers the tough questions they should. Clients are not putting pressure on suppliers to focus on data quality. Since clients ultimately control the purse strings of projects, suppliers will only improve quality if clients demand it.

I can often tell if I have an astute client by their questions when we are designing studies. Newer or inexperienced clients tend to start by talking about the questionnaire topics. Experienced clients tend to start by talking about the sample and its representativeness.

Below is a list of a few questions that I believe clients should be asking their suppliers on every study. The answers to these are not always easy to come by, but as a client, you want to see that your supplier has contemplated these questions and pays close attention to the issues they highlight.

For each, I have also provided a correct or acceptable answer to expect from your supplier.

  • What was the response rate to my study? While it was once commonplace to report response rates, suppliers now try to dodge the issue. Most data quality issues stem from low response rates. Correct answer: For most studies, under 5%. Unless the survey is being fielded among a highly engaged audience, such as your customers, you should be suspicious of any answer over 15%. “I don’t know” is an unacceptable answer. Suppliers may also try to convince you that response rates do not matter, even though nearly every data quality issue we experience stems from inadequate response to our surveys.
  • How many respondents did you remove in fielding for quality issues? This is an emerging issue. The number of bad-quality respondents in studies has grown substantially in just the last few years. Correct answer: at least 10%, but preferably between 25% and 40%. If your supplier says 0%, you should question whether they are properly paying attention to data quality issues. I would guide you to find a different supplier if they cannot describe a process to remove poor-quality respondents. There is no standard way of doing this, but each supplier should have an established process.
  • How were my respondents sourced? This is an essential question that is seldom asked unless the client is an academic researcher. It is a tricky question to answer. Correct answer: This is so complicated that I have difficulty providing a cogent response to our clients. The hope is that your supplier has at least some idea of how the panel companies get their respondents and knows whom to go to if a detailed explanation is needed. They should connect you with someone who can explain this in detail.
  • What are you doing to protect against bots? Market research samples are subject to the ugly things that happen online – hackers, bots, cheaters, etc. Correct answer: Something proactive. They might respond that they are working with the panel companies to prevent bots or a third-party firm to address this. If they are not doing anything or don’t seem to know that bots are a big issue for surveys, you should be concerned.
  • What is in place to ensure that my respondents are not being used for competitors or vice-versa? Clients often should care that the people answering their surveys have not recently completed another project in their product category. I have had cases where two suppliers working for the same client (one being us) used the same sample source and polluted the sample base for both projects because we did not know the other study was fielding. Correct answer: Something, if this matters to you. If your research covers brand or advertising awareness, you should account for this. If you are commissioning work with several suppliers, this takes considerable coordination.
  • Did you run simulated data through my survey before fielding? This is an essential, behind-the-scenes step that all competent suppliers take. Running thousands of simulated surveys through the questionnaire tests the survey logic and ensures that the right people get to the right questions. While it doesn’t prevent all errors, it catches many of them. Correct answer: Yes. If the supplier does not know what simulated data is, it is time to consider a new supplier.
  • How many days will my study be in the field? Many data quality errors stem from conducting studies too quickly. Correct answer: It varies, but this should be 10-21 days for a typical project. If your study requires difficult-to-find respondents, this could be 3-4 weeks. If the data collection period is shorter than ten days, data quality errors WILL arise, so be sure you understand the tradeoffs for speed. Don’t insist on field speed unless you need to.
  • Can I have a copy of the panel company’s answers to the ESOMAR questions? ESOMAR has put out a list of questions to help buyers of online samples. Every sample supplier worth using will have created a document that answers these questions. Correct answer: Yes. Do not work with a company that has not put together a document answering these questions, as all the good ones have. However, after reading this document, don’t expect to understand how your respondents are being sourced.
  • How do you handle requests down the road when the study is over? It is a longstanding pet peeve of most clients that suppliers charge for basic customer support after the project is over. Make sure you have set expectations properly upfront and put them into the contract. Correct answer: Support should be available indefinitely. Many suppliers will provide support for three or six months post-study and will charge for it. I have never understood this, as I am flattered when a client calls me to discuss a study that was done years ago; it means our study is continuing to make an impact. We do not charge for this follow-up unless the request requires so much time that we have to.
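To make the simulated-data idea above concrete, here is a sketch of what such a pre-field test might look like: random answers are pushed through a toy questionnaire's skip logic, and an assertion confirms that the routing never shows a question to someone who should not see it. The questions and routing rules here are hypothetical; real survey platforms implement this at much larger scale.

```python
import random

# Sketch of "simulated data" testing: push random respondents through a
# toy questionnaire's skip logic and confirm the routing never sends
# anyone to a question they should not see. Questions are hypothetical.

def simulate_respondent(rng):
    answers = {}
    answers["owns_car"] = rng.choice(["yes", "no"])
    if answers["owns_car"] == "yes":
        answers["car_brand"] = rng.choice(["A", "B", "C"])
    answers["age"] = rng.randint(18, 90)
    return answers

rng = random.Random(42)
for _ in range(10_000):
    a = simulate_respondent(rng)
    # Routing check: only car owners should ever answer the brand question.
    assert ("car_brand" in a) == (a["owns_car"] == "yes")
print("All simulated respondents routed correctly.")
```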

There are probably many other questions clients should be asking suppliers. Clients need to get tougher on insisting on data quality. It is slipping, and suppliers are not investing enough to improve response rates and develop trust with respondents. If clients pressure them, the economic incentives will be there to create better techniques to obtain quality research data.

Let’s Appreciate Statisticians Who Make Data Understandable

Statistical analyses are amazing, underrated tools. All scientific fields depend on discoveries in statistics to make inferences and draw conclusions. Without statistics, advances in engineering, medicine, and science that have greatly improved the quality of life would not have been possible. Statistics is the Rodney Dangerfield of academic subjects – it never gets the respect it deserves.

Statistics is central to market research and polling. We use statistics to describe our findings and understand the relationships between variables in our data sets. Statistics are the most important tools we have as researchers.

However, we often misuse these tools. I firmly believe that pollsters and market researchers overdo it with statistics. Basic statistical analyses are easy to understand, but complicated ones are not. Researchers like to get into complex statistics because it lends an air of expertise to what we do.

Unfortunately, most sophisticated techniques are impossible to convey to “normal” people who may not have a statistical background, and this tends to describe the decision-makers we support.

I learned long ago that when working with a dataset, any result that will be meaningful will likely be uncovered by using simple descriptive statistics and cross-tabulations. Multivariate techniques can tease out more subtle relationships in the data. Still, the clients (primarily marketers) we work with are not looking for subtleties – they want some conclusions that leap off the page from the data.

If a result is so subtle that it needs complicated statistics to find, it is likely not a large enough result to be acted upon by a client.

Because of this, we tend to use multivariate techniques to confirm what we see with more straightforward methods. Not always – as there are certainly times when the client objectives call for sophisticated techniques. But, as researchers, our default should be to use the most straightforward designs possible.

I always admire researchers who make complicated things understandable. That should be the goal of statistical analyses. George Terhanian of Electric Insights has developed a way to use sophisticated statistical techniques to answer some of the most fundamental questions a marketer will ask.

In his article “Hit? Stand? Double? Master ‘likely effects’ to make the right call,” George describes his revolutionary process. It is sophisticated behind the scenes, but I like the simplicity of the questions it can address.

He has created a simulation technique that makes sense of complicated data sets. You may measure hundreds of things on a survey and have an excellent profile of the attitudes and behaviors of your customer base. But, where should you focus your investments? This technique demonstrates the likely effects of changes.

As marketers, we cannot directly increase sales. But we can establish and influence attitudes and behaviors that result in sales. Our problem is often to identify which of these attitudes and behaviors to address.

For instance, if I can convince my customer base that my product is environmentally responsible, how many of them can I count on to buy more of my product? The type of simulator described in this article can answer this question, and as a marketer, I can then weigh if the investment necessary is worth the probable payoff.

George created a simulator on some data from a recent Crux Poll. Our poll showed that 17% of Americans trust pollsters. George’s analysis shows that trust in pollsters is directly related to their performance in predicting elections.

Modeling the Crux Poll data showed that if all Americans “strongly agreed” that presidential election polls do a good job of predicting who will win, trust in pollsters/polling organizations would increase by 44 million adults. If Americans feel “extremely confident” that pollsters will accurately predict the 2024 election, trust in pollsters will increase by an additional 40 million adults.
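The arithmetic behind claims of this kind is worth making explicit: a modeled change in a population share, multiplied by the adult population, yields a count of adults. In the sketch below, the roughly 258 million adult-population figure and the 17%-to-34% shift are my own illustrative assumptions, not numbers taken from George's model.

```python
# Back-of-the-envelope arithmetic behind "X million more adults" claims:
# a modeled change in the share holding a view, times the adult population.
# The ~258M US adult population figure is an assumption for illustration.

US_ADULTS = 258_000_000

def adults_affected(baseline_share, modeled_share, population=US_ADULTS):
    """Number of adults implied by a shift in a population share."""
    return (modeled_share - baseline_share) * population

# e.g., trust rising from 17% of adults to 34% of adults:
print(f"{adults_affected(0.17, 0.34) / 1e6:.0f} million more adults")
```

Quantifying a result this way is what lets it plug directly into a financial plan.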

If we are worried that pollsters are not trusted, this suggests that improving the quality of our predictions should address the issue.

Putting research findings in these sorts of terms is what gets our clients’ attention. 

Marketers need this type of quantification because it can plug right into financial plans. Researchers often hear that the reports we provide are not “actionable” enough. There is not much more actionable than showing how many customers would be expected to change their behavior if we successfully invest in a marketing campaign to change an attitude.

Successful marketing is all about putting the probabilities in your favor. Nothing is certain, but as a marketer, your job is to decide where best to place your resources (money and time). This type of modeling is a step in the right direction for market researchers.

Associations and Trade Groups for Market Researchers and Pollsters

The market research and polling fields have some excellent trade associations. These organizations help lobby for the industry, conduct studies on issues relating to research, host in-person events and networking opportunities, and post jobs in the market research field. They also host many excellent online seminars. These organizations establish standards for research projects and codes of conduct for their memberships.

Below is a listing of some of the most influential trade groups for market researchers and pollsters. At a minimum, I recommend that all researchers get on the email lists of these organizations, as that allows you to see what events and seminars they have coming up. Many of their online seminars are free.

  • ESOMAR. ESOMAR is perhaps the most “worldwide” of all the research trade associations and probably the biggest. ESOMAR was established in 1948 and is headquartered in Europe (Amsterdam). With 40,000 members across 130 countries, it is an influential organization.
  • Insights Association. The Insights Association is U.S.-based. It was created in a merger of two longstanding associations: CASRO and MRA. This organization runs many events and has a certification program for market researchers.
  • Advertising Research Foundation (ARF). ARF concentrates on advertising and media research. ARF puts on a well-known trade show/conference each year and has an important awards program for advertising research, known as the Ogilvys. The ARF is likely the most essential trade organization to be a part of if you work in an ad agency or the media or focus on advertising research.
  • Market Research Society. MRS is the U.K. analog to the Insights Association. This organization reaches beyond the U.K. and has some great online courses.
  • The American Association for Public Opinion Research (AAPOR). AAPOR is an influential trade group regarding public opinion polling and pre-election polling. They win the award for longevity, as they have been around since 1947. I consider AAPOR to be the most “academic” of the trade groups, as in addition to researchers and clients, they have quite a few college professors as members. They publish Public Opinion Quarterly, a key academic journal for polling and survey research. AAPOR is a small organization with a large impact.
  • The Research Society. The Research Society is Australia’s key trade association for market researchers.

Many countries have their own trade associations, and there are some associations specific to particular industries, such as pharmaceuticals and health care.

Below are other types of organizations that are not trade associations but are of interest to survey researchers.

  • The Roper Center for Public Opinion Research. The Roper Center is an archive of past polling data, mainly from the U.S. It is currently housed at Cornell University. It can be fascinating to use it to see what American opinion looked like decades ago.
  • The Archive of Market and Social Research (AMSR). AMSR is likely of most interest to U.K. researchers. It is an archive of U.K. history through the lens of polls and market research studies that have been collected.
  • The University of Georgia. The University of Georgia has a leading academic program that trains future market researchers. This university is quite involved in the market research industry and sponsors many exciting seminars. Other universities have market research programs, but the University of Georgia is by far the most tightly connected to the industry.
  • The Burke Institute. The Burke Institute offers many seminars and courses of interest to market research. Many organizations encourage their staff members to take Burke Institute courses.
  • Women in Research (WiRe). WiRe is a group that advances the voice of women in market research. This organization has gained significantly in prominence over the past few years and is doing great work.
  • Green Book. Green Book is a directory of market research firms. Back “in the day,” the Green Book was the printed green directory used by most researchers to find focus group facilities. This organization hosts message boards and conducts industry studies and seminars.
  • Quirk’s. Quirk’s contains interesting articles and runs webinars and conferences.

CRUX POLL SHOWS THAT JUST 17% OF AMERICANS TRUST POLLSTERS

ROCHESTER, NY – OCTOBER 20, 2021 – Polling results released today by Crux Research indicate that just 17% of U.S. adults have “very high trust” or “high trust” in pollsters/polling organizations.

Just 21% of U.S. adults felt that polling organizations did an “excellent” or “good” job in predicting the 2020 U.S. Presidential election. 40% of adults who were polled in the 2020 election felt the poll they responded to was biased.

Trust in pollsters is higher among Democrats than it is among Republicans and Independents. Pollster trust is highest among adults under 30 years old and lowest among those over 50. This variability can contribute to the challenges pollsters face, as cooperation with polls may also vary among these groups.

It has been a difficult stretch of time for pollsters. 51% of Americans feel that Presidential election polls are getting less accurate over time. And just 12% are confident that polling organizations will correctly predict the next President in 2024.

The poll results show that there are trusted institutions and professions in America. Nurses are the most trusted profession, followed by medical doctors and pharmacists. Telemarketers, car salespersons, social media companies, Members of Congress, and advertising agencies are the least trusted professions.

###

Methodology

This poll was conducted online between October 6 and October 17, 2021. The sample size was 1,198 U.S. adults (aged 18 and over). Quota sampling and weighting were employed to ensure that respondent proportions for age group, sex, race/ethnicity, education, and region matched their actual proportions in the population.   
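The quota-and-weighting step described above can be sketched as a simple post-stratification calculation: each respondent in a demographic cell receives a weight equal to that cell's population share divided by its sample share, so weighted totals match the population. The cells and shares below are illustrative, not the actual poll's:

```python
# Minimal post-stratification weighting sketch (illustrative shares,
# not the actual Crux Poll cells).
pop_share = {"18-29": 0.20, "30-49": 0.33, "50+": 0.47}     # assumed population shares
sample_share = {"18-29": 0.30, "30-49": 0.35, "50+": 0.35}  # shares who responded

# weight = population share / sample share
weights = {cell: pop_share[cell] / sample_share[cell] for cell in pop_share}

# Underrepresented groups get weights above 1; overrepresented groups below 1.
for cell, w in weights.items():
    print(f"{cell}: weight {w:.2f}")
```

After weighting, the weighted sample shares reproduce the population shares exactly — which is the point of the exercise.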

This poll did not have a sponsor and was conducted and funded by Crux Research, an independent market research firm that is not in any way associated with political parties, candidates, or the media.

All surveys and polls are subject to many sources of error. The term “margin of error” is misleading for online polls, which are not based on a probability sample, a requirement for margin-of-error calculations. If this study had used probability sampling, the margin of error would be +/-3%.
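As the note says, this figure strictly applies only to probability samples, but as a worked check under that assumption, the +/-3% can be reproduced with the standard worst-case formula for a simple random sample:

```python
# Worst-case margin of error for a simple random sample:
# MOE = z * sqrt(p * (1 - p) / n), with p = 0.5 maximizing the variance.
import math

n = 1198  # sample size from the methodology section
z = 1.96  # z-score for 95% confidence
p = 0.5   # worst-case proportion

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/-{moe:.1%}")  # rounds up to the reported +/-3%
```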

About Crux Research Inc.

Crux Research partners with clients to develop winning products and services, build powerful brands, create engaging marketing strategies, enhance customer satisfaction and loyalty, improve products and services, and get the most out of their advertising.

Using quantitative and qualitative methods, Crux connects organizations with their customers in a wide range of industries, including health care, education, consumer goods, financial services, media and advertising, automotive, technology, retail, business-to-business, and non-profits.

Crux connects decision makers with customers, uses data to inspire new thinking, and assures clients they are being served by experienced, senior-level researchers who set the standard for customer service from a survey research and polling consultant. To learn more about Crux Research, visit www.cruxresearch.com.
