Archive for the 'Methodology' Category



A Math Myth?


I just finished reading The Math Myth: And Other STEM Delusions by Andrew Hacker. I found the book to be so provocative and interesting that it merits the first ever book review on this blog.

The central thesis of the book is that in the US, we (meaning policy makers, educators, parents, and employers) have become obsessed with raising rigor and academic standards in math. This obsession has reached a point where we are convinced that our national security, international business competitiveness, and hegemony as an economic power ride on improving the math skills of all our high school and college graduates.

Hacker questions this national fixation. First, raising math standards has some serious costs. Not only has it caused significant disruption within schools and among educators and parents (ask any educator about the upheaval the Common Core has caused), but it has also cost significant money. Most importantly, Hacker makes a strong case that raising math standards has ensured that many students will be left behind and unprepared for the future.

Currently, about one in four high school students does not complete high school. Once enrolled in college, only a bit more than half of enrollees will graduate. While there are many reasons for these failures, Hacker points out that the chief ACADEMIC reason is math.

I think everyone can think of someone who struggled mightily in math. I personally took Calculus in high school and two further courses in college. I have often wondered why. It seemed to be more of a rite of passage than an academic pursuit with any realistic end in mind for me. It was certainly painful.

Math has humbled many a bright young person. I have a niece who was an outstanding high school student (an honors student, took multiple AP courses, etc.). She went to a reputable four-year college. In her first year there, she failed a required calculus course. It remains the only course in which she has ever gotten below a B. Her college-mandated math experience made her feel like a failure and reconsider whether she belonged in college. Fortunately, she had good support in place and succeeded in her second go-round with the course. Many others are not so lucky.

And to what end? My niece has ended up in a quantitative field and is succeeding nicely. Yet, I doubt she has ever had to calculate the area under a curve, take a derivative, or understand a differential equation.

The reality is that very few people do. Hacker, using Bureau of Labor Statistics data, estimates that about 5% of the US workforce currently uses math beyond basic arithmetic in their jobs. This means that only about 1 in 20 of our students will need basic algebra or anything beyond it in their employment. The other 95% will do just fine with the math that most people master by the end of 8th grade.

And, despite the focus on STEM education, Hacker uses BLS data to show that the number of engineering jobs in the US is projected to grow at a slower rate than the economy as a whole. In addition, despite claims by policy makers that there is a dearth of qualified engineers, real wages for engineers have been falling and not rising, implying that supply is exceeding demand.

Yet, our high school standards and college entry standards require a mastery of not just algebra, but also geometry and trigonometry.

Most two-year colleges have a math test that all incoming students must pass – regardless of the program of study they intend to follow. As anyone who has worked with community colleges can attest, remediation of math skills for incoming students is a major issue two-year institutions face. Hacker questions this. Why, for example, should a student intending to study cosmetology need to master algebra? When is the last time your haircutter needed to know how to factor a polynomial?

The problem is the unnecessary damage this requirement does to people's lives. Many aspiring cosmetologists won't pass this test, won't end up enrolling in the program, and will have to find new careers because they cannot get licensed. What interest does this serve?

Market research is a quantitative field. Perhaps not as quantitative as engineering or the sciences, but our field is focused on numbers and statistics and making sense of them. However, in about 30 years of working with researchers and hiring them, I have not once encountered a researcher who lacked the technical math background necessary to succeed. In fact, I'd say that most of the researchers I've known had mastered the math necessary for our field by the time they entered high school.

However, I have encountered many researchers who do not have the interpretive skills needed to draw insights from the data sets we gather. And, I’d say that MOST of the researchers I have encountered cannot write well and cannot communicate findings effectively to their clients.

Hacker calls these skills “numeracy” and advocates strongly for them. Numeracy skills are what the vast majority of our graduates truly need to master.  These are practical numerical skills, beyond the life skills that we are often concerned about (e.g. understanding the impact of debt, how compound interest works, how to establish a family budget).  Numeracy (which requires basic arithmetic skills) is making sense of the world by using numbers, and being able to critically understand the increasing amount of numerical data that we are exposed to.

Again, I have worked with researchers who have advanced skills in calculus and multivariate statistical methods, yet have few skills in numeracy. Can you look at some basic cross-tabs and tell a story? Can you be presented with a marketing situation and think of how research can gather the data to make a more informed decision? These skills, rather than advanced mathematical or statistical skills, are what are truly valued in our field. If you are in our field for long, you'll notice that the true stars of the field (and the people being paid the most) are rarely the math and statistics jedis – they tend to be the people who have mastered both numeracy and communication.

This isn't the first time our country has become obsessed with STEM achievement. I can think of three phases in the past century where we've become similarly single-minded about education. The first was the launch of Sputnik in 1957. This caused a near panic in the US that we were falling behind the Soviets, and our educational system changed significantly as a result. The second was the release of the Coleman Report in 1966. This report criticized the way schools are funded and, based on a massive study, concluded that spending additional money on education did not necessarily create greater achievement. It once again produced a near-panic that our schools were not keeping up, and many educational reforms were made. The third "shock" came in the form of A Nation at Risk, which was published during the Gen X era, in 1983. This governmental report basically stated that our nation's schools were failing. Panicked policy makers responded with reforms, perhaps the most important being that the federal government started taking on an activist role in education. We now have the "Common Core Era" – which, if you take a long view, can be seen as history repeating itself.

Throughout all of these shocks, the American economy thrived. While other economies have become more competitive, for some reason we have come to believe that if we can just get more graduates who understand differential equations, we'll somehow be able to embark on a second American century.

Many of the criticisms Hacker levels at math have parallels in other subjects. Yes, I am in a highly quantitative field, and I haven't had to know what a quadratic equation is since I was 16 years old. But I also haven't had to conjugate French verbs, analyze Shakespearean sonnets, write poetry, or know what Shays' Rebellion was all about. We study many things that don't end up being directly applicable to our careers or day-to-day lives. That is part of becoming a well-rounded person and an intelligent citizen. There is nothing wrong with learning for the sake of learning.

However, math is different. Failure to progress sufficiently in math prevents movement forward in our academic system – and prevents pursuit of formal education in fields that don't require these skills. We don't stop people from becoming welders, haircutters, or auto mechanics because they can't grasp the nuances of literature, can't speak a foreign language, or don't know US history. But if they don't know algebra, we don't let them enroll in these programs.

This is in no way a criticism of the need to encourage capable students to study advanced math. As we can all attest whenever we drive over a bridge, drive a car, use social media, or receive medical treatment, having incredible engineers is essential to our quality of life. We should all want the 5% of the workforce that needs advanced math skills to be as well trained as possible. Our future world depends on them. Fortunately, the academic world is set up for them and rewards them.

But, we do have to think of alternative educational paths for the significant number of young people who will, at some point, find math to be a stumbling block to their future.

I highly recommend reading this book. Even if you do not agree with its premise or conclusions, it is a good example of how we need to think critically about our public policy declarations and the unintended consequences they can cause.

If you don’t have the time or inclination to read the entire book, Hacker wrote an editorial for the NY Times that eventually spawned the book. It is linked below.

Is Algebra Necessary?

 

Asking about gender and sexual orientation on surveys

When composing questionnaires, there are moments when even the simplest of questions has to adjust to fit the times, and the questions we draft become catalysts for larger discussions. That has been the case with what was once the most basic of all questions – asking a respondent for their gender.

This is probably the most commonly asked question in the history of survey research. And it seems basic – we typically just ask:

  • Are you… male or female?

Or, if we are working with younger respondents, we ask:

  • Are you … a boy or a girl?

The question is almost never refused and I’ve never seen any research to suggest this is anything other than a highly reliable measure.

Simple, right?

But, we are in the midst of an important shift in social norms towards alternative gender classifications. Traditionally, meaning up until a couple of years ago, if we wanted to classify homosexual respondents we wouldn't come right out and ask, for fear that many respondents would refuse the question or find it offensive. Instead, we would tend to ask respondents to check off the causes they support from a list. If they chose "gay rights", we would then go ahead and ask if they were gay or straight. Perhaps this was too politically correct, but it was an effective way to classify respondents without giving offense.

We no longer ask it that way. We still ask if the respondent is male or female, but we follow up to ask if they are heterosexual, lesbian, gay, bisexual, transgender, etc.

We recently completed a study among 4-year college students where we posed this question.  Results were as follows:

  • Heterosexual = 81%
  • Bisexual = 8%
  • Lesbian = 3%
  • Gay = 2%
  • Transgender = 1%
  • Other = 2%
  • Refused to answer = 3%

First, it should be noted that the 3% who refused to answer is lower than the 4% who refused the race/ethnicity question on the same survey.  Conclusion:  asking today's college students about sexual orientation is less sensitive than asking them about their race/ethnicity.

Second, it is more important than ever to ask this question. These data show that nearly 1 in 5 college students do not identify as heterosexual (16%, or 19% counting those who declined to answer). Researchers need to start viewing these students as a segment, just as we do with age or race. This is the reality of the Millennial market:  they are more likely to self-identify as not being heterosexual and more likely to be accepting of alternative lifestyles. Failure to understand this group results in a failure to truly understand the generation.

We have had three different clients ask us if we should start asking this question of younger respondents – high school or middle school students. For now, we are advising against it unless the study has clear objectives that point to a need. Our reasoning is not that we feel the kids will find the question offensive, but that their parents and educators (on whom we often rely for permission to survey minors) might. We think that will change over time as well.

So, perhaps nothing is as simple as it seems.

Crux Research is Going to the Ogilvys!

Crux Research is excited to announce that our client, Truth Initiative, is a finalist for two David Ogilvy Awards. These awards are presented annually by the Advertising Research Foundation (ARF) to recognize excellence in advertising research. The Ogilvy Awards honor the creative use of research in the advertising development process by research firms, advertising agencies, and advertisers.

Truth Initiative is a longstanding client of Crux Research. Truth Initiative is America's largest non-profit public health organization dedicated to making tobacco use a thing of the past. Truth is a finalist in two Ogilvy categories.

For both of these campaigns, Crux Research worked closely with CommSight and Truth Initiative to test the effectiveness of the approaches and executions prior to launch and to track the efficacy of the campaigns once in market.

We are honored and proud to be a part of these campaigns, to have had the opportunity to work with Truth Initiative and CommSight, and most importantly, to have played a supporting role in Truth’s mission to make youth smoking a thing of the past.

The 2016 ARF David Ogilvy Awards Ceremony will be held March 15 in New York.  More information can be found on the Ogilvy Awards page.

How can you predict an election by interviewing only 400 people?

This might be the most common question researchers get at cocktail parties (to the extent that researchers go to cocktail parties). It is also a question too rarely asked among researchers themselves: how can we predict an election by talking to only 400 people?

The short answer is that we can't. We can never predict anything with 100% certainty from a research study or poll. The only way we could predict the election with 100% certainty would be to interview every person who will end up voting. Even then, since people might change their minds between the poll and the election, we couldn't say our prediction was 100% likely to come true.

To provide an example: if I flip a coin 100 times, my best estimate beforehand is that I will get "heads" 50 times. But it isn't 100% certain that the coin will land on heads exactly 50 times.

The reason it is hard to comprehend how we predict elections by talking to so few people is that our brains aren't trained to understand probability. If we interview 400 people and find that 53% will vote for Hillary Clinton and 47% for Donald Trump, then as long as the poll was conducted well, this result becomes our best prediction of the vote. It is similar to predicting we will get 50 heads out of 100 coin tosses. 53% is our best prediction given the information we have. But it isn't an infallible prediction.

Pollsters provide a sampling error, which is +/-5% in this case. 400 is a bit of a magic number: it results in a maximum possible sampling error of +/-5%, which has long been an acceptable standard. (Actually, we need 384 interviews for that, but researchers use 400 instead because it sounds better.)
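For the curious, the magic number falls out of the standard margin-of-error formula. Here is a minimal sketch in Python (my own illustration of the textbook formula, not anything from a specific polling tool):

    # Standard sample-size and margin-of-error formulas.
    # z = 1.96 is the 95% confidence multiplier; p = 0.5 maximizes p*(1-p),
    # which is what makes the sampling error the "maximum possible" one.
    from math import sqrt

    def sample_size(margin, z=1.96, p=0.5):
        """Interviews needed for a given margin of error at 95% confidence."""
        return z**2 * p * (1 - p) / margin**2

    def margin_of_error(n, z=1.96, p=0.5):
        """Maximum possible sampling error for n interviews."""
        return z * sqrt(p * (1 - p) / n)

    print(sample_size(0.05))      # 384.16 -- the 384 interviews mentioned above
    print(margin_of_error(400))   # 0.049, i.e. the familiar +/-5 points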

What that means is that if we repeated this poll over and over, we would expect to find Clinton receiving between 48% and 58% of the intended vote 95% of the time, and Trump receiving between 42% and 52% of the intended vote 95% of the time. But if we kept doing poll after poll and averaged Clinton's results, our best guess would remain 53%.

In the coin flipping example, if we repeatedly flipped the coin 400 times, we should get between 45% and 55% heads 95% of the time. But, our average and most common result will be 50% heads.
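If you want to convince yourself of this, a quick simulation works well. The short Python sketch below (my own illustration, not from the post) flips a fair coin 400 times, repeats that experiment many times, and counts how often the share of heads lands between 45% and 55%.

    import random

    runs, n = 10_000, 400
    inside = sum(
        1 for _ in range(runs)
        if 0.45 <= sum(random.random() < 0.5 for _ in range(n)) / n <= 0.55
    )
    print(inside / runs)  # ~0.95-0.96, matching the "95% of the time" claim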

Because the ranges of the election poll (48%-58% for Clinton and 42%-52% for Trump) overlap, you will often see reporters (and the candidate in second place) say that the poll is a "statistical dead heat." There is no such thing as a statistical dead heat in polling unless exactly the same number of respondents prefers each candidate, which may never have actually happened in the history of polling.

There is a much better way to report the findings of the poll. We can statistically determine the “odds” that the 53% for Clinton is actually higher than the 47% for Trump. If we repeated the poll many times, what is the probability that the percentage we found for Clinton would be higher than what we found for Trump? In other words, what is the probability that Clinton is going to win?

The answer in this case is 91%.  Based on our example poll, Clinton has a 91% chance of winning the election. Say that instead of 400 people we interviewed 1,000. The same finding would imply that Clinton has a 99% chance of winning. This is a much more powerful and interesting way to report polling results, and we are surprised we have never seen a news organization use polling data in this way.
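For those who want to replicate this kind of calculation, one common approach is a normal approximation to the probability that the leading candidate's true share is above 50%. The Python sketch below is my own illustration, not necessarily the author's exact method; different variance assumptions yield slightly different figures (this one gives roughly 89% for 400 interviews and 97% for 1,000), but all in the same ballpark.

    from math import erf, sqrt

    def prob_leading(share, n):
        """Approximate probability that a candidate polling at `share`
        is truly ahead, given n interviews (normal approximation)."""
        se = sqrt(share * (1 - share) / n)   # standard error of the share
        z = (share - 0.5) / se               # lead over 50%, in standard errors
        return 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF evaluated at z

    print(prob_leading(0.53, 400))    # ~0.89
    print(prob_leading(0.53, 1000))   # ~0.97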

Returning to our coin flipping example, if we flip a coin 400 times and get heads 53% of the time, there is a 91% chance that we have a coin that is unfair, and biased towards heads. If we did it 1,000 times and got heads 53% of the time, there would be a 99% chance that the coin is unfair. Of course, a poll is a snapshot in time. The closer it is to the election, the more likely it is that the numbers will not change.  And, polling predictions assume many things that are rarely true:  that we have a perfect random sample, that all subgroups respond at the same rate, that questions are clear, that people won’t change their mind on Election Day, etc.

So, I guess the correct answer to “how can we predict the election from surveying 400 people” is “we can’t, but we can make a pretty good guess.”

How to Be a Good Research Client

We’ve been involved in hundreds of client relationships, some more satisfying than others. Client-supplier relationships have all of the makings of a stressful partnership:  a lot of money is at stake, projects can make or break careers, and there can be strong personalities on both sides. But, when the client-supplier relationship really works, it can be long-lasting and productive.

As a supplier, we are always looking for clients and projects that hit on three dimensions at the same time: 1) projects that study topics or business situations that are interesting to work on, 2) projects that are led by individuals who are a pleasure to work with, and 3) projects that work out financially. The projects we complete that are of the highest quality are the ones that hit all three of these dimensions at once.

So, if you are a client, how can you manage your project to the greatest success with your suppliers?  In short, you want to be sure your projects hit on these dimensions.

You should also view the client-supplier relationship as a partnership. You are paying the bills and are ultimately the boss, but your suppliers provide two important capabilities you don't have: 1) they are set up to fulfill projects efficiently, and 2) they bring a broader professional perspective to your project than you likely have. You want to take advantage of this perspective. The best projects combine a supplier's knowledge of research and of business situations from other contexts with a client's knowledge of their industry, brands, and internal situations.

There is a balance of control in a project that can swing too far one way or the other. On one extreme is the client who wants little involvement in the project. They seem to just want to write a check and get the project done without having to manage it. This is never a prescription for a quality project, but it happens commonly. I once had a client who wrote a check for a project, gave me a list of objectives, and then traveled to Asia for 4 months and couldn't be reached. While I appreciated and was flattered by his trust, the project would have been better served with more involvement from him.

The other scenario, which is more common, is the micro-managing client: one who wants to be involved in every research task. This can be debilitating for a supplier. We try to keep such clients informed yet involved only in the most necessary elements of a study. But when a client insists on too much involvement, the supplier will eventually capitulate, devolve into an "order taker," and mentally check out of the project. As a client, you can tell this is happening if your supplier stops volunteering advice and if your conversations get shorter and shorter as the project moves along. Odds are you've reached a point where your supplier is frustrated with you, isn't telling you, and just wants the project to be over.

The key is to keep yourself involved in all aspects where you bring more value to the project than the supplier possibly can. You will know your objectives best. You will know what has to happen when the project is over. But, you likely add little value to project execution.

We are blessed to have clients who largely strike the right balance. They are involved in key stages and always know the status of their study. But, they respect the advice we give along the way, understand the strengths we bring, and listen to our advice even if they choose not to take it. They come to us with questions that go beyond research to hear our perspective.

In short, we don’t like the micromanaging client or the absentee client. We do our best work with clients that are clearly in control of their project, but treat us as key partners along the way.

10 Tips to Writing an Outstanding Questionnaire

I have written somewhere between a zillion and a gazillion survey questions in my career. I am approaching 3,000 projects managed or overseen and I have been the primary questionnaire author on at least 1,000 of them.  Doing the math, if an average questionnaire is 35 questions long, it means I have written or overseen 35,000+ survey questions. That is 25 questions a week for 26 years!

More importantly, I’ve had to analyze the results of these questions, which is where one really starts to understand if they worked or not.

I started in the landline telephone research days. Back then, it was common practice for questionnaire authors to step into the phone center to conduct interviews during the pre-test or first interview day.  While I disliked doing this, the experience served as the single best education on how to write a survey question I could have had.  I quickly understood whether a question was working, whether it was understood by the respondent, and so on. It was a trial by fire, and in addition to discovering that I don't have what it takes to be a telephone interviewer, I quickly learned what was and wasn't working with the questions I was writing.

Something in this learning process is lost in today’s online research world. We never really experience first-hand the struggles a respondent has with our questions and thus don’t get to apply this to the next study.  For this reason I am thankful I started in the halcyon days of telephone research. Today’s young researchers don’t have the opportunity to develop these skills in the same way.

There are many guides to writing survey questions out there that cover the basics. Here I thought I’d take a broader view and list some of the top things to keep in mind when writing survey questions.  These are things I wish I had discovered far earlier!

  1. Begin with the end in mind. This concept is straight out of The 7 Habits of Highly Effective People and is central to questionnaire design.  Good questionnaire writers are thinking ahead to how they will analyze the resulting data.  In fact, I have found that if this is done well, writing the research report becomes straightforward.  I have also discovered that when training junior research staff it is always better to help them develop their report-writing skills first and then move to questionnaire development.  Once you are an adept report writer, questionnaire writing flows naturally because it begins with the end in mind.  It is also a reason why most good analysts/writers run from situations where they have to write a report from a questionnaire someone else has written.
  2. Start with an objective list. We start with a clear objective list the client has signed off on. Every question should be tied to the objective list or it doesn't make it into the questionnaire. This is an excellent way to manage clients who might have multiple people providing input; it helps them prioritize. Most projects that end up not fully satisfying clients are ones where the objectives weren't clear or agreed upon at the outset.
  3. Keep it simple – ridiculously simple. One of the most fortuitous things that happened to me in my career is that for a few years I exclusively wrote questionnaires intended for young respondents.  When I went back to writing "adult" survey questions I didn't change a thing, as I realized that what works for a 3rd grader is short, clear, unambiguous questions with only one possible interpretation.  The same thing is true for adults.
  4. Begin with a questionnaire outline. Outlines are easier to work through with clients than questionnaires. The outlines keep the focus on the types of questions we are asking and keep us from dwelling on the precise wording or scales. Writing the outline is actually more difficult than writing the questionnaire.
  5. Use consistent scales. Try not to use more than 2-3 scale types on the same questionnaire as it is confusing to the respondents.
  6. Don't write long questions. There is evidence that respondents don't read them. You are better off being wordier in the answer choices than in the question itself; online, many respondents just look at the answer choices and don't even read the question you spent hours tweaking.
  7. Don't get cute. We have a software system that allows us to do all sorts of sexy things, like drag-and-drop, slider scales, etc.  We rarely use them, as there is evidence that the bells and whistles are distracting, and good old-fashioned pick lists and radio buttons provide more reliable measures.
  8. Consider mobile. On major research panels, the percentage of respondents answering on mobile devices is just 15% or so currently, but that is changing rapidly. Not only does your questionnaire have to work on the limited screen real estate of a mobile device, but it is also increasingly unlikely to be answered by someone tethered to a desktop or laptop screen in a situation where you have their attention.  Your questionnaires will soon be answered by people multitasking, walking the dog, hanging out with friends, etc.  This context needs to be appreciated.
  9. Ask the question you are getting paid to ask. Too many times I see questionnaires that dance around the main issue of the study without ever directly asking the respondent the central question. While it is nice to back into some issues with good data analysis skills, there is no substitute for simply asking direct questions. We also see questionnaires that allow too many "not sure/no opinion" type options. You are getting paid to find out what the target audience's opinion is, so if this seems like a frequent response you have probably not phrased the question well.
  10. Think like a respondent and not a client. This is perhaps the most important advice I can give. The respondent doesn't live and breathe the product or service you are researching like your client does. Survey writers must appreciate this context and ask questions that can be answered. There is a saying that if you "ask a question you will get an answer" – but that is no indication that the respondent understood your question or viewed it in the same context as your client.

Anecdotally, I have found that staff with the strongest data analytics skills and training can be some of the poorest questionnaire writers. I think that is because they can deploy their statistical skills on the back end to make up for their questionnaire writing deficiencies. But, across 3,000 projects I would say less than 100 of them truly required statistical skills beyond what you might learn in the second stats course you take in college. It really isn’t about statistical skills; it is more about translating study objectives into language a target audience can embrace.

Good questionnaire writing is not rocket science (but it is brain surgery). Above all, seek to simplify and not to complicate.

The most profitable industry?

http://www.inc.com/graham-winfrey/the-5-most-profitable-industries-in-the-us.html

According to this Inc. article, online survey software is the most profitable industry in the US. There are only a handful of top-shelf systems out there and they are pricey. In fact, after personnel and taxes, they are our #3 expense.  I think the reason this industry is so profitable is that there really isn't a lot of competition and, once your programmers have spent years learning a system, there is a significant barrier to moving to a new one.

There are quite a few “quick and dirty” systems out there for DIY-ers. But, in terms of the major systems that most suppliers tend to use, a lack of competition has driven the pricing very high. I think that will change over time, as it has with online panels. Online panels used to be expensive, but became reasonably priced over time, as new firms emerged in the space.  That hasn’t happened yet for online survey software – at least not for the really good systems.

Polls can be as influential as the election

Many of us on the supplier side of the market research industry had our original interest in this field kindled by political polling. The market research industry was largely established as a by-product of polling. It didn't take the founding fathers of election polling long to realize that, during the massive expansion of the US economy in the post-WWII era, there was money to be made by polling for companies and brands.

In some ways polling has become more important than the election itself. In 2000, Elizabeth Dole was touted by many as a potential Republican candidate. While many knew her only as the wife of Bob Dole, she seemed to have a lot going for her. She had been Secretary of Labor, had headed the Red Cross, was well-spoken, and seemed poised to become perhaps the first woman with a realistic shot at the White House. She was seen as a viable candidate by most pundits.

But, polls conducted before any primaries had been contested indicated that her support level was low, largely because she was unknown. As a consequence of a poor showing in early polls, she stumbled in fundraising and pulled out of the race without a voter ever having a chance to vote for or against her. Had the initial polls never been taken, she likely would have had enough fundraising support to enter the initial primaries. As she was an excellent communicator, who knows where it might have gone from there.

This made me wonder what the value of early polling is. It certainly seems to limit the viability of lesser-known candidates. I doubt that Bill Clinton would have had the chance to emerge as a contender in 1992 if the polling environment then had been what it is today.

As we turn to the current race, on the Republican side there soon could be as many as a dozen declared candidates, and some are predicting up to 20. Fundraising success will become the first screen to winnow the field. And, early poll results will directly affect their ability to fundraise. I believe this is why Jeb Bush has been late to declare his candidacy. He has had an incredible level of success raising money, and once he declares the pollsters will start assessing his viability. He’s best off continuing to fundraise without becoming a declared candidate as declaring probably runs a risk for him.

Further, both Fox and CNN have recently announced that they will only include the top 10 candidates in the first Republican debates. How will they winnow the field? By looking at polling data.

Should we worry that the polling industry has too much say in who gets support? I once asked a well-respected pollster this question, and he said the issue is really how well the polls are done. If we do our jobs well, we keep politicians abreast of popular opinion and are thus a valuable contributor to democracy. There is nothing wrong with accurately measuring the truth and communicating it.

Of course, when polls are done poorly, the opposite is true. The media has an insatiable appetite for polls. As a consequence, many poorly designed polls are released and reported upon, and even more polls are really just shilling for the parties and Super PACs in disguise. The media has been either unable or unwilling to differentiate the credible from the bad, and with a continuous news cycle we'll see ever more poor-quality polls reported.

It doesn't help that even the major pollsters struggle to get it right. In the recent UK elections, pretty much every pollster missed badly.  Even FiveThirtyEight, Nate Silver's site that tends to be highly critical of polling and a self-appointed arbiter of good and bad polls, had to issue a mea culpa when its own predictions missed the mark.

As long as the media is running 24/7 and starved for content, the polls will continue.  The challenge is to sort out the good from the bad and the signal from the noise.  It isn't easy, but it is important – who gets elected as the next US President can literally depend on it.

 

