How to Be a Good Research Supplier

A while back, we posted “How to Be a Good Research Client” to help clients understand the makings of a successful partnership from the supplier perspective. Here, we’d like to do the opposite: advise suppliers on how to position for success with their clients.

Being an outstanding supplier goes beyond the technical abilities of understanding statistics, experimental design, business, and marketing. There are many researchers who have these skills but are not great suppliers. These skills are necessary, but not sufficient.

It starts with empathy – a good supplier will understand not just the business situation the client is facing, but also the internal pressures he/she faces. We’ve found over time that suppliers who have spent time as clients themselves understand what happens to projects after the final presentation in a way that many suppliers just cannot.

So, here goes:  Our 10 tips on how to be a good research supplier.

  1. Begin by seeking out the right clients. There is simply too much pressure, especially at larger research firms, to take on every project that comes your way. It really helps if you have guidelines as to which clients you will accept: which ones match with your skills in a unique way, are doing things you are genuinely interested in studying, and have individuals who are good project managers.
  2. Be honest about what you are good at and not so good at. Research isn’t quite like law or medicine where every task seems to devolve to a specialty, but there are specialties. You are not good at everything and neither is your firm. Once you realize this, you can concentrate on where you provide unique value.
  3. Understand what is at stake. Some market research projects influence how millions of dollars are spent. Others are part of a substantial initiative within a company. Somewhere, there is somebody whose career hinges on the success of this initiative. While the research project might come and go in a matter of months to you as a supplier and be one of a dozen you are working on, the success of the project might make or break someone’s career. It is good to never lose sight of that.
  4. Price projects to be profitable. You should price projects to make a strong profit for your firm and then not waver easily on price. Why? Because then you can put all thoughts of profitability out of your mind at the outset and focus on delivering a great project. Never, and we mean never, take on an unprofitable project because of the prospects of further projects coming down the road. It doesn’t serve you or your client well.
  5. Don’t nickel and dime clients. They will ask for things you didn’t bid on or anticipate. Not everything they need will be foreseeable. An extra banner table. A second presentation. A few extra interviews. Follow-ups you didn’t expect. Just do it and don’t look back. Larger research firms are prone to charging for every little thing the client asks for. After a while, the client stops asking and moves on to another firm. Projects can be expensive. Nickel-and-diming your clients for small requests is about as frustrating as buying a new car and having the dealer charge you extra to put floor mats in it.
  6. Never be the one your client is waiting on. If there is one rule here that we feel we have mastered at Crux it is this one. There are a lot of moving parts in a project. You often need things from clients and they need things from you along the way. Never be the one people are waiting on. Stay late if you have to, come in early, work from home … do anything but be the “rate limiting factor” on a project. Your clients will love you for it.
  7. Be around “forever” for follow-ups. We have seen suppliers write into their contracts that they are available for follow-up discussions for only 3 months after the project is over. Why? We love it when clients call years after a project is over because it means the project is still having an influence on their business. Be there as long as it takes for them.
  8. Be human. It took us a little time to learn this one. We used to be very workmanlike and professional around clients to the point of being a bit “stiff.” Then we realized clients want to work with people who are professional about the task at hand, but also fun to be around, and well, human. Granted, they don’t want to hear about every stress of your personal life, but relax a little and be who you are. If that doesn’t work out well for you, you might not be in the right career.
  9. Make them feel like they are your only client. You might have a dozen projects on your plate, family commitments tugging at you, coworkers driving you crazy, and a myriad of other things competing for your time. Time management isn’t easy in a deadline-driven field such as research. But it also isn’t your client’s problem. They should feel like you have nothing else to do all day but work on their project. The focus needs to be on them, not on your time management. When you are late for a meeting because another meeting ran over, you are telling your client they aren’t your number one priority.
  10. Follow up. The project might be over for you, but it lives on longer for your client.  Be sure to follow up a few weeks down the line to see if there is anything else you can do. You’ll be surprised at how often the next project comes up during this conversation.

Congratulations to Truth Initiative!

Congratulations again to our client, Truth Initiative!  Last week Truth won 4 Effies and 2 Big Apple awards for its anti-tobacco campaigns.

Read more about these wins here:  truth campaign grabs 4 effies, 2 big apple awards.

Cause Change. Be Changed.

Congratulations to Causewave Community Partners on their successful annual celebration last week.  It was a sellout!

This video does a great job of capturing what the organization is all about and the value of volunteering.  It also includes a cameo from Lisa on our staff!

 

Asking about gender and sexual orientation on surveys

When composing questionnaires, there are times when the simplest of questions have to adjust to fit the times. Questions we draft become catalysts for larger discussions. That has been the case with what was once the most basic of all questions – asking a respondent for their gender.

This is probably the most commonly asked question in the history of survey research. And it seems basic – we typically just ask:

  • Are you… male or female?

Or, if we are working with younger respondents, we ask:

  • Are you … a boy or a girl?

The question is almost never refused and I’ve never seen any research to suggest this is anything other than a highly reliable measure.

Simple, right?

But, we are in the midst of an important shift in social norms towards alternative gender classifications. Traditionally, meaning up until a couple of years ago, if we wanted to classify homosexual respondents we wouldn’t come right out and ask the question, for fear that it would be refused or found offensive by many respondents. Instead, we would tend to ask respondents to check off a list of causes that they support. If they chose “gay rights”, we would then go ahead and ask if they were gay or straight. Perhaps this was too politically correct, but it was an effective way to classify respondents in a way that wasn’t likely to offend.

We no longer ask it that way. We still ask if the respondent is male or female, but we follow up to ask if they are heterosexual, lesbian, gay, bisexual, transgender, etc.

We recently completed a study among 4-year college students where we posed this question.  Results were as follows:

  • Heterosexual = 81%
  • Bisexual = 8%
  • Lesbian = 3%
  • Gay = 2%
  • Transgender = 1%
  • Other = 2%
  • Refused to answer = 3%

First, it should be noted that the 3% who refused to answer is lower than the 4% who refused to answer the race/ethnicity question on the same survey.  Conclusion:  asking today’s college students about sexual orientation is less sensitive than asking them about their race/ethnicity.

Second, it is more important than ever to ask this question. These data show that about 1 in 5 college students did not identify as heterosexual. Researchers need to start viewing these students as a segment, just as we do with age or race. This is the reality of the Millennial market:  they are more likely to self-identify as not being heterosexual and more likely to be accepting of alternative lifestyles. Failure to understand this group results in a failure to truly understand the generation.

We have had three different clients ask us if we should start asking this question younger – to high school or middle school students. For now, we are advising against it unless the study has clear objectives that point to a need. Our reasoning for this is not that we feel the kids will find the question to be offensive, but that their parents and educators (whom we are often reliant on to be able to survey minors) might. We think that will change over time as well.

So, perhaps nothing is as simple as it seems.

Crux Research is Going to the Ogilvy’s!

Crux Research is excited to announce that our client, Truth Initiative, is a finalist for two David Ogilvy Awards. These awards are presented by the Advertising Research Foundation (ARF) annually to recognize excellence in advertising research. Ogilvy Awards honor the creative use of research in the advertising development process by research firms, advertising agencies and advertisers.

Truth Initiative is a longstanding client of Crux Research. Truth Initiative is America’s largest non-profit public health organization dedicated to making tobacco use a thing of the past. Truth is a finalist in two Ogilvy categories:

For both of these campaigns, Crux Research worked closely with CommSight and Truth Initiative to test the effectiveness of the approaches and executions prior to launch and to track the efficacy of the campaigns once in market.

We are honored and proud to be a part of these campaigns, to have had the opportunity to work with Truth Initiative and CommSight, and most importantly, to have played a supporting role in Truth’s mission to make youth smoking a thing of the past.

The 2016 ARF David Ogilvy Awards Ceremony will be held March 15 in New York.  More information can be found at Ogilvy Awards.

How can you predict an election by interviewing only 400 people?

This might be the most commonly asked question researchers get at cocktail parties (to the extent that researchers go to cocktail parties). It is also a commonly unasked question among researchers themselves: how can we predict an election by only talking to 400 people? 

The short answer is we can’t. We can never predict anything with 100% certainty from a research study or poll. The only way we could predict the election with 100% certainty would be to interview every person who will end up voting. Even then, since people might change their minds between the poll and the election, we couldn’t say our prediction was 100% likely to come true.

To provide an example, if I want to flip a coin 100 times, my best estimate before I do it would be that I will get “heads” 50 times. But, it isn’t 100% certain the coin will land on heads 50 times.

The reason it is hard to comprehend how we predict elections by talking to so few people is our brains aren’t trained to understand probability. If we interview 400 people and find that 53% will vote for Hillary Clinton and 47% for Donald Trump, as long as the poll was conducted well, this result becomes our best prediction for what the vote will be. It is similar to predicting we will get 50 heads out of 100 coin tosses.  53% is our best prediction given the information we have. But, it isn’t an infallible prediction.

Pollsters provide a sampling error, which is +/-5% in this case. 400 is a bit of a magic number. It results in a maximum possible sampling error of +/-5%, which has long been an acceptable standard. (Actually, we need 384 interviews for that, but researchers use 400 instead because it sounds better.)
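For readers who want to see where the 384 comes from, here is a minimal Python sketch of the standard margin-of-error formula, assuming 95% confidence and the worst-case proportion of 50%.

    import math

    Z_95 = 1.96   # z-score for 95% confidence
    P = 0.5       # worst-case proportion; it maximizes the margin of error

    def margin_of_error(n, p=P, z=Z_95):
        """Maximum sampling error for a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    def required_sample(moe, p=P, z=Z_95):
        """Sample size needed to achieve the desired margin of error."""
        return (z ** 2) * p * (1 - p) / moe ** 2

    print(round(margin_of_error(400) * 100, 1))  # 4.9 -> the familiar "+/-5%"
    print(round(required_sample(0.05)))          # 384 interviews for a true +/-5%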

What that means is that if we repeated this poll over and over, we would expect to find Clinton receiving between 48% and 58% of the intended vote 95% of the time. We’d expect Trump to receive between 42% and 52% of the intended vote 95% of the time. And if we kept doing poll after poll, Clinton’s results would average out to about 53%.

In the coin flipping example, if we repeatedly flipped the coin 400 times, we should get between 45% and 55% heads 95% of the time. But, our average and most common result will be 50% heads.
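As a quick check on those ranges, here is a small Python sketch using the same normal approximation; it reproduces both the 48%-58% band for the poll and the 45%-55% band for the coin.

    import math

    def ci_95(p_hat, n):
        """Approximate 95% interval for an observed proportion p_hat from n respondents or flips."""
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        return round(p_hat - 1.96 * se, 3), round(p_hat + 1.96 * se, 3)

    print(ci_95(0.53, 400))  # (0.481, 0.579) -> Clinton's 48%-58% band
    print(ci_95(0.50, 400))  # (0.451, 0.549) -> the coin's 45%-55% band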

Because the ranges of the election poll (48%-58% for Clinton and 42%-52% for Trump) overlap, you will often see reporters (and the candidate in second place) say that the poll is a “statistical dead heat.” There is no such thing as a statistical dead heat in polling unless exactly the same number of respondents prefer each candidate, which may never have actually happened in the history of polling.

There is a much better way to report the findings of the poll. We can statistically determine the “odds” that the 53% for Clinton is actually higher than the 47% for Trump. If we repeated the poll many times, what is the probability that the percentage we found for Clinton would be higher than what we found for Trump? In other words, what is the probability that Clinton is going to win?

The answer in this case is 91%.  Based on our example poll, Clinton has a 91% chance of winning the election. Say that instead of 400 people we interviewed 1,000. The same finding would imply that Clinton has a 99% chance of winning. This is a much more powerful and interesting way to report polling results, and we are surprised we have never seen a news organization use polling data in this way.
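Here is a minimal Python sketch of one common way to compute that kind of probability, using a normal approximation to the Clinton-minus-Trump margin. The exact figures quoted above may come from a somewhat different calculation, so treat the output as illustrative rather than definitive.

    import math
    from statistics import NormalDist

    def prob_ahead(p_hat, n):
        """Approximate probability that the candidate observed at p_hat is truly ahead,
        treating the Clinton-minus-Trump margin as approximately normal."""
        margin = 2 * p_hat - 1                            # e.g. 0.53 - 0.47 = 0.06
        se_margin = 2 * math.sqrt(p_hat * (1 - p_hat) / n)
        return NormalDist().cdf(margin / se_margin)

    print(round(prob_ahead(0.53, 400), 2))   # about 0.89 under this approximation
    print(round(prob_ahead(0.53, 1000), 2))  # about 0.97 under this approximation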

Returning to our coin flipping example, if we flip a coin 400 times and get heads 53% of the time, there is a 91% chance that we have a coin that is unfair, and biased towards heads. If we did it 1,000 times and got heads 53% of the time, there would be a 99% chance that the coin is unfair. Of course, a poll is a snapshot in time. The closer it is to the election, the more likely it is that the numbers will not change.  And, polling predictions assume many things that are rarely true:  that we have a perfect random sample, that all subgroups respond at the same rate, that questions are clear, that people won’t change their mind on Election Day, etc.

So, I guess the correct answer to “how can we predict the election from surveying 400 people” is “we can’t, but we can make a pretty good guess.”

The Cost of Not Going to College Is Probably Not As High As You Think

Each year, there are a number of studies that show the same thing:  there has never been a time when the salary gap between high school graduates and college graduates has been higher. According to Pew, this salary gap currently averages $17,500. The College Board puts the gap at $21,100. The implication seems clear:  stay in school, go to college, and reap the benefits.

However, there is actually a lot of nuance to this story, and the true causes of this wage gap are rarely discussed. First, the fact that the average college graduate makes, say, $20,000 more than a high school graduate entering the workforce does not mean that if you coax a high schooler who was not going to go to college to attend, he/she will make that much more. In fact, you should expect that particular student to earn a much smaller wage premium. Why?

The data both Pew and the College Board cite suffer from what researchers would call a “self-selection bias.” In short, high school graduates who enter the workforce immediately after graduation aren’t a comparable base of individuals to those who choose to go to college. The result is an apples-to-oranges comparison that makes the economic value of going to college versus going straight to the workforce seem greater than it actually is.

To get a true measure of the “college premium” we’d have to run an experiment. We’d take a large sample of high school seniors and assign them to either “work” or “college” randomly. The difference between the “work” and “college” groups would be the true college premium, and it would be much less than the $20,000 that is claimed. (Of course, this experiment could never actually happen!)

Why? Because the high school senior who chooses to go to college has higher earning potential than the one who chooses to work, and would earn more even if he/she did not go to college. Similarly, the high school senior who chooses to work rather than go to college would be expected to earn less than the average college graduate even if he/she chose to go to college.
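To make the self-selection point concrete, here is a small, purely hypothetical Python simulation (the salary figures are invented for illustration): every student has an underlying earning potential, students above a potential threshold choose college, and college itself adds a fixed true premium of $8,000. The naive college-versus-work comparison comes out far larger than the true premium.

    import random

    random.seed(1)

    TRUE_PREMIUM = 8_000      # hypothetical causal effect of college on salary
    N = 100_000               # simulated high school seniors

    college, work = [], []
    for _ in range(N):
        potential = random.gauss(40_000, 10_000)   # underlying earning potential
        if potential > 42_000:                     # higher-potential students self-select into college
            college.append(potential + TRUE_PREMIUM)
        else:
            work.append(potential)

    naive_gap = sum(college) / len(college) - sum(work) / len(work)
    print(round(naive_gap))   # roughly 24,000 with these made-up numbers, about three times the true 8,000 premium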

Another example of this same concept would be the salary figures colleges promulgate. The median starting salary for a Stanford graduate is $61,300 per year. The average starting salary for a 4-year college graduate is $45,370.

Does this imply a Stanford education is responsible for a $15,930 starting salary premium compared to an “average” 4-year college? Absolutely not. To understand the Stanford premium, we’d have to take all incoming college students and randomly assign them to colleges. Then, four years later, we could compare the average starting salaries of graduates and make a credible claim that the difference is the Stanford premium. It would be much less than $15,930. Why? Because the incoming Stanford student has earning potential that is higher than that of the typical incoming college student. Much of the current “Stanford premium” would be due to this self-selection of the student and not to the education they receive at Stanford.

The information that is put out there regarding the college premium has unintended, but serious, consequences. First, it pushes many students to choose college who will not gain the salary premium they expect. Many of these students will take on substantial loans, may drop out, and will be left in a financial mess that takes much of their adult lives to recover from. It is a little-known fact that just 53% of those who enroll in a 4-year college actually end up graduating.

Second, this thinking drives many strong students to go to more expensive colleges. Many don’t realize that the salary premiums they will command likely have more to do with who they are than where they choose to attend college. The key determinants of a young person’s success are likely not where he/she went to college but rather his/her own talents, hard work, and ambition.

Finally, our political leaders jump on statistics such as the college premium. They perpetuate a myth that all students should go to college, establish programs to make this possible, and so on. This has contributed to unemployment among college graduates, declines in starting salaries among those who do graduate, a crisis in middle-skills employment, and a mismatch of labor to available jobs.

This is not to say a college education is not a worthy pursuit. In fact, it is a good idea for most, and jobs should not be the sole goal of college. However, we don’t do right by high school students by overstating this gap and having a singular mindset that college is the only path to success.


