Posts Tagged 'Market Research'

Is segmentation just discrimination with an acceptable name?

A short time ago we posted a basic explanation of the Cambridge Analytica/Facebook scandal (which you can read here). In it, we stated that market segmentation and stereotyping are essentially the same thing. This presents an ethical quandary for marketers as almost every marketing organization makes heavy use of market segmentation.

To review, marketers place customers into segments so that they can better understand and serve them. Segmentation is at the heart of marketing. Segments can be created along any measurable dimension, but since almost all segments have a demographic component, we will focus on that for this post.

It can be argued that segmentation and stereotyping are the same thing. Stereotyping is attaching a perceived group characteristic to an individual. For instance, if you are older I might assume your political views lean conservative, since political views tend to be more conservative among older Americans than among younger Americans. If you are female I might assume you are more likely to be the primary shopper for your household, since females do more of the family shopping than males. If you are African-American, I might assume you have a higher likelihood than others to listen to rap music, since that genre indexes high among African-Americans.

These are all stereotypes. Each can be shown to be true of the larger group, but that doesn’t necessarily mean it applies to every individual in the group. There are plenty of liberal older Americans, females who don’t shop at all, and African-Americans who can’t stand rap music.

Segmenting consumers (which is applying stereotypes) isn’t inherently a bad thing. It leads to customized products and better customer experiences. The potential problem isn’t stereotyping itself; it is when stereotyping crosses into discrimination that we have to be careful. As marketers we tread a fine line. Stereotyping oversimplifies the complexity of consumers by forming an easy-to-understand story. This is useful in some contexts and discriminatory in others.

Some examples are helpful. It can be shown that African-Americans have a lower life expectancy than Whites. A life insurance company could use this information to charge African-Americans higher premiums than Whites. (Indeed, many insurance companies used to do this until various court cases prevented them from doing so.) This is a segmentation practice that many would say crosses a line to become discriminatory.

In a similar vein, car insurance companies routinely charge higher-risk groups (for example, younger drivers and males) higher rates than others. That practice has held up as not being discriminatory from a legal standpoint, largely because the discrimination is not against a traditionally disadvantaged group.

At Crux, we work with college marketers to help them make better admissions offer decisions. Many colleges will document the characteristics of their admitted students who thrive and graduate in good standing. The goal is to profile these students and then look back at how they profiled as applicants. The resulting model can be used to make future admissions decisions. Prospective student segments are established that have high probabilities of success at the institution because they look like students known to be successful, and this knowledge is used to make informed admissions offer decisions.
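
To make the mechanics concrete, a model like this is often just a classifier trained on historical admitted-student records. The sketch below is a minimal, hypothetical illustration of the idea; the file names, the feature list, and the use of logistic regression are our own assumptions, not a description of any particular institution’s model.

```python
# Hypothetical sketch: score applicants by their similarity to historically successful students.
# File names, column names, and the choice of logistic regression are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical records of admitted students, flagged by whether they graduated in good standing
history = pd.read_csv("admitted_students.csv")                 # assumed file
features = ["hs_gpa", "test_score", "ap_courses", "extracurricular_count"]

model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["graduated_in_good_standing"])

# Score this year's applicant pool with a probability of success
applicants = pd.read_csv("applicant_pool.csv")                 # assumed file
applicants["predicted_success"] = model.predict_proba(applicants[features])[:, 1]

# Rank applicants by predicted success to inform admissions offer decisions
print(applicants.sort_values("predicted_success", ascending=False).head())
```

Note that even if demographic fields are left out of the feature list, correlated variables (the high school attended, the zip code) can reintroduce exactly the bias described next.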

However, this is a case where a segmentation can cross a line and become discriminatory. Suppose that the students who succeed at the institution tend to be rich, white, female, and from high-performing high schools. By benchmarking future admissions offers against them, an algorithmic bias is created. Fewer minorities, males, and students from urban districts will be extended admissions offers. What turns out to be a good model from a business standpoint ends up perpetuating a bias and places certain demographics of students at a further disadvantage.

There is a burgeoning field in research known as “predictive analytics.” It allows data jockeys to use past data and artificial intelligence to predict how consumers will react. It is currently used mostly in media buying. Our view is that it helps with media efficiency, but only if the future can be counted on to behave like the past. Over-reliance on predictive analytics will result in marketers missing truly breakthrough trends. We don’t have to look further than the 2016 election to see how it can fail; many pollsters based their models on how voters had behaved in the past, missed a fundamental shift in voter behavior, and made some very poor predictions.

That is perhaps an extreme case, but it shows that segmentations can have unintended consequences. This can happen in consumer product marketing as well. Targeted advertising can become formulaic. Brands can see their distribution decline in certain outlets. Ultimately, the business can suffer and miss out on new trends.

Academics (most notably Kahneman and Tversky) have established that people naturally apply heuristics to decision making. These are “rules of thumb” that are often useful because they allow us to make decisions quickly. However, these academics have also demonstrated that the use of heuristics often results in sub-optimal and biased decision making.

This thinking applies to segmentation. Segmentation allows us to make marketing decisions quickly because we assume that individuals take on the characteristics of a larger group. But, it ignores the individual variability within the group, and often that is where the true marketing insight lies.

We see this all the time in the generational work we do. Yes, Millennials as a group tend to be a bit sheltered, yet confident and team-oriented. But this does not mean all of them fit the stereotype. In fact, odds are high that if you profile an individual from the Millennial generation, he/she will only exhibit a few of the characteristics commonly attributed to the generation. Taking the stereotype too literally can lead to poor decisions.

This is not to say that marketers shouldn’t segment their customers. It is a widespread practice that clearly leads to business results. But they should do so mindful of the errors and biases that applying segments can create, and think hard about whether segmentation can unintentionally discriminate and, ultimately, harm the business in the long term.

Has market research become Big Brother?

Technological progress has disrupted market research. Data are available faster and cheaper than ever before. Many traditional research functions have been automated out of existence or have changed significantly because of technology. Projects take half the time to complete that they did just a decade ago. Decision making has moved from an art to a science. Yet, as with most technological disruptions, there are just as many potential pitfalls to be wary of as there are efficiencies to be gained as technology changes market research.

“Passive” data collection is one of these potential pitfalls. Marketers use it in good ways: passive data helps them understand consumers better, target meaningful products and services, and create value for both the consumer and the marketer. However, much of what happens with passive data collection is done without the full knowledge of the consumer, and the process has the potential to be manipulative. The likelihood of a backlash towards the research industry is high.

The use of passive data in marketing and research is new, and many researchers may not know what is happening, so let us explain. A common way to obtain survey research respondents is to tap into large, opt-in online panels that have been developed by a handful of companies. These panels are often augmented with social (“river”) channels, whereby respondents are intercepted while taking part in various online activities. A recruitment email or text is delivered, respondents take a survey, and data are analyzed. Respondents provide information actively and with full consent.

Recent mergers have resulted in fewer, but larger and more robust, online research panels. This has given some panel companies the scale necessary to augment the active approach described above with passive data.

It is possible to append information from all sorts of sources to an online panel database. For instance, voter registration files are commonly appended. If you are in one of these research panels, clients likely know whether you are registered to vote, whether you actually voted, and your political party affiliation. They will have made a prediction of how strongly liberal or conservative you likely are. They may even have run models to predict which issues you care most about. You are likely linked into a PRIZM cluster that associates you with characteristics of the neighborhood where you reside, which in turn can score your potential to be interested in all sorts of product categories. This is all in your file.
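
Mechanically, this appending is usually nothing more exotic than a record-level merge on a common identifier (a name-and-address match, a hashed email, or a third-party ID). The sketch below is a hypothetical illustration; every file and column name is invented.

```python
# Hypothetical sketch of appending a voter file and a geodemographic cluster to a panel.
# All file names and column names are invented for illustration.
import pandas as pd

panel = pd.read_csv("panel_members.csv")       # panelist_id, email_hash, zip_code, survey data, ...
voter_file = pd.read_csv("voter_file.csv")     # email_hash, registered, voted_2016, party
clusters = pd.read_csv("clusters_by_zip.csv")  # zip_code, cluster_name

enriched = (
    panel
    .merge(voter_file, on="email_hash", how="left")   # append registration, turnout, and party
    .merge(clusters, on="zip_code", how="left")       # append the neighborhood cluster
)

# The enriched record is what this post calls your "file": survey answers plus appended data
print(enriched.head())
```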

These panels also have the potential to link to other publicly available databases such as car registration files, arrest records, real estate transactions, and so on. If you are in these panels, whether you have recently bought a house, how much you paid for it, and whether you have been convicted of a crime may all be in your “secret file.”

But, it doesn’t stop there. These panels are now cross-referenced to other consumer databases. There are databases that gather the breadcrumbs you leave behind in your digital life: sites you are visiting, ads you have been served, and even social media posts you have made. There is a tapestry of information available that is far more detailed than most consumers realize. From the research panel company’s perspective, it is just a matter of linking that information to their panel.

This opens up exciting research possibilities. We can now conduct a study among people who are verified to have been served a specific client’s digital advertising. We can refine our respondent base further to those who are known to have clicked on the ad. As you can imagine, this can take ad effectiveness research to an entirely different level. It is especially interesting to clients because it can help optimize media spending, which is by far the largest budget item for most marketing departments.

But, therein lies the ethical problem. Respondents, regardless of what privacy policies they may have agreed to, are unlikely to know that their passive web behavior is being linked into their survey responses. This alone should ring alarm bells for an industry suffering from low response rates and poor data quality. Respondents are bound to push back when they realize there is a secret file panel companies are holding on them.

Panel companies are also straying from research into marketing. They are starting to encourage clients to use survey results to better target individual respondents in direct marketing, a process that can close the loop with a media plan. Say on a survey you report that you prefer a certain brand. That preference can now make its way back to you, and you’ll start seeing ads for that product, likely without knowing it is happening because you took part in a survey.

To go even further, this can affect the advertising that people not involved in the survey see. If you prefer a certain brand and I profile a lot like you, I may end up seeing specific ads as a result of your participation in a survey, even if I don’t know you or have any connection to you.

In some ways, this reeks of the Cambridge Analytica scandal (which we explain in a blog post here). We’ll be surprised if this practice doesn’t eventually create a controversy in the survey research industry. This sort of sales targeting resulting from survey participation will result in lower response rates and a further erosion of confidence in the market research field. However, it is also clear that these approaches are inevitable and will be used more and more as panel companies and clients gain experience with them.

It is the blurring of the line between marketing and market research that has many old-time researchers nervous. There is a longstanding ethical tenet in the industry that participation in a research project should in no way result in the respondent being sold or marketed to. The term for this is SUGGING (Selling Under the Guise of research), and all research industry trade groups have a prohibition against SUGGING embedded in their codes of ethics. It appears that some research firms are ignoring this. But this concept has always been central to the market research field: we have traditionally assured respondents that they can be honest on our surveys because we will in no way market to them directly because of their answers.

In the novel 1984, George Orwell describes a world where the government places its entire population under surveillance. For most of the time since its publication, this has appeared to be a frightening, far-fetched cautionary tale. Recent history suggests this world may be upon us. The NSA scandal (precipitated by Edward Snowden) showed how much of our passive information is shared with the government without our knowledge. Rather than wait for the government to surveil the population, we’ve turned the cameras on ourselves. Marketers can do things most people don’t realize are possible, and research respondents are unknowingly enabling this. The contrails you leave as you simply navigate your life online can be used to follow you. The line between research and marketing is fading, and this will eventually be to the detriment of our field.

Market research isn’t about storytelling, it is about predicting the future

We recently had a situation that made me question the credibility of market research. We had fielded a study for a long-term client and were excited to view the initial version of the tabs. As we looked at results by age groupings we found them to be surprising. But this was also exciting because we were able to weave a compelling narrative around why the age results seemed counter-intuitive.

Then our programmer called to say a mistake had been made in the tabs and the banner points by age had been mistakenly reversed.

So, we went back to the drawing board and constructed another, equally compelling story as to why the data were behaving as they were.

This made me question the value of research. Good researchers can review seemingly disparate data points from a study and generate a persuasive story as to why they are as they are. Our entire business is based on this skill – in the end clients pay us to use data to provide insight into their marketing issues. Everything else we do is a means to this end.

Our experience with the flipped age banner points illustrates that stories can be created around any data. In fact, I’d bet that if you gave us a randomly generated data set we could convince you of its relevance to your marketing issues. I have actually thought about doing this – taking the data we obtain by running random responses through a questionnaire when testing it before fielding, handing it to an analyst, and seeing what happens. I’m convinced we could show you a random data set’s relevance to your business.

This issue is at the core of polling’s PR problem. We’ve all heard people say that you can make statistics say anything – that there are “lies, damned lies, and statistics” – and that polls therefore can’t be trusted. I’ve argued against this for a long time because the pollsters and researchers I have known have been universally well-intentioned and objective, and have never tried to draw a pre-determined conclusion from the data.

Of course, none of this means that the stories we tell with data are incorrect or unenlightening. But they all come from a perspective. Clients value external suppliers because of this perspective – we are third-party observers who aren’t wrapped up in the internal issues clients face, and we are often in a good position to view data with an objective mind. We’ve worked with hundreds of organizations and can bring those experiences to bear on your study. Our perspective is valuable.

But, it is this perspective that creates an implicit bias in all we do. You will assess a data set from a different set of life experiences and background than I will. That is just human nature. Like all biases in research, our implicit bias may or may not be relevant to a project. In most cases, I’d say it likely isn’t.

So, how can researchers reconcile this issue and sleep at night knowing their careers haven’t been a sham?

First and foremost, we need to stop saying that research is all about storytelling. It isn’t. The value of market research isn’t in the storytelling; it is in the predictions about the future it makes. Clients aren’t paying us to tell them stories. They are paying us to predict the future and recommend actions that will enhance their business. Compelling storytelling is a means to this but is not our end goal. Data-based storytelling lends credibility to our predictions and gives confidence that they have a high probability of being correct.

In some sense, it isn’t the storytelling that matters; it is the quality of the prediction. I remember a college professor lecturing on this. He would say that the quality of a model is judged solely by its predictive value. Its assumptions, arguments, and underpinnings really didn’t matter.

So, how do we deal with this issue … how do we ensure that the stories we tell with data are accurate and fuel confident predictions? Below are some ideas.

  1. Make predictions that can be validated at a later date. Provide a level of confidence or uncertainty around the prediction. Explain what could happen to prevent your prediction from coming true.
  2. Empathize with other perspectives when analyzing data. One of the best “tricks” I’ve ever seen is to re-write a research report as if you were writing it for your client’s top competitor. What conclusions would you draw for them? If it is an issue-based study, consider what you would conclude from the data if your client was on the opposite side of the issue.
  3. Peg all conclusions to specific data points in the study. Straying from the data is where your implicit bias may tend to take over. Being able to tie conclusions directly to data is dependent on solid questionnaire design.
  4. Have a second analyst review your work and play devil’s advocate. Show him/her the data without your analysis and see what stories and predictions he/she can develop independent of you. Have this same person review your story and conclusions and ask him/her to try to knock holes in them. The result is a strengthened argument.
  5. Slow down. It just isn’t possible to provide stories, conclusions, and predictions from research data that consider differing perspectives when you have just a couple of days to do it. This requires more negotiation upfront as to project timelines. The ever-decreasing timeframes for projects are making it difficult to have the time needed to objectively look at data.
  6. Realize that sometimes a story just isn’t there. Your perspective and knowledge of a client’s business should result in a story leaping out at you and telling itself. If this doesn’t happen, it could be because the study wasn’t designed well or perhaps there simply isn’t a story to be told. The world can be a more random place than we like to admit, and not everything you see in a data set is explainable. Don’t force it – developing a narrative that is reaching for explanations is inaccurate and a disservice to your client.

The Cambridge Analytica scandal points to marketing’s future

There has been a lot of press, almost universally bad, regarding Cambridge Analytica recently. Most of this discussion has centered on political issues (how their work may have benefitted the Trump campaign) and on data privacy issues (how this scandal has shined a light on the underpinnings of Facebook’s business model). One thing that hasn’t been discussed is the technical brilliance of this approach to combining segmentation, big data, and targeted communications to market effectively. In the midst of an incredibly negative PR story lurks the story of a controversial future of market research and marketing.

To provide a cursory and perhaps oversimplified recap of what happened, this all began with a psychographic survey which provided input into a segmentation. This is a common type of market research project. Pretty much every brand you can think of has done it. The design usually has a basis in psychology and the end goal is typically to create subgroups of consumers that provide a better customer understanding and ultimately help a client spend marketing resources more efficiently by targeting these subgroups.
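
For readers who have never built one, the segmentation step is typically a cluster analysis of the survey’s attitude and personality items. A minimal, hypothetical sketch of that step is below; the file, the item names, and the choice of four segments are invented for illustration.

```python
# Hypothetical sketch of a psychographic segmentation: cluster respondents on attitude items.
# The file, item names, rating scale, and number of segments are invented for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

surveys = pd.read_csv("psychographic_survey.csv")   # assumed file of 1-7 agreement ratings
items = ["risk_taking", "sensation_seeking", "extraversion", "openness", "impulsivity"]

X = StandardScaler().fit_transform(surveys[items])
kmeans = KMeans(n_clusters=4, random_state=0, n_init=10)
surveys["segment"] = kmeans.fit_predict(X)

# Profile each segment by its average item scores; marketers then name and target these groups
print(surveys.groupby("segment")[items].mean().round(2))
```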

Almost every marketer targets demographically – by easy-to-identify characteristics such as age, gender, race/ethnicity, and geography. Many also target psychographically – by personality characteristics and deeper psychological constructs. The general approach taken by Cambridge Analytica has been perfected over decades and is hardly new. I’d say I’ve been involved in about 100 projects that involved segmenting on a psychographic basis.

To give a concrete example, this type of approach is used by public health campaigns seeking to minimize drug and alcohol use. Studies will be done on a demographic basis that indicate, for example, that drug use skews towards males more than females, towards particular age groups, and perhaps even towards certain regions of the country. But it can also be shown that those most at risk of addiction have certain personality types – they are risk takers, sensation seekers, extroverts, etc. Combined with demographic information, this allows a public health marketer to target their marketing spend and craft messages that will resonate with those most at risk.

Segmentation is essentially stereotyping by another name. It is associating perceived characteristics of a group with an individual. At its best, this approach can provide the consumer with relevant marketing and products customized to his/her needs. At its worst, it can ignore variation within a group and devalue the consumer as an individual. Segmentation can turn into prejudice and profiling quickly, and marketers can put too much faith in it.

Segmentation is imperfect. Just because you are a male, aged 15-17, and love to skateboard without a helmet and think jumping out of an airplane would be cool does not necessarily mean you are at risk to initiate drug use. But, our study might show that for every 100 people like you, 50 of them are at risk, and that is enough to merit spending prevention money towards reaching you. You might not be at risk for drug use, but we think you have a 50% chance of being so and this is much higher than the general risk in the population. This raises the efficiency of marketing spending.
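
The arithmetic behind that efficiency argument is straightforward. Using the 50% figure from the example above, and assuming (purely for illustration) a 10% base rate in the general population and a fixed cost per person reached, targeted spending reaches at-risk teens several times more cheaply:

```python
# Hypothetical sketch of why segment targeting raises media efficiency.
# The 50% segment risk rate echoes the example above; the 10% population base rate
# and the $1.00 cost per person reached are invented for illustration.
population_risk_rate = 0.10      # assumed share of all teens who are at risk
segment_risk_rate = 0.50         # share of the profiled segment who are at risk
cost_per_person_reached = 1.00   # assumed media cost per person, in dollars

def cost_per_at_risk_teen(risk_rate, cost=cost_per_person_reached):
    """Dollars spent for each at-risk teen actually reached."""
    return cost / risk_rate

print(f"Untargeted: ${cost_per_at_risk_teen(population_risk_rate):.2f} per at-risk teen reached")
print(f"Targeted:   ${cost_per_at_risk_teen(segment_risk_rate):.2f} per at-risk teen reached")
# Untargeted: $10.00 vs. Targeted: $2.00 -- a 5x improvement in spend efficiency
```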

What Cambridge Analytica did was analogous to this. The Facebook poll users completed provided data needed to establish segments. These segments were then used to predict your likelihood to care about an issue. Certain segments might be more associated with hot button issues in the election campaign, say gun rights, immigration, loss of American jobs, or health care. So, once you filled out the survey, combined with demographic data, it became possible to “score” you on these issues. You might not be a “gun nut” but your data can provide the researcher with the probability that you are, and if it is high enough you might get an inflammatory gun rights ad targeted to you.

Where this got controversial was, first and foremost, that regardless of what Facebook’s privacy policy may say, most users had no clue that answering an innocuous quiz might enable them to be targeted in this way. Cambridge Analytica also had more than the psychographic survey at their disposal – they had demographics, user likes and preferred content, and social connections. They had much of this information on the users’ Facebook friends as well. It is the depth of the information they gathered that has led to the crisis at Facebook.

People tend to associate most strongly with people who are like them. So, if I score you high on a “gun nut scale” chances are reasonably high that your close friends will have a high probability of being like you. So, with access to your friends, a marketer can greatly expand the targeted reach of the campaign.

It is hard to peel away from the controversies to see how this story really points to the future of marketing, and how research will point the way. Let me explain.

Most segmentations suffer from a fatal flaw: they segment with little ability to follow up by targeting. With a well-crafted survey we can almost always create segments that help a marketer better understand his/her customers. But often (I would even say most of the time) it is next to impossible to target these segments. Back to the drug campaign example: since I know what shows various demographic groups watch, I can tell you to spend your ad dollars on males aged 16-17. But how do you then target further and find a way to reach the “risk taking” segment you really want? If you can’t target, segmentation is largely an academic exercise.

Traditionally, you couldn’t target psychographic segments all that well. But with what Google and Facebook now know about their users, you can. If we can profile enough of the Facebook teenage user base and have access to who their friends are, we can get incredibly efficient in our targeting. Ad spend can reach those who have a much higher propensity for drug use, and we can avoid wasting money on those who have low propensity.
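
A crude way to picture that “friends” expansion: score the people you can profile directly, then extend a discounted score to their connections on the assumption that people resemble their friends. The sketch below is hypothetical; the scores, the friendship graph, and the discount and threshold values are all invented.

```python
# Hypothetical sketch of expanding a psychographic target audience through a social graph.
# The scores, the graph, and the 0.5 discount factor are invented for illustration.
profiled_scores = {"ana": 0.9, "ben": 0.2, "cal": 0.7}   # propensity scores from the segmentation

friends = {                                              # who is connected to whom
    "ana": ["dee", "eli"],
    "ben": ["eli"],
    "cal": ["fay"],
}

DISCOUNT = 0.5    # assume friends are only partly like each other
THRESHOLD = 0.4   # minimum score to be included in the target audience

expanded = dict(profiled_scores)
for person, score in profiled_scores.items():
    for friend in friends.get(person, []):
        inferred = score * DISCOUNT                      # friend inherits a discounted score
        expanded[friend] = max(expanded.get(friend, 0.0), inferred)

audience = sorted(p for p, s in expanded.items() if s >= THRESHOLD)
print(audience)   # ['ana', 'cal', 'dee', 'eli'] under these assumed numbers
```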

It is a brilliant approach. But, like most things on the Internet, it can be a force for bad as well as good. If what Cambridge Analytica had done was for the benefit of an anti-drug campaign, I don’t think it would be nearly the story it has become. Once it went into a polarized political climate, it became news gold.

Even when an approach like this is applied to what most would call legitimate marketing, say for a consumer packaged good, it can get a bit creepy and feel manipulative. It is conceivable that via something one of my Facebook friends did, I can get profiled as a drinker of a specific brand of beer. Since Google also knows where my phone is, I can then be sent an ad or a coupon at the exact moment I walk by the beer case in my local grocery store. Or, my friends can be sent the same message. And I didn’t do anything to knowingly opt into being targeted like this.

There are ethical discussions that need to be had regarding whether this is good or bad, if it is a service to the consumer, or if it is too manipulative. But, this sort of targeting and meshing of research and marketing is not futuristic – all of the underpinning technology is there at the ready and it is only a matter of time until marketers really learn how to tap into it. It is a different world for sure and one that is coming fast.

Congrats to Truth Initiative – Wins Gold at Ogilvy Awards!

Congratulations to our client Truth Initiative on winning Gold at the David Ogilvy Awards. The Ogilvy awards are unique in that they celebrate campaigns that effectively use market research to spark an insightful campaign. Truth Initiative won gold in the “Unexpected Targeting and Segmentation” category.

The Truth Campaign was called “Stop Profiling.” It centered on a social justice theme – that today’s youth will band together if they perceive a segment of the population is being treated unfairly. Truth’s ad (“Market Priority”) can be seen here.

Crux Research partnered with CommSight to provide formative research, copy testing, and campaign tracking. We are excited to be a part of this award-winning effort – and this award is the third Ogilvy we have been involved in for Truth Initiative.

Millennial College Students Are Torn Between Open Speech and Protecting the Vulnerable

We recently completed a poll of 1,000 college students on the topic of free speech on campus. Previous postings (here and here) have shown that students are reluctant to support controversial speakers on campus and do not support speakers who might have viewpoints that some students find uncomfortable.

In this final post on our poll results, we take a look at some contradictions in our data that demonstrate that today’s college students are torn between a desire to favor a campus that promotes free and open debate and an ethos that makes them want to protect the vulnerable from feeling uncomfortable.

There has been a long-held belief by conservatives that colleges are bastions of liberal thinking and perhaps indoctrination. Our poll results lend support to this viewpoint, as 52% of college students feel their professors tend to be more liberal in their thinking than the nation as a whole while just 23% feel their professors are more conservative:

Compared to the views of the nation as a whole, would you say that your current professors/instructors tend to be:
More conservative in their thinking 23%
About the same as the nation as a whole 25%
More liberal in their thinking 52%

Students tend to express a desire for their professors to be given a wide latitude to express their views and are largely not in support of administrators censoring how professors express their views to students.

Which statement below comes closest to your opinion?
College administrators should closely monitor what professors/instructors teach to make sure all students are comfortable 33%
College professors/instructors should be given a wide degree of freedom to express their views to students 67%

The result below shows that students report that colleges should encourage students to have an open mind to ideas that they may find uncomfortable. At first glance, college students seem to favor an atmosphere of openness on campus.

Which statement below comes closest to your opinion?
Colleges should attempt to shield students from ideas and opinions they may find unwelcome and offensive 25%
Colleges should encourage students to be exposed to ideas and opinions they may find unwelcome and offensive 75%

Millennial college students also recognize that free and open speech is central to university life. For example:

  • Two-thirds (66%) agree that the intellectual vitality of a university depends on open and free expression of ideas.
  • 63% agree that free speech, including controversial speech, is central to college teaching and learning.
  • 57% agree that student-run newspapers have a First Amendment right to publish controversial stories without running afoul of college administrators.

That said, this poll also shows that Millennials hold some views that run counter to the free speech ethos they express:

  • 57% agree that students should be encouraged to report instances of professor bias to administrators.
  • 48% feel that students should be provided warnings in advance to alert them to potentially troublesome readings.
  • 45% feel that colleges should provide intellectual safe spaces, where students can retreat from ideas and perspectives that are at odds with their own.

And, as we discussed in our previous postings, students shy away from permitting almost any type of speaker on campus that could potentially communicate anything that might cause a subgroup of students discomfort.

So, there are some contradictions in our findings that need explaining. We feel there is likely some nuance in Millennial opinion. The Millennial college student seems torn between realizing that exposure to ideas counter to their own is essential to their education and a strong ethos of protecting the vulnerable.

Which statement below comes closest to your opinion?
It is more important that colleges stick up for the vulnerable 50%
It is more important that colleges stand up for a spirit of inquiry 50%

This nuance is difficult for Boomers and Xers (who make up most college administrators and professors) to grasp. Older generations grew up not only at a time when free and open speech was held to a higher standard but also at a time when the college/university campus was the nexus of student opinion and influence. Today’s Millennial student has experienced more cultural diversity on campus and has established digital meeting spaces as their nexus for opinion and community. Millennials are exposed to diverse and controversial opinions constantly, to the point where their desire to protect the campus from controversy and discomfort may be a defense mechanism. The campus is an environment they can control.

What this all means for the university has yet to be seen. But, campus life is changing, and it will be key that the pendulum that is now swinging towards safety and comfort doesn’t swing so far as to limit student exposure to valuable viewpoints and a well-rounded worldview.

Students Are More Likely to Oppose Campus Speakers Than to Support Them

We recently posted a result from an in-depth poll we conducted among 1,000 college students last fall. In this poll we asked students about specific speakers they may or may not support coming to their campus. Among our conclusions was that students largely aren’t supportive of very many speakers – particularly individuals who might be considered to be controversial or present ideas some might find uncomfortable.

In this same poll, we asked students about types of speakers that might come to a college campus. We included speaker types we felt most observers would consider appropriate as well as speaker types that we felt even the most passionate free speech advocates might question. Our goal was to see where “the line” might be for today’s college students. The answer is that the line is very high – students largely don’t want campus speakers at all.

The table below shows the percentage of US college students who would support each type of speaker coming to their campus to speak:

Support
A leader from the Black Lives Matter movement 50%
An advocate for the legalization of marijuana 46%
An elected official with views that are vastly different than yours 22%
A publisher of pornographic videos 21%
An activist who has a different view on abortion than you do 19%
A speaker who strongly opposes the Black Lives Matter movement 19%
A politician who is against gay marriage 17%
A speaker who believes that there are racial differences in intelligence 17%
A tobacco company executive 14%
A speaker who is known to have sexually harassed a colleague in the past 11%
A Muslim who advocates hatred towards the United States 10%
A speaker who believes that the Holocaust did not happen 10%
A white supremacist 10%

Some interesting conclusions can be made by looking at whom students are willing to support coming to their campus to speak:

  • Even the most highly supported type of speaker (A leader from the Black Lives Matter movement) is only supported by half (50%) of students. Support for any type of campus speaker is tepid.
  • Two types of speakers stood out as having the most support: Leaders from the Black Lives Matter movement and advocates for the legalization of marijuana.
  • It is perhaps troubling that only about 1 in 5 students (22%) support an elected official with views different from their own.
  • Racially insensitive speakers (white supremacists and Holocaust deniers) are the least supported types of speakers.

We can also look at the same list, but this time sorted by the percentage of students who oppose this type of speaker coming to their campus to speak:

Oppose
A white supremacist 68%
A speaker who believes that the Holocaust did not happen 68%
A speaker who is known to have sexually harassed a colleague in the past 67%
A Muslim who advocates hatred towards the United States 66%
A speaker who believes that there are racial differences in intelligence 51%
A politician who is against gay marriage 50%
A tobacco company executive 49%
A speaker who strongly opposes the Black Lives Matter movement 46%
A publisher of pornographic videos 39%
An activist who has a different view on abortion than you do 27%
An elected official with views that are vastly different than yours 25%
An advocate for the legalization of marijuana 16%
A leader from the Black Lives Matter movement 16%

Here we see that:

  • In general, students are more passionate in their opposition to speaker types than in their support.
  • Speakers with racially insensitive views and those known to have sexually harassed someone are the most opposed types of speakers. Speakers who have sexually harassed are opposed just as much as white supremacists.
  • About half of students oppose politicians who are against gay marriage and tobacco company executives. This is about the same level of opposition as to a speaker who believes there are racial differences in intelligence.
  • About 1 in 4 students would oppose an elected official who has views different from their own.

Because there have been instances of speakers being shouted down and even physically confronted by college students, we posed a question that asked students what they felt were acceptable ways to protest against a campus speaker.

Which of the following actions would you take if you were strongly opposed to a speaker your college had invited to speak on campus?
Disagree with the speaker during a question-and-answer period 25%
Organize a boycott of the speech 22%
Stage a protest outside of the building where the speech is taking place 21%
Host a concurrent speech from a speaker with an opposing view 16%
Stage a sit-in at an administrative building 12%
Physically confront the speaker 8%
Disrupt the speech while it is going on 7%

For the most part, students don’t support any actions if they strongly oppose a campus speaker. While it is encouraging to see that they do not support disrupting the speech or physically confronting a speaker, it is perhaps just as disheartening to see that only 1 in 4 would be willing to disagree with the speaker during a Q&A period. So, not only do students not want most types of speakers, they aren’t willing to step up and do something if a speaker they find controversial does come to campus.

Just as we found when we looked at specific speakers, students seem to be shying away from not just controversial speakers, but also those that might make some portion of the student body uncomfortable. Based on these results, we predict that there will be fewer speakers invited to college campuses in the future and that attendance at these events will decline.

The types of people you find in a market research presentation

Last summer I led a market research results presentation at a client’s office. I had not met any of the individuals in the meeting prior to the presentation other than my immediate client-contact. During introductions I tried my best to understand who was who and to carefully observe the dynamics between people. “Knowing thy audience” is key to an effective presentation.

And, I have to admit – within a few minutes I found myself stereotyping the members of my audience. I have delivered scores of presentations in the past and I can usually quickly assess what the dynamic of the room is going to be like and categorize attendees. But, I can also be wrong in my assessment and it isn’t healthy to make assumptions about people without taking the time to truly get to know them. I sort of feel guilty that I find myself doing this.

This particular presentation had gathered an interesting cast of characters, and I couldn’t help but think about how each of them resembled people I have presented to in the past at various clients. Anyway, the list below is meant to be a bit humorous, and I think anyone who has been in market research presentations will recognize some of these people.

“The Characters You Find in a Market Research Presentation.”

  • The Introvert. This is a person who says little during the meeting but her mind is racing. She tends to get active late in the meeting and provides insightful comments because she doesn’t feel a need to chime in on every obvious point. Others in the organization often ignore her because she is introverted but she is often the smartest person in the room. However, she has the potential to derail the end of the meeting by starting an entirely new line of conversation as you are trying to wrap up. How to succeed with the Introvert: Try to engage her early and ask for her perspective late in the meeting as this person often has the best things to say and adds a lot to the discussion if you can draw her out.
  • Mr. (Lack of) Attention Span. This is a person who probably comes late to the meeting and forces you to start over and repeat the first 10 minutes. Once in the meeting, he is constantly checking his phone, having side conversations, and asking questions that you just answered. This is also the person that skips ahead in the deck and won’t let you build a story as you would like. How to succeed with Mr. Attention Span: Do not provide handouts beforehand or during this meeting. Keep the presentation short if possible. State ground rules up front as to when you will pause for questions.
  • The Poseur. This person has a clear view of the world in his mind and will find a way to massage every fact you present to make it fit with a pre-conceived view. He uses your facts to illustrate just how insightful he is and what he already knows. This is the marketer who personifies David Ogilvy’s quote that marketers use research “as a drunkard uses a lamp post, for support rather than for illumination.” He uses the meeting to become the center of attention. He has to provide his view on every slide and every conclusion you present, no matter the size of the meeting. He dominates, and other attendees tend to defer to him before offering their own opinions. How to succeed with the Poseur: At the onset, set “pause points” in the presentation — at the end of each section you will call for a discussion. Establish ground rules for the meeting. Ask everyone to write down a prediction of how a research result came out before you show the actual result, then call on other individuals to discuss their predictions. Look to qualitative techniques for handling a dominant focus group participant for inspiration.
  • The Jargon Guy. This is a person who talks a lot but doesn’t really say anything. He is a master of business jargon – it is the person who will use words like “bandwidth”, “game changer”, “visioning”, etc.  He will add “ize” onto nouns to turn them into verbs and use acronyms as much as possible. He reads popular business books on the side. You’ll feel like you are in an episode of “The Office” when you meet him. How to succeed with the Jargon Guy: Learn some of the proprietary jargon and acronyms used by your client’s firm beforehand.
  • The Cherry Picker. Similar to the Poseur, this is the client who has a clear “map of the world” established in her head and won’t let facts get in the way of a good opinion. She is active in the discussion, but she cherry-picks results – criticizing every point that doesn’t fit with her vision and falling in love with every point that does. How to succeed with the Cherry Picker: Try to get her to buy into your methodology and lead with conclusions you think are likely to fit with how she thinks. That may get her to listen more to findings that don’t fit with her outlook later on.
  • The Naysayer.  This person doesn’t believe in market research and once he learns the study isn’t perfect will challenge everything you say. He straddles a line between “critic” and “cynic”. How to succeed with the Naysayer: This person can be a useful contributor if you can get his negativity to become constructive and establish the right tone. Fortunately, his concerns can often be anticipated beforehand, and you can often address his concerns before he gets a chance to raise them.
  • The Academic.  The academic asks incredibly detailed questions about the methodology and slows down the initial part of the presentation. This person is usually highly educated and understands the details of statistics and experimental design, sometimes better than you do. The good news is she rarely questions your findings if she agrees with the methods you have employed. How to succeed with the Academic: get to her beforehand and share the details of the methodology so she doesn’t get the meeting off to a bad start by bogging it down with methodological details. This person can be a great ally for you during the talk.
  • The Box Checker.  This is a person who is mainly concerned that the research got done because it is part of a larger marketing process that he is responsible for. He is much more of a “process” than an “outcomes” person and tends to be bureaucratic. How to succeed with the Box Checker:  Make sure he knows the project got done efficiently, on time, and within budget.
  • The Enlightened Leader.  This is the person we all want to present to. It is the highest ranking person in the room, but she casts aside all her other responsibilities for the hour you have with her. For at least one hour, you and your client feel that this study is the most important thing in her life.  She truly listens, doesn’t presume anything, and allows the research to add nuance to her view of the world. She usually insists that others in the meeting take action based on the findings.  How to succeed with the Enlightened Leader: Bring her into the conversation early, as it sets the tone for everyone.

I should note that, with very few exceptions, these personalities tend to be respectful and courteous and less challenging to present to than the descriptions above imply. Above all, preparation is key to success with all types of people. You need to know your data set deeply and have well-supported conclusions and implications, as in the end that is what gets you over any rough spots that arise. Your day-to-day contact needs to be your ally, and running through the presentation with him/her in advance often helps stave off rough moments. Most research presentations go well, but we aim for them not just to go well, but to be effective. While it might not be appropriate to stereotype as I have done here, it is appropriate to realize that each individual comes to your presentation with his/her own perspective. Understanding that perspective can be as important as the study itself in terms of having research inform better decisions.


Visit the Crux Research Website www.cruxresearch.com
