Archive for the 'Marketing' Category

Truth Initiative wins Ogilvy for Opioid Campaign

Truth Initiative has won two 2019 Ogilvy awards for its campaign against opioid misuse.

The ARF David Ogilvy Awards is the only awards program that honors the research and analytics insights behind the most successful advertising campaigns. Crux Research, along with our research partners at CommSight, provided the research services for this campaign.

A case study of the campaign can be found here.

You can view spots from the campaign here and here.

We are very proud to have provided Truth Initiative with research support for this important campaign.

How to be an intelligent consumer of political polls

As the days get shorter and the air gets cooler, we are on the edge of a cool, colorful season. We are not talking about autumn — instead, “polling season” is upon us! As the US Presidential race heats up, one thing we can count on is being inundated with polls and pundits spinning polling results.

Most market researchers are interested in polls. Political polling pre-dates the modern market research industry and most market research techniques used today have antecedents from the polling world. And, as we have stated in a previous post, polls can be as important as the election itself.

The polls themselves influence voting behavior, which should place polling organizations in an ethical quandary. Our view is that polls, when properly done, are an important facet of modern democracy. Polls can inform our leaders as to what the electorate cares about and keep them accountable. This season, polls are determining which candidates get on the debate stage and are driving which issues candidates are discussing most prominently.

The sheer number of polls that we are about to see will be overwhelming. Some will be well-conducted, some will be shams, and many will be in between. To help, we thought we’d write this post on how to be an intelligent consumer of polls and what to look out for when reading them or hearing about them in the media.

  • First, and this is harder than it sounds, you have to put your own biases aside. Maybe you are a staunch conservative or liberal, or maybe you are in the middle. Whatever your leaning, your political views are likely to get in the way of your becoming a good reader of the polls. It is hard not to bring confirmation bias to the polls: you tend to accept a polling result that confirms what you believe or hope will happen and question a result that doesn’t fit with your map of the world. I have found the best way to counter this is to first try to view the poll from the other side. Say you are a conservative. Start by thinking about how you would view the poll if you leaned left instead.
  • Next, always, and I mean ALWAYS, discover who paid for the poll. If it is an entity that has a vested interest in the results, such as a campaign, a PAC, an industry group, or a lobbyist, go no further. Don’t even look at the poll. In fact, if the sponsor of the poll isn’t clearly identified, move on and spend your time elsewhere. Good polls always disclose who paid for them.
  • Don’t just look at who released the poll; review which organization executed it. For the most part, polls executed by major polling organizations (Gallup, Harris, ORC, Pew, etc.) will be worth reviewing, as will polls done by colleges with polling centers (Marist, Quinnipiac, Siena, etc.). But there are some excellent polling firms out there you likely have never heard of. When in doubt, remember that FiveThirtyEight gives pollsters grades based on their past performance. Despite what you may hear, polls done by major media organizations are sound. They have polling editors who understand all the nuances and have standards for how the polls are conducted. These organizations tend to partner with major polling organizations that likewise have the necessary methodological muscle.
  • Never, and I mean NEVER, trust a poll that comes from a campaign itself. At their best, campaigns will cherry-pick results from well-executed polls to make their candidate look better. At their worst, they will intentionally field a biased poll. Why? Because much of the media, even established mainstream media, will cover these polls. (As an aside, if you are a researcher, don’t trust the campaigns either. From my experience, you have about a 1 in 3 chance of being paid by a campaign for conducting their poll.)
  • Ignore any talk about the margin of error. The margin of error on a poll has become a meaningless statistic that is almost always misinterpreted by the media. A margin of error really only makes sense when a random or probability sample is being used, and without going into detail, there isn’t a single polling methodology in use today that can credibly claim to be using a probability sample. Regardless, being within the margin of error does not mean a race is too close to call. It really just means it is too close to call with 95% certainty. (A minimal calculation follows this list.)
  • When reading stories on polls in the media, read beyond the headline. Remember, headlines are not written by reporters or pollsters. They are written by editors who, in many ways, have had their journalistic integrity questioned and have become “click hunters.” Their job is to get you to click on the story, not necessarily to accurately summarize the poll. Headlines are bound to be more sensational than the polling results merit.
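For readers who want to see the arithmetic behind that margin of error point, here is a minimal sketch in Python. The poll size and percentage are hypothetical, and the formula only strictly applies to a true probability sample, which, as noted above, today’s polls cannot really claim to use.

    from math import sqrt

    def margin_of_error(p, n, z=1.96):
        # Classic margin of error for a proportion at a 95% confidence level
        # (z = 1.96); strictly valid only for a random (probability) sample.
        return z * sqrt(p * (1 - p) / n)

    # Hypothetical poll: 1,000 respondents, candidate at 48%
    moe = margin_of_error(0.48, 1000)
    print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.1 points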

All is not lost though. There are plenty of good polls out there worth looking at. Here is the routine I use when I have a few minutes and want to discover what the polls are saying.

  • First, I start at the Polling Report. This is an independent site that compiles credible polls, and it has a long history. I remember reading it in the 1990s when it was a monthly mailed newsletter. I start here because it is nothing more than raw poll results with no spin whatsoever. Their Twitter feed shows the most recently submitted polls.
  • I sometimes will also look at Real Clear Politics. They also curate polls, but they also provide analysis. I tend to just stay on their poll page and ignore the analysis.
  • FiveThirtyEight doesn’t provide polling results in great detail, but usually draws longitudinal graphs on the probability of each candidate winning the nomination and the election. Their predictions have valid science behind them and the site is non-partisan. This is usually the first site I look at to discover how others are viewing the polls.
  • For fun, I take a peek at Betfair, which is a UK online betting site that allows wagers on elections. It takes a little training to understand what the current prices mean, but in essence this site tells you which candidates people are putting their actual money on. Prediction markets fascinate me; using this site to predict who might win is fun and geeky.
  • I will often check out Pew’s politics site. Pew tends to poll more on issues than “horse race” matchups on who is winning. Pew is perhaps the most highly respected source within the research field.
  • Finally, I go to the media. I tend to start with major media sites that seem to be somewhat neutral (the BBC, NPR, USA TODAY). After reviewing these sites, I then look at Fox News and MSNBC’s website because it is interesting to see how their biases cause them to say very different things about the same polls. I stay away from the cable channels (CNN, Fox, MSNBC) just because I can’t stand hearing boomers argue back and forth for hours on end.

This is, admittedly, way harder than it used to be. We used to just be able to let Peter Jennings or Walter Cronkite tell us what the polls said. Now, there is so much out there that to truly get an objective handle on what is going on takes serious work. I truly think that if you can become an intelligent, unbiased consumer of polls it will make you a better market researcher. Reading polls objectively takes a skill that applies well to data analysis and insight generation, which is what market research is all about.

Demand Curves Always Slope Downward

Last month marked 30 years since I received my MBA. This anniversary has made me think critically about what I learned in business school and to judge what proved helpful and what did not. I will be the first to admit I have a good, yet sometimes selective memory for these things.

I had many outstanding business professors. They were far superior to the teachers I encountered as an undergraduate. There was one professor in particular I will always remember. I took an economics course from him and later took a business law course from him. I had little interest in business law and took the course solely because he was such a great lecturer.

He devoted the final lecture of his business law course to a topic that had nothing to do with law. He stated that there was a simple tenet we should always keep in mind. If we learned nothing else from our time in the program it should be this: demand curves always slope downward.

He then predicted that during our careers we would encounter many situations where people would try to convince us otherwise. We would see things in the business and popular press that ignore this basic concept. But, unlike the others, we would never fall for it, because it is the one mistake he was on a crusade to ensure that none of his students would ever get wrong.

Demand curves always slope downward.

What does this mean? It is a simple concept most kindergartners can explain: If the cost of something goes up, fewer people will want it and less of it will be sold. Simple, huh?

My professor was prescient. In the past 30 years, I have encountered dozens, perhaps hundreds of cases where somebody was convinced that a cost change won’t have an effect on volume.

I’ve seen it a lot in business planning. I worked for a consumer goods firm for a short time. One year, a product manager decided to take a price increase. The business plan she created showed that in the current year we had sold 1 million units at $3 for a revenue of $3 million. Her planning assumed a 10 percent price increase would increase revenue by $300K. Not! If the price goes to $3.30 the only guarantee I can think of is that we would sell less than 1 million units. Nevertheless, her plan got through.
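As a rough sketch of why her plan was flawed, here is the same arithmetic with a made-up price elasticity plugged in. The elasticity value is purely an assumption for illustration; the point is that any downward-sloping demand curve makes the naive revenue projection too optimistic.

    base_units = 1_000_000
    base_price = 3.00
    new_price = 3.30

    naive_revenue = base_units * new_price  # assumes volume is unaffected by price

    elasticity = -1.5                                      # assumed, for illustration only
    price_change = (new_price - base_price) / base_price   # +10%
    new_units = base_units * (1 + elasticity * price_change)
    adjusted_revenue = new_units * new_price

    print(f"Naive plan revenue:   ${naive_revenue:,.0f}")     # $3,300,000
    print(f"Elasticity-adjusted:  ${adjusted_revenue:,.0f}")  # $2,805,000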

I’ve seen public policy makers forget this simple concept as well. They will propose tax changes and then assume that the change won’t affect consumer behavior.

The error is most commonly made when people consider only price and not “cost” in a broader sense. Cost involves price but also includes other things, such as the value of your time, the inconvenience of traveling to make a purchase, etc. As Adam Smith said, the real price of something is the “toil and trouble of acquiring it.”

A good example of this happened earlier this year in the county I live in. Our county legislature decided against raising the legal smoking age from 18 to 21. A quote from my representative indicated that because surrounding counties sell tobacco to those 18 and older, raising the age to 21 in our county would not change smoking behavior because young smokers will simply drive elsewhere.

Wrong! Demand curves slope downward. Raising the age to 21 in our county most certainly will decrease tobacco use, because we have raised the cost of obtaining cigarettes by making it more inconvenient. 18-to-21-year-olds would now have to drive further to get tobacco. They might have to bug someone of legal age to buy for them. They may need to risk buying while underage. This all increases the cost to them, and they will buy less. It is okay if you are against raising the legal age, but it is not okay to use flawed logic to get there.

It is fine to argue that raising the age won’t have a large effect, but arguing that it won’t have any effect at all ignores a basic economic tenet. Demand curves slope downward. Thinking otherwise would sort of be like trying to convince a physicist that gravity only exists in some cases.

To illustrate this point, look to CVS. In 2014, CVS decided to stop selling tobacco products. This increased the cost of buying cigarettes because it became a bit less convenient to find them. Although many felt that this wouldn’t do anything to change overall smoking behavior (thus ignoring that demand curves slope downward), a recent study by CVS concluded that 95 million fewer packs of cigarettes were bought by smokers during the 8-month period studied. If we pro-rate that over the 5 years since CVS stopped selling cigarettes, the implication is that as a result of CVS’s decision, about 3 billion fewer cigarettes have been smoked per year.
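The pro-rating in that paragraph is simple arithmetic. Here it is spelled out, assuming a standard 20-cigarette pack:

    fewer_packs_8_months = 95_000_000
    fewer_packs_per_year = fewer_packs_8_months * 12 / 8   # about 142.5 million packs
    fewer_cigarettes_per_year = fewer_packs_per_year * 20  # 20 cigarettes per pack

    print(f"{fewer_cigarettes_per_year / 1e9:.1f} billion fewer cigarettes per year")
    # prints about 2.9 billion, i.e. roughly 3 billion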

Without wading too deeply into one of the hottest of hot-button political issues, I do hear things from the pro-gun lobby that clearly show they don’t recognize that demand curves slope downward. I think it is legitimate to be against gun restrictions from a philosophical viewpoint (e.g., gun ownership is a citizen right, guaranteed in the constitution, etc.). But the pro-gun lobby often claims that restricting which weapons can be sold, taxing them, making it more onerous to register them, etc. will have no effect on “bad guys” getting guns.

Of course it will. Raise the cost of something and people will do it less. I have no idea how to resolve the gun debate in the US, but I am 100% confident that if we make guns harder to obtain fewer guns will be obtained. By good guys and bad guys. Whether that is a good or bad thing depends on which side of the debate you are on.

This concept can influence behavior in unexpected ways. There are studies showing that how frequently employees interact varies inversely with the distance between their desks. I noticed this first-hand. At one point, my office was moved literally about 30 feet down a hallway, further away from where the bulk of the people on my team sat. I noticed right away that I conversed with them about half as often as a result. Why? Because the cost for us to interact increased, and our behavior changed. It became a bit more inconvenient to interact with them.

“Demand curves slope downward” can have a converse effect: lower the cost of something and people will buy more. The rise of online retail demonstrates this. Shopping online is so convenient and easy that consumers have moved to it quickly – because their cost of shopping has come down. However, in my experience you don’t see people making mistakes on this side of the argument. People seem to know that lowering cost increases volume. It is more that they often fail to see that increasing costs lowers volume.

So, 30 years later, I am going to send a link to this post to my former professor. He will be pleased that at least one of his students remembered his advice.

NOTE: Economics geeks will note that there are some cases where demand curves don’t slope downward. There are “Giffen goods” – items that consumers buy more of when the price goes up. In reality, it is rare to see a discussion of this outside of an economics class.

Jeff Bezos is right about market research

In an annual shareholder letter, Amazon’s Jeff Bezos recently stated that market research isn’t helpful. That created some backlash among researchers, who reacted defensively to the comment.

For context, below is the text of Bezos’ comment:

No customer was asking for Echo. This was definitely us wandering. Market research doesn’t help. If you had gone to a customer in 2013 and said “Would you like a black, always-on cylinder in your kitchen about the size of a Pringles can that you can talk to and ask questions, that also turns on your lights and plays music?” I guarantee you they’d have looked at you strangely and said “No, thank you.”

This comment is reflective of someone who understands the role market research can play for new products as well as its limitations.

We have been saying for years that market research does a poor job of predicting the success of truly breakthrough products. What was the demand for television sets in the 1920’s and 1930’s before there was even content to broadcast or a way to broadcast it? Just a decade ago, did consumers know they wanted a smartphone they would carry around with them all day and constantly monitor? Henry Ford once said that if he had asked customers what they wanted they would have wanted faster horses and not cars.

In 2014, we wrote a post (Writing a Good Questionnaire is Just Like Brain Surgery) that touched on this issue. In short, consumer research works best when the consumer has a clear frame-of-reference from which to draw. New product studies on line extensions or easily understandable and relatable new ideas tend to be accurate. When the new product idea is harder to understand or is outside the consumer’s frame-of-reference, research isn’t as predictive.

Research can sometimes provide the necessary frame-of-reference. We put a lot of effort into making sure that concept descriptions are understandable. We often go beyond words to do this and produce short videos instead of traditional concept statements. But even then, if the new product being tested is truly revolutionary, the research will probably predict demand inaccurately. The good news is few new product ideas are actually breakthroughs – they are usually refinements of existing ideas.

Failure to provide a frame-of-reference, or to realize that one doesn’t exist, leads to costly research errors. Because this error is not quantifiable (the way sampling error is), it gets little attention.

The mistake people are making when reacting to Bezos’ comment is they are viewing it as an indictment of market research in general. It is not. Research still works quite well for most new product forecasting studies. For new products, companies are often investing millions or tens of millions in development, production, and marketing. It usually makes sense to invest in market research to be confident these investments will pay off and to optimize the product.

It is just important to recognize that there are cases where respondents don’t have a good frame-of-reference and the research won’t accurately predict demand. Truly innovative ideas are where this is most likely to happen.

I’ve learned recently that this anti-research mentality pervades the companies in Silicon Valley. Rather than use a traditional marketing approach of identifying a need and then developing a product to fulfill the need, tech firms often concern themselves first with the technology. They develop a technology and then look for a market for it. This is a risky strategy and likely fails more than it succeeds, but the successes, like the Amazon Echo, can be massive.

I own an Amazon Echo. I bought it shortly after it was launched having little idea what it was or what it could do. Even now I am still not quite sure what it is capable of doing. It probably has a lot of potential that I can’t even conceive of. I think it is still the type of product that might not be improved much by market research, even today, when it has been on the market for years.

Is getting a driver’s license still a rite of passage for teens?

In the ’80s and ’90s, before the Millennial generation hit their teen years in force, we would use “driver’s license status” as a key classification variable in studies. Rather than split focus groups by age or grade in school, we would often place teens who had their license in one group and those who did not yet have their license in another, regardless of the topic of the group. We found that teens with licenses were more independent of their parents and more capable of making decisions without parental input. Driver’s license status was often a better predictor of consumer behavior than age.

Young people experience many rites of passage in a short period of time. These are experiences that signify a change in their development. They ride the school bus for the first time, get their first smartphone, enter high school, go to the prom, leave home to go to college, vote for the first time, etc. As marketers, we have always looked at these inflection points as times when consumer behavior shifts. Obtaining a driver’s license has traditionally been seen as a watershed moment, as it signifies a new level of independence.

However, this wisdom no longer holds. Millennials, particularly second-wave Millennials, are not as focused on obtaining driver’s licenses as their Boomer and Xer parents were. Where I grew up, we couldn’t wait until our 16th birthday so we could get our learner’s permit. My classmates and I usually took our road tests at the first opportunity. Failing the road test was a traumatic experience, as it meant remaining under our parents’ control for a few more months.

This is no longer the case. In 1983, 46% of America’s 16-year-olds had a driver’s license. That figure is now less than 25%. I was very surprised to notice that my children and their friends seemed to be in no particular rush to get their licenses. Many times, it was the parents who pushed the kids to take their road test, as the parents were tiring of chauffeuring the kids from place to place.

Several things have likely caused this change:

  • Today’s parents are highly protective of children. Parents no longer push their children to be as independent as quickly.
  • There are societal pressures. In most states, there are more stringent requirements in terms of driving experience to be able to take a road test and more restrictions on what a younger driver can do with his/her license. The license simply isn’t as valuable as it used to be.
  • Driving has peaked in the US. People are driving less frequently and fewer miles when they do. There has also been a movement of the population to urban areas which have more mass transit.
  • The decline of retail has played a part. Going to the mall was a common weekend activity for Xer teens. Now, staying home and shopping on Amazon is more common. Millennials never went to the mall to socialize.
  • Online entertainment options have proliferated. Movies and shows are readily streamed. Many teens fulfill a need for socialization via gaming, where they interact with their friends and make new ones. This need could only be met in person in the past.
  • Teens are working less, so they have less of a need to drive to work. Of course, this also means they have less of their own money, and that tethers them to their parents even longer.

There are likely many other causes. But the result is clear. Teens are getting licenses later and using them less than they did a generation ago.

As a result, researchers have lost a perfectly good measure! Obtaining a driver’s license is not as strong a rite of passage as it used to be.

We’ve been thinking about what might make a good alternative measure. What life event do young people experience that changes them in terms of granting their independence from parents? Leaving home and living independently for the first time would qualify but seems a bit late to be useful. There may be no clear marker signifying independence for Millennials, as they stay dependent on parents across a much wider time period than in the past. Or, perhaps we need to change our definition of independence.

How Did Pollsters Do in the Midterm Elections?

Our most-read blog post was published the morning after the 2016 Presidential election. It is a post we are proud of because it was composed in the haze of a shocking election result. While many were celebrating their side’s victory or in shock over their side’s losses, we mused about what the election result meant for the market research industry.

We predicted pollsters would become defensive and try to convince everyone that the polls really weren’t all that bad. In fact, the 2016 polls really weren’t. Predictions of the popular vote tended to be within a percent and a half or so of the actual result, which was better than in the previous Presidential election in 2012. However, the concern we had about the 2016 polls wasn’t related to how close they were to the result. The issue we had was one of bias: 22 of the 25 final polls we found made an inaccurate prediction, and almost every poll was off in the same direction. That is the very definition of bias in market research.

Suppose that you had 25 people each flip a coin 100 times. On average, you’d expect 50% of the flips to be heads. If, say, 48% of them were heads, you shouldn’t be all that worried, as that can happen. But if 22 of the 25 people all had less than 50% heads, you should worry that there was something wrong with the coins or the way they were flipped. That is, in essence, what happened with the polls in the 2016 election.
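For the statistically inclined, here is a back-of-the-envelope version of that coin-flip analogy in Python. The numbers are only illustrative, but they show why 22 of 25 “coins” all landing on the same side of 50% should make you suspicious.

    from math import comb

    # Probability that a single fair coin flipped 100 times shows fewer than 50 heads
    p_below_half = sum(comb(100, k) for k in range(50)) * 0.5 ** 100  # about 0.46

    # Probability that at least 22 of 25 independent fair coins all do so
    p_22_or_more = sum(
        comb(25, k) * p_below_half ** k * (1 - p_below_half) ** (25 - k)
        for k in range(22, 26)
    )

    print(f"P(one coin below 50% heads)  = {p_below_half:.2f}")   # ~0.46
    print(f"P(22+ of 25 coins below 50%) = {p_22_or_more:.6f}")   # tiny, ~0.00002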

Anyway, this post is being composed in the aftermath of the 2018 midterm elections. How did the pollsters do this time?

Let’s start with FiveThirtyEight.com. We like this site because they place probabilities around their predictions. Of course, this gives them plausible deniability when their prediction is incorrect, as probabilities are never 0% or 100%. (In 2016 they gave Donald Trump a 17% chance of winning and then defended their prediction.) But this organization looks at statistics in the right way.

Below is their final forecast and the actual result. Some results are still pending, but at this moment, this is how it shapes up.

  • Prediction: Republicans having 52 seats in the Senate. Result: It looks like Republicans will have 53 seats.
  • Prediction: Democrats holding 234 and Republicans holding 231 House seats. Result: It looks like Democrats will have 235 or 236 seats.
  • Prediction: Republicans holding 26 and Democrats holding 24 Governorships. Result: Republicans now hold 26 and Democrats hold 24 Governorships.

It looks like FiveThirtyEight.com nailed this one. We also reviewed a prediction market and state-level polls, and it seems that this time around the polls did a much better job in terms of making accurate predictions. (We must say that on election night, FiveThirtyEight’s predictions were all over the place when they were reporting in real time. But, as results settled, their pre-election forecast looked very good.)

So, why did polls seem to do so much better in 2018 than 2016? One reason is the errors cancel out when you look at large numbers of races. Sure, the polls predicted Democrats would have 234 seats, and that is roughly what they achieved. But, in how many of the 435 races did the polls make the right prediction? That is the relevant question, as it could be the case that the polls made a lot of bad predictions that compensated for each other in the total.

That is a challenging analysis to do because some races had a lot of polling, others did not, and some polls are more credible than others. A cursory look at the polls suggests that 2018 was a comeback victory for the pollsters. We did sense a bit of an over-prediction favoring the Republican Senatorial candidates, but on the House side there does not seem to be a clear bias.

So, what did the pollsters do differently? Not much, really. Online sampling continues to evolve and get better, and the 2016 result has caused polling firms to concentrate more carefully on their sampling. One issue that may have contributed to the 2016 problem is that pollsters have come to rely almost exclusively on the top two or three panel companies. Since 2016, there has been further consolidation among sample suppliers, and as a result we are seeing less variance in polls, as pollsters are largely all using the same sample sources. The same few companies provide virtually all the sample used by pollsters.

Another key difference was that turnout in the midterms was historically high. Polls are more accurate in high turnout races, as polls almost always survey many people who do not end up showing up on election day, particularly young people. However, there are large and growing demographic differences (age, gender, race/ethnicity) in supporters of each party, and that greatly complicates polling accuracy. Some demographic subgroups are far more likely than others to take part in a poll.
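One standard correction for that differential response is demographic weighting. Here is a minimal sketch; the population shares, response shares, and support figures below are all made up for illustration.

    # Hypothetical age groups: share of the electorate vs. share of poll respondents
    population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
    sample_share     = {"18-34": 0.15, "35-64": 0.50, "65+": 0.35}  # older people over-respond

    # Weight each respondent group back to its population share
    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    # Hypothetical candidate support within each responding group
    support = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}

    unweighted = sum(sample_share[g] * support[g] for g in support)
    weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
    print(f"Unweighted estimate: {unweighted:.1%}")  # 48.0%
    print(f"Weighted estimate:   {weighted:.1%}")    # 51.0%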

Pollsters are starting to get online polling right. A lot of the legacy firms in this space are still entrenched in the telephone polling world, have been protective of their aging methodologies, and have been slow to change. After nearly 20 years of online polling, the upstarts have finally forced the bigger polling firms to question their approaches and to move to a world where telephone polling just doesn’t make a lot of sense. Also, many of the old-guard telephone polling experts who largely led the resistance to online polling are now retired or have passed on.

Gerrymandering helps the pollster as well. It remains the case that relatively few districts are competitive. Pew suggests that only 1 in 7 districts was competitive. You don’t have to be a pollster to accurately predict how about 85% of the races will turn out. Only about 65 of the 435 House races were truly at stake. If you just flipped a coin in those races, your total prediction of House seats would have been fairly close.

Of course, pollsters may have just gotten lucky. We view that as unlikely, though, as there were too many races. Unlike in 2016, in 2018 we haven’t seen any evidence of bias (in a statistical sense) in the direction of the polling errors.

So, this is a good comeback success for the polling industry and should give us greater confidence for 2020. It is important that the research industry broadcasts this success. When pollsters have a bad day, like they did in 2016, it affects market research as well. Our clients lose confidence in our ability to provide accurate information. When the pollsters get it right, it helps the research industry as well.

The most selective colleges have the least effective marketing

Recently, Stanford University made headlines for deciding to stop issuing an annual press release documenting its number of applicants and acceptances.

There has been a bit of an arms race among colleges with competitive admissions to be able to claim just how selective they are. The smaller the proportion of applicants accepted, the better the college does in many ranking systems and the more exclusive the “brand” of the college becomes.

This seems to be a bit crazy, as publicizing how few students are accepted is basically broadcasting how inefficient your college marketing system has become. We can’t think of any organization beyond colleges that would even consider doing something analogous to this – broadcasting to the world that they have enticed non-qualified buyers to consider their product.

I learned firsthand how ingrained this behavior is among college admissions and marketing personnel. About five years ago I had the pleasure to speak in front of a group of about 200 college marketers and high school counselors. I created what I felt was a compelling and original talk which took on this issue. I have given perhaps 200 talks in my career, and this one might have been the single most poorly received presentation I have ever delivered.

The main thrust of my argument was that as a marketer, you want to be as targeted as possible so as to not waste resources. “Acquisition cost” is an important success metric for marketers: how much do you spend in marketing for every customer you are able to obtain? Efficiency in obtaining customers is what effective marketing is all about.

I polled the audience to ask what they felt the ideal acceptance rate would be for their applicants. Almost all responded “under 10%” and most responded “under 5%.” I then stated that the ideal acceptance rate for applicants would be 100%. The ideal scenario would be this: every applicant to your college would be accepted, would then choose to attend your institution, would go on to graduate, become a success, and morph into an engaged alumnus.

I used an analogy of a car dealership. Incenting college marketers to increase applications is akin to compensating a car salesperson for how many test drives he/she takes customers on. The dealership derives no direct value from a test drive. Every test drive that does not result in a car purchase is a waste of resources. The test drive is a means to an end and car dealers don’t tend to track it as a success metric. Instead, they focus on what matters – how many cars are sold and how much was spent in marketing to make that happen.

Colleges reward their marketers for getting students to test drive when they should be rewarding them for getting students to buy. This wouldn’t matter much if a high proportion of applicants were accepted and ended up attending. But, even at highly selective colleges, it is not uncommon for less than 10% of applicants to be accepted, less than 33% of those accepted to choose to attend, and less than 50% of those that enroll to actually end up graduating. At those rates, for every 1,000 applicants, just 17 will end up graduating from the institution. That is a success rate of 1.7%.
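The funnel arithmetic in that paragraph is easy to reproduce:

    applicants = 1000
    accept_rate = 0.10   # share of applicants accepted
    yield_rate = 0.33    # share of accepted students who enroll
    grad_rate = 0.50     # share of enrolled students who graduate

    graduates = applicants * accept_rate * yield_rate * grad_rate
    print(f"{graduates:.1f} graduates per 1,000 applicants")     # 16.5, roughly 17
    print(f"Success rate: {100 * graduates / applicants:.2f}%")  # 1.65%, roughly 1.7%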

These are metrics that in any business context would be seen as a sign of an organization in serious trouble. Can you imagine if only 10% of the people who came in your store qualified to buy your product? And then if only a third of those would actually decide to do so? And then if half of those that do buy don’t end up using your product or return it? That is pretty much what happens at selective colleges.

This issue is a failure of leadership. College marketers I have worked with can often see this problem, but they feel pressured by their Deans and College Presidents to maximize their applicant base. Granted, this can help build the college’s brand, but it is a huge drain on resources that would be better spent targeting applicants who are poised for success at the institution. It has happened because selectivity is considered important in building a college’s brand. Stanford has taken a useful first step, and hopefully other colleges will follow its lead.

Is segmentation just discrimination with an acceptable name?

A short time ago we posted a basic explanation of the Cambridge Analytica/Facebook scandal (which you can read here). In it, we stated that market segmentation and stereotyping are essentially the same thing. This presents an ethical quandary for marketers as almost every marketing organization makes heavy use of market segmentation.

To review, marketers place customers into segments so that they can better understand and serve them. Segmentation is the essence of marketing. Segments can be created along any measurable dimension, but since almost all segments have a demographic component, we will focus on that for this post.

It can be argued that segmentation and stereotyping are the same thing. Stereotyping is attaching a perceived group characteristic to an individual. For instance, if you are older, I might assume your political views lean conservative, since political views tend to be more conservative among older Americans than among younger Americans. If you are female, I might assume you are more likely to be the primary shopper for your household, since females in total do more of the family shopping than males. If you are African-American, I might assume you have a higher likelihood than others to listen to rap music, since that genre indexes high among African-Americans.

These are all stereotypes. These statements can be shown to be true of the larger group, but that doesn’t necessarily imply that they apply to every individual in the group. There are plenty of liberal older Americans, females who don’t shop at all, and African-Americans who can’t stand rap music.

Segmenting consumers (which is applying stereotypes) isn’t inherently a bad thing. It leads to customized products and better customer experiences. The potential problem isn’t stereotyping itself; it is when stereotyping crosses into discrimination that we have to be careful. As marketers, we tread a fine line. Stereotyping oversimplifies the complexity of consumers by forming an easy-to-understand story. This is useful in some contexts and discriminatory in others.

Some examples are helpful. It can be shown that African-Americans have a lower life expectancy than Whites. A life insurance company could use this information to charge African-Americans higher premiums than Whites. (Indeed, many insurance companies used to do this until various court cases prevented them from doing so.) This is a segmentation practice that many would say crosses a line to become discriminatory.

In a similar vein, car insurance companies routinely charge higher-risk groups (for example, younger drivers and males) higher rates than others. That practice has held up as not being discriminatory from a legal standpoint, largely because the discrimination is not against a traditionally disadvantaged group.

At Crux, we work with college marketers to help them make better admissions offer decisions. Many colleges will document the characteristics of their admitted students who thrive and graduate in good standing. The goal is to profile these students and then look back at how they profiled as applicants. The resulting model can be used to make future admissions decisions. Prospective student segments are established that have high probabilities of success at the institution because they look like students known to be successful, and this knowledge is used to make informed admissions offer decisions.

However, this is a case where a segmentation can cross a line and become discriminatory. Suppose that the students who succeed at the institution tend to be rich, white, female, and from high-performing high schools. By benchmarking future admissions offers against them, an algorithmic bias is created. Fewer minorities, males, and students from urban districts will be extended admissions offers. What turns out to be a good model from a business standpoint ends up perpetuating a bias and places certain demographics of students at a further disadvantage.
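To see how this plays out mechanically, here is a deliberately simplified sketch. The groups, counts, and graduation rates are invented; the point is only that a “benchmark against past successes” rule can penalize a group that is underrepresented in the historical data even when its actual success rate is nearly identical.

    # Hypothetical history of past admits: group "A" is heavily overrepresented,
    # and the two groups graduate at nearly the same rate.
    history = {
        "A": {"admits": 800, "graduates": 480},  # 60% graduation rate
        "B": {"admits": 200, "graduates": 110},  # 55% graduation rate
    }

    total_admits = sum(g["admits"] for g in history.values())
    total_grads = sum(g["graduates"] for g in history.values())
    overall_rate = total_grads / total_admits  # 59%

    for name, g in history.items():
        rate = g["graduates"] / g["admits"]
        # Naive rule: extend future offers only to groups at or above the historical average
        decision = "favored" if rate >= overall_rate else "penalized"
        print(f"Group {name}: grad rate {rate:.0%} -> {decision}")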

There is a burgeoning field in research known as “predictive analytics.” It allows data jockeys to use past data and artificial intelligence to make predictions on how consumers will react. It is currently being used mostly in media buying. Our view is that it helps with media efficiency, but only if the future can be counted on to behave like the past. Over-reliance on predictive analytics will result in marketers missing truly breakthrough trends. We don’t have to look further than the 2016 election to see how it can fail; many pollsters were basing their models on how voters had behaved in the past and in the process missed a fundamental shift in voter behavior, making some very poor predictions.

That is perhaps an extreme case, but shows that segmentations can have unintended consequences. This can happen in consumer product marketing as well. Targeted advertising can become formulaic. Brands can decline distribution in certain outlets. Ultimately, the business can suffer and miss out on new trends.

Academics (most notably Kahneman and Tversky) have established that people naturally apply heuristics to decision making. These are “rules of thumb” that are often useful because they allow us to make decisions quickly. However, these academics have also demonstrated how the use of heuristics often results in sub-optimal and biased decision making.

This thinking applies to segmentation. Segmentation allows us to make marketing decisions quickly because we assume that individuals take on the characteristics of a larger group. But, it ignores the individual variability within the group, and often that is where the true marketing insight lies.

We see this all the time in the generational work we do. Yes, Millennials as a group tend to be a bit sheltered, yet confident and team-oriented. But this does not mean all of them fit the stereotype. In fact, odds are high that if you profile an individual from the Millennial generation, he/she will only exhibit a few of the characteristics commonly attributed to the generation. Taking the stereotype too literally can lead to poor decisions.

This is not to say that marketers shouldn’t segment their customers. This is a widespread practice that clearly leads to business results. But, they should do so considering the errors and biases applying segments can create, and think hard about whether this can unintentionally discriminate and, ultimately, harm the business in the long term.

