Many of us in the market research field lament our poor timing in choosing when to be born. We went to college and graduate school and entered a field focused on data analysis at a time when it was only a marginally marketable skill. That skill has since exploded in value to employers. Many of us feel we were at the forefront of an eventual “nerd takeover” of the business world.
We can explain why in two words: “Big Data.” Never has there been more data aching to be analyzed. Consumers used to be tracked in just two ways: when they bought something and when they took the time to interrupt their dinners to answer a telephone survey in the evening. Now, people are being tracked in ways unimaginable just a few years ago and perhaps in ways they don’t even realize.
The digital trail we leave as we navigate the Internet is powerful and permanent. It used to be that marketers could learn only where we went online. Now, they can also learn who we are, what our friends do, and where we are when we do various things.
Yet, despite all of this data and the attempts to harvest it, marketers often seem no more knowledgeable about their customers than they were a generation ago. This conundrum is often chalked up to a failing of researchers: we have all this data, but we just haven’t figured out how to separate the signal from the noise.
While this may well be true, there also might be a “hype curve” phenomenon at work. The hype curve, made most famous by Gartner as the “hype cycle,” is often studied in MBA programs. In short, when we are faced with something disruptive, we tend to overstate its potential. Then, when the reality inevitably fails to meet the hype, we adjust our expectations downward… but by too much, and we become overly critical of the phenomenon’s potential. In the final phase, the actual potential establishes itself – somewhere between the initial hype and our revised, lowered expectations.
The hype curve can be applied in many contexts. I’ve seen it applied in politics. President Obama came to office amid sky-high (and unrealistic) expectations. As he inevitably failed to meet them all, people revised their assessment of his potential too far downward. Eventually, historians will judge his performance as falling somewhere between these two extremes.
I’ve seen the concept applied to music artists. An artist puts out an incredible first album. Fans start touting them as the next Beatles. The second album can’t possibly meet those expectations, and when it comes out, the group’s popularity starts to fade. By the time the third album is released, their popularity settles at a more appropriate level.
The hype curve concept is most commonly applied to technology. A new gadget comes out. We hear about how it will save the world and change our lives. It fails to meet those expectations, and people start to think it won’t make much of a difference at all. Over time, the gadget finds its level – it becomes a useful addition to our lives, and its reality falls between the initial hype and the revised expectation.
The hype curve applies to Big Data as well, and we are in the early stages of it. Our expectations of what can be done with the incredible amounts of data out there are overstated. Soon, those expectations will be revised downward, and people will start to underestimate what can be done. Eventually, like every other innovation, Big Data will find its level.
So, right now is the perfect time to graduate from college with strong data analysis skills – too late for many of us, unfortunately!