The Paradox of College Rating Systems

It is the time of year when various magazines and services promote their own versions of college rating systems. For many of these publications, the college rating issue is the single most popular issue of the year. And it is no wonder: as the cost of a college education has grown, so has the concern of parents and students that this investment yields an adequate return.

College admissions officials face an interesting paradox. Privately, they loathe these systems, hate their methodology, and struggle to convince their administration that these systems are irrelevant. Publicly, though, they do everything they can to improve their rankings and broadcast them loudly if they like where they ended up.

These rating systems are fundamentally flawed. That is a strong statement. By fundamentally flawed I mean that they measure things of little relevance to a parent or a child who is evaluating colleges, which, presumably, is their raison d'être. Perhaps of even greater concern is that colleges themselves typically don't measure what is actually important: the quality of the undergraduate education they are providing.

It is a fairly basic principle in business that the efficiency of a system is measured by its output divided by its input. In other words, it is the difference between the quality of what the system yields and the quality of what it takes in that indicates its worth. This is termed "value-add" in many contexts.

This was brought to my attention many years ago by Ronald Yeaple, my faculty advisor when I was an MBA student (and among the best teachers I've ever encountered). He stopped by to reconnect, and we discussed a book he had recently written called "Does It Pay to Get an MBA?" One of the arguments he presents in this book is that MBA programs can be evaluated in a fairly straightforward manner. For better or worse, a primary reason people choose to get an MBA is to further their careers and improve their salary prospects. So, starting salaries, or salaries a few years out, are effective "output" measures for MBA graduates.

Most business school rating systems take this into account and include salary information in their ratings formulas. But they include only half of what is needed to measure the efficiency of a system. It is no surprise that the business schools that come to the top of these lists every year happen to be the ones that cost the most to attend and yield the highest salaries.

But these programs also attract the best and the brightest. This raises an important question: is it the program or the student that matters? Yes, Harvard and Stanford have high starting salaries for the graduates of their MBA programs. But is this because of a high value-add experience at these institutions, or is it driven more by the quality of student they attract? In other words, would the student Harvard attracts have garnered a similar salary after graduating from another, perhaps less esteemed and less expensive, business school?

It isn't too much of a stretch to state that a student who gets into a top-notch program is likely to do very well regardless of which MBA school he or she chooses. The issue is how to measure this. Fortunately, there is an objective measure of the quality of an incoming MBA student. It isn't perfect (few measures are), but it is widely accepted: the student's GMAT score.

So, the efficiency of an MBA program can be measured by dividing its output (salary information) by its input (the GMAT scores of its students). This is a measure of the value-add of the institution: what it has added to the student beyond what the student brings to the program.

Ron Yeaple did this for MBA programs… in an inventive way. He took the average GMAT score of a class year of MBA students for each of the MBA programs in the country and used it as a predictor (independent) variable in a regression analysis. His dependent variable was the starting salaries of graduates of the program.

As you can imagine, there is a positive relationship between the two. Business schools with higher incoming GMAT scores tend to have higher starting salaries. Those with lower GMAT scores tend to have lower starting salaries.

But the interesting part is that not all MBA programs fall exactly on the regression line. Some are above it and some are below. Those above the regression line are the schools whose students are earning more as graduates than their incoming GMAT scores would predict. Those below the line are underperforming: their students are earning less than their GMAT scores would predict. When Ron ranked the schools by their deviation from the regression line, the list did not correspond to any list I have seen. This was a beautiful analysis, and it reminds me of why I found Ron to be such an outstanding professor.
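The analysis described above can be sketched in a few lines of code: fit a regression of starting salary on average GMAT score, then rank programs by their residuals (the deviation above or below the fitted line). The school names and numbers below are invented purely for illustration; they are not Ron Yeaple's data.

```python
import numpy as np

# Hypothetical data: average incoming GMAT score and average starting
# salary (in $000s) for five fictional MBA programs. These figures are
# made up for illustration only.
schools = ["A", "B", "C", "D", "E"]
gmat = np.array([720.0, 700.0, 680.0, 650.0, 640.0])
salary = np.array([145.0, 130.0, 128.0, 110.0, 118.0])

# Fit the regression line: salary predicted from incoming GMAT score.
slope, intercept = np.polyfit(gmat, salary, 1)
predicted = slope * gmat + intercept

# A school's "value-add" is its residual: actual salary minus the
# salary its incoming GMAT scores would predict.
residuals = salary - predicted

# Rank schools by residual, most value added first.
ranking = sorted(zip(schools, residuals), key=lambda pair: -pair[1])
for name, resid in ranking:
    print(f"School {name}: residual {resid:+.1f}")
```

Note that the resulting order can differ from a ranking by raw salary: a modest-salary school can sit well above the line if its incoming scores are low, which is exactly the point of the value-add framing.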

It is more challenging to apply this value-added concept to undergraduate education. For one thing, a proper outcome variable is harder to define. How do we measure the output of a college? There is more to a quality institution than the salaries of its graduates. But in today's world, salary has to be an important part of the outcome measure.

Outcome measures that don't somehow correct for the incoming quality of the student base can't do an adequate job of ranking institutions. If you ranked colleges by the mean SAT scores of their incoming freshmen, you would have a list that looks a lot like the lists that are published each year. How does that say anything about the quality of the education a college is providing?
