The quality of rankings and the schools listed

There are reportedly around 16,000 business schools worldwide, with many of the top schools vying for the same students. In such a buyer's market, brand is everything. "You can get an excellent education at Cambridge or Princeton. Even if you don't, others will think you did, so the brand name will help you through life", wrote Simon Kuper in the Financial Times.

By Laurent Ortmans
Quality assurance and ranking manager
ESMT Berlin

The AACSB and EQUIS accreditations add a badge of quality, but it is still a very crowded field, with about 1,000 accredited schools, including 47 from France. Rankings add another filter and provide a benchmark for schools against their peer institutions. As a bit of history, Businessweek published the first MBA ranking back in 1988, with Northwestern's Kellogg School of Management in first place. Thirty years later, business school rankings are ubiquitous.

The appeal of these rankings lies in their apparent simplicity. They condense a lot of information into a simple ordered list. Rankings therefore allow candidates to quickly narrow down the list of schools to which they wish to apply. "Their popularity with readers suggests that they do serve a valuable purpose", says Andrew Jack, the Financial Times (FT) Global Education editor. "There are however no substitutes for individuals doing their own analysis, research and visits".

Indeed, rankings show, on average, the relative performance of the listed programmes compared to each other, according to a finite set of unequally weighted criteria. "There is no single definition of a quality business degree", says Della Bradshaw, the former business education editor at the FT. "It will change depending on who asks the question: a quality programme for a professor might relate to research publications; for a student it might relate to the calibre of recruiters." Changing the weights of the criteria can lead to a completely different result. Last year I looked at the impact of decreasing the weight of the salary criteria in the FT MBA 2019 ranking. For example, if the weight of "salary increase" were reduced from 20% to 15%, more than half of the US schools would lose on average three places.
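To illustrate how much the weights matter, here is a minimal sketch in Python. The school names, scores and the second criterion are invented, and the calculation is only a simplified stand-in for the FT's actual formula:

```python
# Minimal sketch (invented scores and weights, not the FT's actual data):
# re-weighting two criteria and re-ranking the schools.
import pandas as pd

schools = pd.DataFrame({
    "school": ["A", "B", "C", "D"],
    "salary_increase": [1.0, 0.2, 0.8, 0.5],   # already-standardised scores
    "career_progress": [0.5, 0.68, 0.9, 0.3],
})

def rank_with(weights):
    # Weighted sum of the criteria, then convert to rank positions (1 = best).
    total = sum(schools[criterion] * w for criterion, w in weights.items())
    return total.rank(ascending=False).astype(int)

schools["rank_20pct_salary"] = rank_with({"salary_increase": 0.20, "career_progress": 0.80})
schools["rank_15pct_salary"] = rank_with({"salary_increase": 0.15, "career_progress": 0.85})
print(schools)  # school A, strong on salary but weaker elsewhere, slips behind B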

Are these rankings any good? I published a "ranking of business school rankings" in 2018 based on five criteria: Management; Data collection; Quality Assurance; Clear and Independent methodology; and Influence. Winfried Ruigrok, Dean of the Executive School of Management, Technology & Law at the University of St. Gallen, identified five similar criteria. He wrote that rankings should be judged on their Validity; Reliability; Data Integrity; Independence; and Management (see the linked articles for definitions of these criteria).

Andrew Jack agrees that rankings are, in general, only as good as their Quality Assurance process. "We are committed to using robust data, verifying it with our team and even auditing," he said. On the other hand, Daniel Kahn, the Senior Research Analyst who heads QS's business school rankings, puts the methodology first. "I would argue a ranking is actually only as good as its methodology", he noted. Both Andrew and Daniel are right: a strong methodology and a robust Quality Assurance process are both essential.

The most influential international rankings these days are those published by the Financial Times, The Economist and QS. The FT launched its first ranking in 1999, The Economist in 2011 and QS in 2017. These three rankings have only a few things in common. They all measure gender and international diversity of students and faculty, even if their definition of faculty varies quite significantly. They also include some salary measures, which are again very different from each other. Let us look at each of them in turn.

Financial Times

In 1997/98, the FT wanted to find out which business schools were producing the top global managers for the twenty-first century. Since then, the FT rankings have focused on the career progression of recent graduates; the students' international exposure; and which business schools generate new ideas through their research.

On the plus side, the ranking is free of commercial interests. The FT reviews its methodology regularly, as with the introduction of the corporate social responsibility (CSR) criterion in 2018. It publishes all the criteria individually, displaying either the actual data or individual ranks. The FT website is easy to navigate and one can download all the ranking data at once in a spreadsheet. The calculations, based on standardized z-scores, are very simple. With a bit of skill and expertise, one can estimate the total ranking score for each individual school. Finally, the FT ranking is the only one that formally audits the answers of the participants to its MBA rankings.
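For readers who want a feel for that kind of calculation, the sketch below standardises one invented criterion into z-scores and applies a weight to it. It is a simplification under assumed figures, not the FT's exact procedure:

```python
# Rough sketch of z-score standardisation for one ranking criterion;
# the salary figures and the 20% weight are invented, not the FT's data.
import statistics

salaries = {"School A": 165000, "School B": 142000, "School C": 128000}

mean = statistics.mean(salaries.values())
stdev = statistics.stdev(salaries.values())
z_scores = {school: (value - mean) / stdev for school, value in salaries.items()}

# A school's overall score is then a weighted sum of its z-scores across criteria.
weight = 0.20  # e.g. a 20% weight for this criterion
contributions = {school: weight * z for school, z in z_scores.items()}
print(contributions)
```

With every criterion converted to the same z-score scale, a total score is just a weighted sum, which is why the overall result can be re-estimated from the published criterion data.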

The main drawback is that the FT surveys alumni three years after graduation, so it takes a long time for any changes made by schools to show up in the ranking data. A school that revamps its curriculum in 2020, for the benefit of the students enrolling in its two-year MiM cohort in 2021, will not see any impact until the FT 2026 MiM ranking. By then, the curriculum may have changed again. Equally, new programmes have to wait a long while before becoming eligible. In one criterion, the alumni rate their school's career services as they were four or five years ago.

The Economist

Launched twelve years after the FT ranking, The Economist came up with a different methodology. It opted to focus on new career opportunities, the educational experience, the increase in salary and networking. Uniquely among the main rankings, The Economist relies significantly on students' own ratings.

On the plus side, it is also free of commercial interests. The Economist uses recent data, surveying alumni less than two years after graduation as well as current students. Nick Parsons, The Economist's rankings administrator, with eight years of experience in the job, manages the Quality Assurance process deftly. Even though it is not possible to download all the ranking data at once, The Economist publishes the data for every criterion individually.

The main downside of the ranking is that the student ratings are subjective. Students lack a benchmark that would enable them to make an objective assessment. Furthermore, most of these scores are very close, and the smallest of variations, a few hundredths of a point or less, can lead to huge gains or drops in rank. The scores are so close that any differences are likely due to random factors, making it nearly impossible to read anything meaningful into them. Looking at the table below, how does one decide which of these schools has the best faculty, programme, and students?

Table 1: Economist MBA ranking 2019
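As a rough illustration of this sensitivity (the scores below are invented, not taken from the table above), a shift of a few hundredths of a point is enough to reorder three schools:

```python
# Invented survey scores: when ratings cluster this tightly, a change well
# within survey noise is enough to reshuffle the ranking.
scores = {"School X": 4.31, "School Y": 4.30, "School Z": 4.29}

def ordered(s):
    # Sort school names from highest to lowest score.
    return [name for name, _ in sorted(s.items(), key=lambda kv: kv[1], reverse=True)]

print(ordered(scores))      # ['School X', 'School Y', 'School Z']
scores["School Z"] += 0.03  # a few hundredths of a point
print(ordered(scores))      # ['School Z', 'School X', 'School Y']
```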

QS

More renowned for its university rankings, QS has produced lists of business schools for over 20 years. Using a methodology similar to that of its university rankings, QS launched its first business school rankings in 2017. Unlike the FT and The Economist rankings, which are limited to the top 100 schools, QS did not introduce a size limit: its latest MBA ranking includes 258 schools.

QS does not survey students or alumni. Instead, its rankings are built around two large reputation surveys of academics and employers. “There is an idea that if you value the reputation of a school then you will trust that all the programs they produce will be of a certain quality”, said Daniel.

One issue is that reputation is a very subjective measure. Another is that most employers have only local connections. Finally, it is not clear whether QS adjusts for cohort sizes: a school with 300 students will have a smaller network of employers than a school with thousands of students.

The most controversial criterion is perhaps the 10-year return on investment. QS explains in its online methodology that it “can often be one of the hardest metrics to accurately predict, with many permutations and possibilities”.

The main thing in favour of QS is the size of its rankings, which allows broader coverage. "For many years there were accredited business schools which were missing out on a top 100 ranking", noted Daniel, "and thus unfortunately were not on the radar of prospective candidates who were either not eligible, or interested in what the top 100 had to offer".

Conclusion

These three rankings all have their pros and cons, and despite their different methodologies, they all produce similar rankings for the top 25-30 schools. They then differ increasingly for the lower-ranked schools. Prospective students, employers and whoever else consumes these rankings need to be savvy and read the small print. In the end, all the ranked schools are very good. They may not all have an internationally recognised name, but they are all very strong locally or nationally. That is enough for many students.