So now we know. After months of hype and speculation, the winners were announced last month. These were not the Oscars, but the World Reputation Rankings, “the world’s definitive index of academic prestige,” according to its publisher, Times Higher Education, a weekly magazine based in London.
Harvard had slipped to fourth place — behind the California Institute of Technology, Oxford and Stanford — in the magazine’s World University Rankings last fall. Would it remain on top in the reputation results? It did.
Which was not exactly surprising. Nor, despite the best efforts of Times Higher Education to drum up excitement, were the overall results. Although the precise order has varied, the same six names — M.I.T., Harvard, Stanford, the University of California at Berkeley, Oxford and Cambridge — have filled the top six slots since the reputation rankings were first published in 2011. And while American dominance has slipped slightly, from 45 schools in the top 100 two years ago to 43 this year, no other country comes anywhere close. Second-place Britain fills just 9 of the top 100 places; Australia, in third place, has only 6.
A good reputation, wrote Publilius Syrus, a first-century B.C. Latin wit, is worth more than money. For universities, reputation is money: more applications, more tuition dollars, greater levels of alumni giving.
Though academic rankings have existed for decades in the United States, international rankings are a more recent development. But like the influential U.S. News and World Report lists, they are increasingly difficult to ignore. The appeal of an arbiter of educational quality, however subjective, has spread across the globe as students become ever more mobile, and universities, including American ones, look ever farther beyond their borders.
Also like U.S. News, international rankings have come under attack for their methodologies and impact. A report released on April 12 by the European University Association warns of the growing variety of rankings and their influence on universities and nations’ educational policies. The report cites work by Ellen Hazelkorn, who in her 2011 book “Rankings and the Reshaping of Higher Education” found that in some countries rankings determine student eligibility for government scholarships, academic partners for collaboration, even immigration status. Dr. Hazelkorn, dean of the Graduate Research School at the Dublin Institute of Technology, cites statements by the governments of Mongolia, Qatar and Kazakhstan limiting scholarships to students admitted to the top 100 universities. Brazilian universities allow collaboration only with universities in the top 500, she says, and Singapore only with the top 100. In the Netherlands, immigration law favors foreigners who have graduated from the top 150 universities; in Denmark, graduates of the top 20 face easier entry requirements.
How did rankings acquire so much influence?
In 2003, when Nian Cai Liu, a chemistry professor at Shanghai Jiao Tong University, issued the first Academic Ranking of World Universities — known as the Shanghai rankings — the aim was simply to supply a benchmark to measure China’s progress. The theory was that by measuring such objective factors as the number of faculty or alumni who win Nobel Prizes or Fields Medals in mathematics, or whose work is frequently cited by other researchers, the rankings would help Chinese policy makers decide which of the country’s universities deserved more resources.
But if the intended audience was small, the effect on the academic landscape was like lighting a match to find your way out of a fireworks factory.
The absence of French or German universities in the top 20 shocked officials in those countries. With the University of Tokyo its only institution in the top 20 (at 19), Japan was not thrilled either. And as even Mr. Liu and his colleagues admitted, counting publications and prizes tends to slight social science and humanities, while the focus on research means liberal arts colleges don’t figure at all.
Such criticism, however, was easily lost amid the clamor as governments urged their universities to improve their standings. Indonesia, Malaysia, South Korea and Taiwan announced programs to lift at least one university into the top 100; Nigeria pledged to put two universities in the top 200. The ranking’s biomedical bias makes it particularly influential in the developing world, where science, technology, engineering and math are seen as holding the key to economic prosperity.
The explosion of interest — the Shanghai Web site gets thousands of visitors every day — also prompted imitators.
The first was Quacquarelli Symonds, a London-based company that runs education events and publishes guides to postgraduate study. QS got into the rankings business in 2004 in partnership with Times Higher Education; the two parted ways in 2009 amid disagreement over how reputation was being measured.
Today, reputation accounts for 50 percent of a university’s QS score. The Times Higher Education calculations, on the other hand, limit reputation to 33 percent of the overall score in its World University Rankings. And the survey is by invitation only, while QS allows any academic staff member to respond. The dangers of such an approach can be seen in a recent letter from Michael B. Murphy, president of University College Cork in Ireland, asking each faculty member to recruit three acquaintances at other institutions to fill out the survey to lift the university’s QS ranking.
“The more reputational measures you have, the less accurate a ranking is,” says Philip G. Altbach, the director of the Center for International Higher Education at Boston College. “Reputational surveys privilege research. Academics might be able to judge departments in their own fields in terms of research prominence, but they have no way of judging teaching quality or learning.”
Several years after its split with QS, Times Higher Education spun off its own reputation ranking, based solely on opinion surveys and billed as a league of 100 “global university super brands.” Phil Baty, the magazine’s rankings editor, acknowledges their subjectivity. “But in a highly competitive global higher education market, reputation matters deeply,” he says. “Research has shown that a university’s reputation is the No. 1 consideration for overseas students — above fees and even course content — and it is also a top priority for globally mobile faculty.”
A news release for last month’s World Reputation Rankings quotes Paul Wells, a Canadian journalist, who calls the listing “awesome because it skips objective criteria and goes straight to the stuff that makes people most insecure.”
Indeed, in a statement accompanying the ranking release, Mr. Baty did his best to stoke fears: “New forces in higher education are emerging, especially in the East Asian countries that are investing heavily in building world-class universities, so the traditional elite must be very careful. In the three years that the World Reputation Rankings have been running, we have clear evidence that the U.S. and the U.K. in particular are losing ground.”
In fact, the United States lost only a single place in the top 100, and though British universities have lost ground, so have China’s two flagships, Tsinghua (which dropped from 30th to 35th place) and Peking University (38th to 45th).
While multiple objective factors figure in the World University Rankings, whether they indicate quality is a matter of debate. David G. Blanchflower, a Dartmouth economics professor, has criticized the criterion — also used by QS — that counts the number of faculty members or students from abroad. He finds it biased against American institutions, which have fewer of them.
Times Higher Education has tweaked its methodology several times since splitting with Quacquarelli Symonds. In one well-publicized embarrassment, the relatively obscure Alexandria University in Egypt catapulted ahead of both Harvard and Stanford in the category of research impact, or citations, helping it land in the top 200 in the 2010-11 rankings. “There was one mathematician, Mohamed El Naschie, who kept citing himself, and getting his friends to cite him,” says Richard Holmes, a professor at Universiti Teknologi MARA in Malaysia, who pointed out the error on his blog, University Ranking Watch.
The model was revised once more, only to fall afoul of Mr. Holmes again this year, when the Moscow State Engineering Physics Institute came in first for its research impact — tied with Rice University and ahead of M.I.T., Caltech, Harvard and Stanford. Mr. Holmes, the rankings equivalent of the baseball statistics maven Bill James, explains that someone at the Moscow campus was listed as co-author of a biennial review of particle physics — a reference routinely cited by anyone publishing in that field. He also pointed out that because all the citations occurred within two years, they carried greater weight in the model. Impact is also benchmarked separately for each country, giving Russian publications greater weight.
“You can’t say a given university in Malaysia is better than one in the United States just because universities in Malaysia aren’t very good,” Mr. Holmes says.
Noting that the impact of citations was just one of 13 performance indicators, Mr. Baty points out that the Moscow institute did not make the top 200 schools over all. By normalizing the results for subject areas and nationality, he says, “we reward quality and not just sheer quantity, and smaller specialist institutions with highly focused research can do very well in this indicator — we make no apology for that.”
QS, which supplies U.S. News with its 400 “World’s Best Universities,” has also come under fire recently, for offering “universities the opportunity to highlight their strength” by paying a fee to be rated on a scale of one to five stars. The University of Limerick in Ireland did not make the Shanghai ranking’s top 500 or the QS top 400. Yet after paying a one-time audit fee of $9,850 and an annual license fee of $6,850, Limerick is now able to boast that Quacquarelli Symonds has awarded it “5-star ratings across the areas of infrastructure, teaching, engagement and internationalization.”
Ben Sowter, who oversees QS's ratings and rankings, rejects any suggestion that universities can buy stars: "Just because accreditation agencies charge the universities, that doesn't mean they are biased."
The influence of international rankings is likely to grow with the globalization of higher education. Countries whose universities rely on fee-paying foreign students to balance their books, or which see higher education as a potential earner of export income, are especially vulnerable to the vagaries of reviewers. Dr. Hazelkorn has written some of the harshest criticism of rankings and of companies that, she says, “have seen a commercial opening and are going hell for leather.”
Yet even she says: “When you are recruiting students, in many countries the first thing they ask you is, ‘Where are you ranked?’ Not being ranked makes you invisible.”
D.D. Guttenplan is education reporter for The International Herald Tribune.