
International Studies

August 20, 2009 12:47 PM

How America’s mania for college rankings went global.

By Ben Wildavsky

Other attempts to grade the world’s colleges and universities have also blossomed. The “Webometrics Ranking of World Universities,” put together by a division of the Spanish National Research Council, the country’s largest public research body, measures Web-based academic activities. The International Professional Rankings of World Universities, compiled by France’s prestigious Mines ParisTech, are based solely on the number of a university’s alumni serving in top positions in Fortune 500 companies.

These international rankings serve as broad measures of quality for nations intent on improving their international standing. They are also being used in some cases as the equivalent of the Good Housekeeping seal of approval. The Mongolian government, for instance, has weighed a policy that would give study-abroad funding only to students admitted to a university that appears in one of the global rankings. In the Netherlands, an immigration-reform proposal aimed at attracting more skilled migrants would deny visas to all but graduates of universities ranked in the two top tiers of the global league tables.

Other countries are trying to revamp their university systems in the hope of achieving higher stature in the rankings. “Excellence initiatives in Germany, Russia, China and France are policy responses to rankings,” Ellen Hazelkorn, director of the Higher Education Policy Research Unit at the Dublin Institute of Technology, wrote in the online publication University World News. “The pace of higher education reform is likely to quicken in the belief that more elite, competitive and better institutions are equivalent to being higher ranked.” A recent study of rankings in four countries, conducted by the Institute for Higher Education Policy, found that rankings had some beneficial effects on how universities make decisions, including more data-based assessment of success, but also some potentially negative ones, like encouraging a focus on elite research universities at the expense of those that tend to serve less-advantaged students. In some cases, universities are playing the game, however grudgingly, with cold, hard cash: in Australia, a number of vice chancellors have received salary bonuses tied to their success in nudging their campuses up the rankings.

All the existing international rankings have significant failings. Spain’s “Webometrics” effort is creative but necessarily very narrow. Shanghai’s approach is heavily biased toward science-oriented institutions, and gives universities dubious incentives to chase Nobel winners whose landmark work may not be recent enough to add meaningfully to the institution’s intellectual firepower. France’s Professional Rankings just happen to place far more French schools in the top echelon of universities than do other global rankings—a result that led to a memorable tautological headline in University World News: “French Do Well in French World Rankings.” Critics of Times Higher note that its highly volatile rankings depend heavily on an e-mail survey with a minuscule response rate and a marked bias toward institutions in the U.K. and former British Empire.

The authors of the rankings themselves are often up front about their shortcomings. “Any criticisms I’m quite happy to print,” says Ann Mroz, a veteran journalist who edits Times Higher Education. “I would prefer that people came to us and there was some sort of debate about it and see whether maybe we have got a few things wrong. Until we discuss it we’re never going to know.” Mroz says that she herself is uncomfortable with the use of faculty-student ratios in the Times Higher rankings. “It’s so crude,” she says. “Does it tell you how good the teaching is?” She would like to use a better measure, she says—if one can be found.

That’s the crux of the matter: students and governments love rankings, and people will continue to produce them, however problematic they may be, as long as that appetite exists. Valérie Pécresse, France’s minister of higher education and research, once quipped that the problem with rankings was that they existed. But if that’s the problem, it’s an insoluble one—international rankings are quite clearly here to stay. The question is, how do we make them better?

Many organizations, mostly outside the United States, are tackling this problem. The European Union, for instance, just announced that it is developing a new “multi-dimensional global university ranking.” The new assessment, still in the exploratory stage and mostly focused on Europe, aims to move beyond research in the hard sciences to include the humanities and social sciences, as well as teaching quality and “community outreach.” But for now, the best bet in the rankings world may be an initiative that the Organisation for Economic Co-operation and Development has in the works, called the International Assessment of Higher Education Learning Outcomes, or AHELO. On an aggregate level, a nation’s success in building its higher education system is typically measured by enrollment levels and graduation rates—in other words, measures of quantity. AHELO is based on the premise that those measures should be accompanied by assessments of quality—how well professors are teaching, and how well students are actually learning. It’s an approach that focuses on the missing link in the rankings explosion: outputs and value-added rather than inputs and reputation.

It’s a difficult nut to crack, since there’s no standardized measure of learning quality, or even much agreement on what such a measure might look like. The nascent AHELO’s answer to that conundrum relies on four major components. The first measures students’ skills in areas such as analytical reasoning, writing, and applying theory to practice. The second measures subject-specific knowledge. The third looks at the context in which students learn, including their own demographic backgrounds and the characteristics of the universities they attend. The fourth would attempt to gauge the value a university adds: how much its students actually learn between entering and graduating.

Ben Wildavsky is a senior fellow at the Ewing Marion Kauffman Foundation and a guest scholar at the Brookings Institution. He is working on a book about the globalization of higher education, forthcoming from Princeton University Press in 2010.

Comments

  • Phil Baty on December 18, 2009 10:03 AM:

    The Times Higher Education World University Rankings are changing. The magazine has ended its rankings partnership with QS and will now produce the annual rankings with Thomson Reuters. THE and TR are currently consulting on a new, more rigorous and transparent methodology. For news on the development of the rankings, and to have your say, visit: http://bit.ly/ErAag
    Phil Baty