September/October 2013

Europe Enters the College Rankings Game

Could the EU’s new U-Multirank someday challenge U.S. News?

By Ellen Hazelkorn

During the feasibility phase (2009-2011), about 150 HEIs, primarily from Europe, participated. This was a source of disappointment, particularly since only fifty universities from outside Europe and only two from the U.S. signed up. The implementation phase was launched in January 2013, during the Irish Presidency of the European Council. The threshold of 500 universities has been met, with HEIs from more than sixty countries, in line with targets of 75 percent from EU countries and 25 percent from non-EU nations. This phase is being funded with 2 million euros ($2.6 million) for two years by the European Commission, with a possibility of another two years of funding in 2015-16. The ultimate intention is for the ranking to be supported by a foundation or similar independent consortium.

Data collection for the current phase is due to begin shortly, with first results expected in early 2014. This phase will focus on institutional and field-based rankings, including mechanical and electrical engineering, business, and physics. The next phase, due at the end of 2014, will cover universities providing degrees in computer science/IT, sociology, psychology, music, and social work. In response to criticism from research-intensive universities, U-Multirank will facilitate comparison by institutional type. A consultation process on the refined indicators and design of the online tool will continue in parallel with implementation.

The major difficulty plaguing any global ranking is the choice of indicators and the availability of meaningful international comparative data. These issues lie at the heart of U-Multirank’s innovative aspects but are also the source of continued criticism and skepticism. Other concerns center on the purpose and likely use of the ranking’s results, its cost, and its name.

The most vocal opposition has come from LERU, the League of European Research Universities, which represents twenty-one research-intensive institutions across Europe, including the universities of Cambridge and Oxford, University College London, Imperial College London, and the University of Edinburgh. LERU formally withdrew its support for the project in January 2013, citing concerns about the need for and cost of U-Multirank—a view that also found favor within the UK House of Lords. In March 2013, following a four-month inquiry into the EU’s contribution to the modernization of European higher education, the House of Lords’ EU Social Policies and Consumer Protection Sub-Committee released a report saying that U-Multirank was not a priority for the EU at this time and expressing concern about the administrative burden being placed on institutions to provide the requisite data. It also questioned whether the rankings would ultimately be used to allocate resources for various European research or other programs.

It is fair to say that this criticism should be taken with a grain of salt. A pernicious Euroskeptic sentiment runs through many UK parliamentary and government statements and is especially strong at the moment. The British government, led by the Conservative Prime Minister David Cameron, is struggling to maintain a balanced discussion even with the involvement of the junior coalition partner, the Liberal Democrats, a strong supporter of the EU. Cameron has been forced to promise a simple “in-out” referendum on EU membership in reaction to the growing strength of the right-wing UK Independence Party (UKIP), which is eating into his own party’s support.

LERU’s criticism is probably best understood in terms of its membership, in which UK-based universities play a leading role. They have arguably benefited the most from the English-language bias that permeates the Times Higher Education and QS rankings—and it could be said that they have the most to lose from an alternative format. LERU aside, other UK universities have applied to be part of U-Multirank.

Nonetheless, genuine issues have arisen, most thoroughly documented in two substantial reports by the European University Association,[*] which is also conducting a study of the impact of rankings on European higher education institutions.

Indicators

The choice of indicators is always a source of contention, arising from whether the indicators measure something meaningful or simply what’s easy to count (to paraphrase Einstein). The indicators used for institutional and field-based rankings differ somewhat, but overall they combine traditional measures with some innovative new ones, such as interdisciplinary programs, art-based research outputs, and regional engagement. They also include student satisfaction data.

However, input and output indicators are used interchangeably. For example, “continuous professional development,” or CPD, activity is used as an indicator of knowledge transfer but is measured simply as the number of such courses offered per full-time academic staff member. Similar comments apply to counting the number of staff employed in the technology transfer office or the number of international students. Arts, humanities, and social science research remain underrepresented because of reliance on traditional bibliometric databases, such as Web of Science and Scopus, and engagement and co-publications are viewed primarily through a techno-science lens.

U-Multirank may have more indicators than other rankings, but it has not cracked the problem of measuring the quality of teaching and learning. It had hoped that the Organisation for Economic Co-operation and Development’s Assessment of Higher Education Learning Outcomes (AHELO) initiative would provide much-needed data, and chose its initial field-based rankings to align with it. But the demise of AHELO, announced in March of this year after an estimated expenditure of $10 million, put an end to that dream. So it’s back to reliance on expenditure, graduation rates, and academic performance—which, as we know, are poor proxies for teaching and learning outcomes.

The U-Multirank team has worked closely with stakeholders to identify new and more useful indicators. But ultimately, intensity is equated with quality: the more there is of a particular activity, the better it is assumed to be. This problem is apparent in oral presentations given by team members, and it carries over into the sunburst graphic, where more activity appears as longer (or shorter) legs, resulting in inevitable misinterpretation.

Purpose and Audience

U-Multirank is designed to challenge the methodology and dominance of the big three global rankings—in particular, Shanghai’s ARWU and the Times Higher Education and QS rankings. If it can entice sufficient numbers of U.S. universities—and so far fewer than twenty have signed up—it will also be able to challenge the rankings of U.S. News & World Report.

Use of the term “rankings” to describe what is in effect “banding” has, however, raised some hackles as well as suspicion that it will ultimately produce or facilitate an ordinal ranking. The European Commission has denied that any correlation will be made between the rankings and resource allocation. However, there is already evidence that the EU, as well as other funding agencies, does take rankings into account in the assessment of the “quality” of the research team. Likewise, there is strong evidence that business and employer groups, philanthropic and investment organizations, and other countries—particularly when national scholarships or partnerships are being considered—do factor in rankings.

Ellen Hazelkorn is the vice president for research and enterprise and the dean of the Graduate Research School at the Dublin Institute of Technology. She is also the head of the Higher Education Policy Research Unit and the author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (2011).
