Features

September/October 2013 Europe Enters the College Rankings Game

Could the EU’s new U-Multirank someday challenge U.S. News?

By Ellen Hazelkorn

Another criticism of rankings is their propensity to focus disproportionately on research-intensive universities. This has had the effect of driving up the status and reputation of “world-class” universities that serve privileged students while simultaneously undermining institutional diversity. U-Multirank has embraced diversity as a core principle, but it will now include a series of predefined rankings by institutional type, among them a ranking of research-intensive universities based on about ten research-related, mainly bibliometric, indicators. While this ranking will be made on a multidimensional and more differentiated basis than existing global rankings, public and policy attention may well gravitate toward this particular ranking, thereby undermining the whole purpose of the exercise.

The CHE ranking is widely used across Germany, as well as in neighboring Austria, the Netherlands, and the German-speaking cantons of Switzerland. A French version now in development will probably use nine indicators, some of which will take the peculiarities of French education into account. Nonetheless, it remains to be seen whether the sunburst diagram, as developed by U-Map and U-Multirank, can provide a meaningful comparator framework for students and other stakeholders, especially given concerns about some of the indicators. Despite claims by U-Multirank that it will enhance consumer choice, global rankings today are much more about global and institutional positioning. After all, this was the reason U-Multirank was created.

Data Sources

Critics of rankings often point to an overreliance on institutional self-reporting in the absence of meaningful cross-national comparative data and data definitions. The same problem affects Europe; the definition of who is a student or a faculty member differs from one member state to the next. There has been some discussion about developing a global common data set, but cross-jurisdictional comparisons of educational quality defy simple methods.

Dependence on institutional data has raised questions about the accuracy of reporting and allegations of “gaming,” which have plagued U.S. News and World Report. It has also led to various efforts to boycott rankings in the hope of undermining them—most notably in the U.S. by the Education Conservancy in 2007 and in Canada around the same time. Most of these campaigns have fizzled out, as the boycotts had little effect except to isolate the participating universities. The main lesson is that being ranked brings visibility, which is vital oxygen almost regardless of the position a university actually holds in the ranking.

Nevertheless, data accuracy and accessibility remain a potential land mine. Similarly, the administrative time and money that data collection demands have come under scrutiny. To get around this, and to ensure that U-Multirank has consistent, accurate, and independent access to data, the EU cleverly commissioned a sister project called EUMIDA, which lays the foundation for regular data collection by national statistical institutes on individual HEIs in EU member states plus Norway and Switzerland, likely to be coordinated through EUROSTAT. The feasibility study was completed in 2010, and the implementation stage is due to start shortly. EUMIDA is a good example of how policy makers can craft solutions that deftly circumvent roadblocks. It effectively nullifies the decision of LERU, and others, to boycott participation.

It also signals an important opportunity. If U-Multirank can pull in data from other national and supranational sources, then it could provide the basis for a worldwide database on higher education. The implications of that would be very significant indeed.

In a globalized world, cross-jurisdictional comparisons are inevitable and only likely to intensify in the future. In addition, demands for greater accountability about higher education performance can no longer be ignored. We have a right to know whether our students’ qualifications are of high quality, are internationally comparable and transferable, and are being achieved at a reasonable and efficient cost. Rankings have arisen because of this information deficit.

Ultimately, new media technologies and formats, such as social media, consumer Web sites, search engines, and open-source tools, will dramatically transform the debate over the coming years by putting more control directly into the hands of users. It is easy to imagine a higher education TripAdvisor, but crowdsourcing carries its own concerns.

In this environment, U-Multirank is a significant improvement on other global rankings. The difficulties encountered by U-Multirank highlight the complexities associated with assessing and comparing quality. Context remains fundamentally important. National or global, public or private, student cohort and learning environment—these dimensions can radically affect the performance of institutions and render simple comparisons meaningless.

However, as an indicator-based system, U-Multirank can only achieve limited transparency, and cannot provide more than a quantitative picture. It cannot pretend to say anything about quality. And if it remains true to its original mission—to be genuinely “multi-rank”—will it struggle to displace other global rankings? Or will decision makers continue to look for simple answers to complex problems?

Ellen Hazelkorn is the vice president for research and enterprise and the dean of the Graduate Research School at the Dublin Institute of Technology. She is also the head of the Higher Education Policy Research Unit, and author of Rankings and the Reshaping of Higher Education: The Battle for World-class Excellence (2011).
