Could the EU’s new U-Multirank someday challenge U.S. News?
In the early years of this century, Shanghai Jiao Tong University was a relatively unknown institution outside of China. In 1998, it had been selected by the Chinese government for inclusion in the 985 Project to build world-class universities. Beginning in 1999, a team at the university, led by Nian Cai Liu, developed the Academic Ranking of World Universities in order to benchmark the position of Chinese universities vis-à-vis competitor universities. The ARWU was first released in 2003 and was followed in the same year by Webometrics, developed by the Spanish National Research Council, and in 2004 by the Times Higher Education/QS World Universities Ranking. Today, there are about ten global rankings, and more than sixty countries have national rankings, many of which are sponsored by governments or government agencies.
The release of the ARWU marked the emergence of the global phase of rankings. Prior to that, rankings—such as U.S. News and World Report’s college rankings, which were first published in 1983—were primarily about measuring national reputation and performance and providing information to students and parents. By placing higher education quality within a wider comparative and international framework, the ARWU was different. It immediately illustrated that national preeminence was no longer sufficient and that the higher education world was multipolar. In doing so, it set the cat among the pigeons.
Within months of the ARWU’s publication, the rankings were being called “a major wake-up call” for European higher education. To the consternation of European policy makers and academics, their universities made up only about one-third of the world’s top 100 institutions. Over the years, the precise number has ebbed and flowed depending on the ranking, but the proportion has remained relatively static. European concern was set against the backdrop of the European Union’s (EU) ambitious Lisbon Agenda; launched in 2000, it sought to make Europe “the most dynamic and competitive knowledge-based economy in the world.” Strong, competitive, and modern universities lay at the heart of that goal.
In 2005, the German government launched the Exzellenzinitiative (Initiative for Excellence), replacing its long-standing policy of supporting equity across all universities with a plan to promote excellence among only a few. (Versions of this have since been introduced in many other European countries, including France, Spain, and Russia, and also around the world.) The French were equally troubled: in 2008, the French Sénat released a report arguing that the country’s researchers were being disadvantaged by rankings that favored English-speaking institutions. A 2008 conference organized under the auspices of the French Presidency of the European Council championed the idea of a new EU ranking.
In contrast to the Shanghai ranking, which concentrates only on research, it was argued that the EU ranking should give due regard to the diversity of institutional missions and the breadth of higher education’s activity across teaching, research, and engagement. (It should be noted that the term “diversity” is used differently here than in the U.S.: there it typically refers to ethnicity, race, and gender, while elsewhere it describes the range of college and university missions or purposes.) In 2005, the EU sponsored the first phase of a European classification system, launched as U-Map in 2009. In the same year, a consortium was established to test the feasibility of a multidimensional ranking, released as U-Multirank in 2011, with an operational version due in 2014.
U-Map is Europe’s version of the U.S. Carnegie Classification system. Born of the desire to highlight the diversity of European higher education institutions (HEIs), U-Map was designed as an interactive online tool that allows various stakeholders to choose the classification attributes that are most important to them. There are twenty-nine indicators, drawing on official and institutional data, across six dimensions: teaching and learning, students, research, knowledge exchange, international orientation, and regional engagement. Results are produced as a radar or sunburst diagram, with each dimension represented by a different color, in order to provide a visual representation of an institution’s characteristics.
Source: Van Vught et al., U-Map: European Classification of Higher Education (Enschede, Netherlands: University of Twente Centre for Higher Education Policy Studies, 2010), http://www.u-map.org/U-MAP_report.pdf, p. 37.
U-Map is promoted as a profiling tool, meant to facilitate comparison between different institutions and to inform student choice or strategic decision making by institutions or governments. So far more than 230 HEIs, primarily in Europe, have signed up and added their profiles to U-Map; the aim is to have 1,000 European HEIs involved by the end of 2013. The U-Map concept has been taken up and developed in different jurisdictions to showcase institutional diversity.
The Nordic countries launched their own U-Map, albeit with lower-than-expected participation, especially from Sweden and Denmark. In the meantime, Norway and the Republic of Ireland have developed their own versions. These are more complex, benefiting from access to a wider range of national and institutional data, but they share some similarities, including the visualization of the results. In these instances, the maps are used to display institutional differences and to inform strategic dialogues between the government and/or ministry and HEIs about institutional targets and resourcing. An Australian model has recently been developed by the Martin Institute at the University of Melbourne and the Australian Council for Educational Research (ACER).
Building on the experience of U-Map, U-Multirank was conceived to directly challenge the dominance of global rankings at both the conceptual and functional levels. Whereas U-Map profiles what an institution does, U-Multirank aims to assess how well it does these activities. The objective is to overcome complaints that traditional rankings compare apples with oranges rather than apples with apples.
It is being developed by members of the original CHERPA Consortium, the group that created U-Map. The consortium is led by the Centre for Higher Education (CHE) in Germany and the Centre for Higher Education Policy Studies at the University of Twente, and includes the Centre for Science and Technology Studies at Leiden University, the academic publisher Elsevier, the Bertelsmann Foundation, and the software firm Folge 3.
U-Multirank is based on four design principles: it is user driven, whereby each individual or stakeholder group can rank the data according to their own preferences; it is multidimensional, with information collected across five different dimensions; it offers peer-group comparability, through which HEIs with similar missions can be compared; and it permits multilevel analysis, in which HEIs can be examined at the institutional level but also at the disciplinary or field-based level and at the department level.
U-Multirank also uses interactive online technology to facilitate multi-functionality. The system does not pre-assign a weighting to each indicator, and there are no composite indicators. This, the promoters say, will preclude the results from being aggregated into a single ordinal ranking. At the institutional level, the results will be shown in the sunburst format, while the field-based rankings will draw on the experience of the CHE ranking, which bands universities into three groups (top, middle, and bottom) using traffic-light colors (green, yellow, and red), as illustrated in Figure 2. The intention is to avoid simplistic league tables.
Source: F. A. Van Vught and F. Ziegele, eds., U-Multirank: Design and Testing the Feasibility of a Multidimensional Global University Ranking (Brussels: European Commission Directorate of Education and Culture, 2010), http://ec.europa.eu/education/higher-education/doc/multirank_en.pdf.