Rankings and league tables of higher education institutions (HEIs) and programs are a global phenomenon. They serve many purposes: they respond to demands from consumers for easily interpretable information on the standing of higher education institutions; they stimulate competition among them; they provide some of the rationale for allocation of funds; and they help differentiate among different types of institutions and different programs and disciplines. In addition, when correctly understood and interpreted, they contribute to the definition of “quality” of higher education institutions within a particular country, complementing the rigorous work conducted in the context of quality assessment and review performed by public and independent accrediting agencies. This is why rankings of HEIs have become part of the framework of national accountability and quality assurance processes, and why more nations are likely to see the development of rankings in the future. Given this trend, it is important that those producing rankings and league tables hold themselves accountable for quality in their own data collection, methodology, and dissemination.

In view of the above, the International Ranking Expert Group (IREG) was founded in 2004 by the UNESCO European Centre for Higher Education (UNESCO-CEPES) in Bucharest and the Institute for Higher Education Policy in Washington, DC. It is upon this initiative that IREG's second meeting (Berlin, 18 to 20 May 2006) was convened to consider a set of principles of quality and good practice in HEI rankings - the Berlin Principles on Ranking of Higher Education Institutions.

It is expected that this initiative will set a framework for the elaboration and dissemination of rankings - whether they are national, regional, or global in scope - that ultimately will lead to a system of continuous improvement and refinement of the methodologies used to conduct these rankings. Given the heterogeneity of ranking methodologies, these principles for good ranking practice will be useful for the improvement and evaluation of rankings.

Rankings and league tables should:

A) Purposes and Goals of Rankings

1. Be one of a number of diverse approaches to the assessment of higher education inputs, processes, and outputs. Rankings can provide comparative information and improved understanding of higher education, but should not be the main method for assessing what higher education is and does. Rankings provide a market-based perspective that can complement the work of government, accrediting authorities, and independent review agencies.

2. Be clear about their purpose and their target groups. Rankings have to be designed with due regard to their purpose. Indicators designed to meet a particular objective or to inform one target group may not be adequate for different purposes or target groups.

3. Recognize the diversity of institutions and take the different missions and goals of institutions into account. Quality measures for research-oriented institutions, for example, are quite different from those that are appropriate for institutions that provide broad access to underserved communities. Institutions that are being ranked and the experts that inform the ranking process should be consulted often.

4. Provide clarity about the range of information sources for rankings and the messages each source generates. The relevance of ranking results depends on the audiences receiving the information and the sources of that information (such as databases, students, professors, employers). Good practice would be to combine the different perspectives provided by those sources in order to get a more complete view of each higher education institution included in the ranking.

5. Specify the linguistic, cultural, economic, and historical contexts of the educational systems being ranked. International rankings in particular should be aware of possible biases and be precise about their objective. Not all nations or systems share the same values and beliefs about what constitutes “quality” in tertiary institutions, and ranking systems should not be devised to force such comparisons.

B) Design and Weighting of Indicators

6. Be transparent regarding the methodology used for creating the rankings. The choice of methods used to prepare rankings should be clear and unambiguous. This transparency should include the calculation of indicators as well as the origin of data.

7. Choose indicators according to their relevance and validity. The choice of data should be grounded in recognition of the ability of each measure to represent quality and academic and institutional strengths, and not availability of data. Be clear about why measures were included and what they are meant to represent.

8. Measure outcomes in preference to inputs whenever possible. Data on inputs are relevant as they reflect the general condition of a given establishment and are more frequently available. Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or program, and compilers of rankings should ensure that an appropriate balance is achieved.

9. Make the weights assigned to different indicators (if used) prominent and limit changes to them. Changes in weights make it difficult for consumers to discern whether an institution’s or program’s status changed in the rankings due to an inherent difference or due to a methodological change.

C) Collection and Processing of Data

10. Pay due attention to ethical standards and the good practice recommendations articulated in these Principles. In order to assure the credibility of each ranking, those responsible for collecting and using data and undertaking on-site visits should be as objective and impartial as possible.

11. Use audited and verifiable data whenever possible. Such data have several advantages, including the fact that they have been accepted by institutions and that they are comparable and compatible across institutions.

12. Include data that are collected with proper procedures for scientific data collection. Data collected from an unrepresentative or skewed subset of students, faculty, or other parties may not accurately represent an institution or program and should be excluded.

13. Apply measures of quality assurance to ranking processes themselves. These processes should take note of the expertise that is being applied to evaluate institutions and use this knowledge to evaluate the ranking itself. Rankings should be learning systems continuously utilizing this expertise to develop methodology.

14. Apply organizational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably with some international participation.

D) Presentation of Ranking Results

15. Provide consumers with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed. This way, the users of rankings would have a better understanding of the indicators that are used to rank institutions or programs. In addition, they should have some opportunity to make their own decisions about how these indicators should be weighted.

16. Be compiled in a way that eliminates or reduces errors in original data, and be organized and published in a way that errors and faults can be corrected. Institutions and the public should be informed about errors that have occurred.

Berlin, 20 May 2006

News from IREG Members

UI GreenMetric worldwide ranking workshop in Bologna

The Bologna workshop aims to highlight the commitment of Italian universities to a sustainable, 360° approach. UI GreenMetric Chairperson Riri Fitri Sari will present the ranking's structure and indicators in detail. Italian participants are warmly invited to ask questions and raise issues concerning how the specificities of Italian universities fit into the UI GreenMetric ranking. In the framework of this event, the Alma Mater Studiorum assumes the role of National Network Hub for UI GreenMetric; the event will also host experts from RUS (Rete delle Università per lo Sviluppo Sostenibile).

Ranking conference in Warsaw

Polish universities feel they deserve a better place in international rankings. To discuss what can and should be done to improve the quality of research, the rectors of the country's best universities gathered on 1-2 December in Warsaw at the conference "Polish Universities in International Perspective – Rankings and Strategic Management of University", organized by the Perspektywy Education Foundation jointly with the Conference of Rectors (CRASP) and the Polish Accreditation Commission (PAC).

Conference on Universities’ Reputation

The Spanish Universidad de Navarra is organizing the international conference "Building Universities' Reputation: Understanding the Student Perspective – Key to a Reputation Strategy", Pamplona, Spain, 30-31 March 2017. www.unav.edu/bur

Report on Activities of IREG Observatory 2015-2016

At the General Assembly held in Lisbon on 6 May 2016, Dr. Jan Sadlak, President of IREG Observatory on Academic Ranking and Excellence, presented a detailed report on the activities of the organization, the President, and the members of the Executive Committee in the period May 2015 – May 2016.

You may read the report HERE.


© 2017 IREG Observatory on Academic Ranking and Excellence