IREG SEAL OF APPROVAL
Process
The IREG Executive Board has appointed a Seal of Approval Committee made up of leading ranking experts from around the world. Applicants for the Seal of Approval submit a detailed self-report to the committee describing how they comply with IREG Observatory standards and how they measure up against twenty detailed criteria on issues such as methodology and transparency.
After careful review, the Seal of Approval Committee decides whether the applicant is qualified to use the “IREG Approved” seal. It may grant use of the Seal of Approval after its initial review, ask for further dialogue with the applicant, or deny Seal of Approval status pending changes in practice.
After receiving the completed application, the Committee will make every effort to inform the applicant of questions or issues as quickly as possible so that a final decision can be reached in a timely manner.
The Seal of Approval has been conceived as a public responsibility initiative. Therefore, the fee for applicants is set periodically by the Executive Committee based on the cost of the service to IREG Observatory. Fees for organizations that are not IREG Observatory members are set 50% higher than for member organizations. For information about current fees, contact the IREG Observatory Secretariat at the address below.
Ranking organizations interested in applying for the Seal of Approval should send a letter of interest to:
and a hard copy to: IREG Observatory, rue Washington 40, 1050 Brussels, Belgium
Guidelines for Writing the Self-Report
The self-report is designed not only to enable the Seal of Approval Committee to make a judgment about the applicant, but also to help the applicant go through a process of self-assessment. There is no required length. The basic requirement of the report is that it explain how the applying ranking organization addresses each of the criteria necessary for approval. What follows is a suggested structure for the report and a list of the required criteria. The Berlin Principles, which form the basis for the criteria, are also appended. Applicants requiring assistance in preparing the self-report are invited to contact the IREG Observatory Secretariat.
Structure of the self-report
1. Information on the ranking organization
1.1. Name and address
1.2. Type (academic – non-academic, public – private, for-profit – non-profit)
1.3. Financing model/sources of the ranking
1.4. Contact person
2. Information on previous record of the ranking activity
2.1. Date of first publication and brief history
2.2. Publication frequency
2.3. Dates of the two most recent publications
3. Purpose and main target groups of the ranking
3.1. Purpose
3.2. Main target groups /users
4. Scope of the ranking
4.1. Geographical scope (global, international (e.g. European), national, etc.)
4.2. Types of institutions, number of institutions ranked, number of institutions published
4.3. Level of comparison (e.g. institutional, field based, etc.)
5. Methodology of the ranking
5.1. General approach; options include an overall composite score based on fixed weights assigned to individual indicators, separate indicators that allow customized rankings, or other aggregation methods
5.2. Measured dimensions (research, teaching & learning, third mission etc.)
5.3. Indicators (relevance, definitions, weights etc.)
5.4. Data sources; options include a third-party database (data not provided by universities), data collected from universities by third-party agencies, data collected from universities by the ranking organisation (or its representative), a survey of university staff or students conducted by the ranking organisation in collaboration with the universities, a survey conducted by the ranking organisation exclusively, etc.
5.5. Transparency of methodology
5.6. Display of ranking results (league tables, groups or clusters, or a mixed approach)
6. Quality assurance of the ranking
6.1. Quality assurance on data collection and process
6.2. Organizational measures for quality assurance (consultants, boards etc.)
7. Publication and use of the ranking
7.1. Type of publication (print, online or both)
7.2. Language of publication (primary language, other available languages)
7.3. Access for users of the rankings (registration, fees)
8. Impact of the ranking
8.1. Impact on personal level (students, parents, researchers etc.)
8.2. Impact on institutional level (higher education institution as a whole, rectors and presidents, deans, students, administration, etc.)
8.3. Impact on the higher education system
The IREG Ranking Seal of Approval Criteria
The criteria refer to five dimensions of rankings: first, the definition of their purpose, target groups and basic approach; second, their methodology, including the selection of indicators, methods of data collection and the calculation of indicators; third, the publication and presentation of their results; fourth, the transparency and responsiveness of the ranking and the ranking organization; and, last, internal quality assurance processes and instruments within the ranking.
2.1. Criteria on Purpose, Target Groups, Basic Approach
The method of evaluation called “ranking” allows a comparison and ordering of higher education institutions and their activities. Within this general framework, rankings can differ in their purpose and aims, their main target audiences and their basic approach.
Criterion 1:
The purpose of the ranking and the (main) target groups should be made explicit.
Criterion 2:
Rankings should recognize the diversity of institutions and take the different missions and goals of institutions into account. The ranking has to be explicit about the type/profile of institutions which are included and those which are not.
Criterion 3:
Rankings should specify the linguistic, cultural, economic, and historical contexts of the educational systems being ranked. International rankings should adopt indicators with sufficient comparability across various national systems of higher education.
2.2. Criteria on Methodology
The methodology has to correspond to the purpose and basic approach of the ranking. Rankings have to meet standards of collecting and processing statistical data.
Criterion 4:
Rankings should choose indicators according to their relevance and validity. Rankings should be clear about why measures were included and what they are meant to represent.
Criterion 5:
Good ranking practice would be to combine the different perspectives provided by multiple data sources in order to get a more complete view of each higher education institution.
Criterion 6:
Rankings should measure outcomes in preference to inputs whenever possible. Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or program, and compilers of rankings should ensure that an appropriate balance is achieved.
Criterion 7:
Rankings have to be transparent regarding the methodology used. The choice of methods used to prepare rankings should be clear and unambiguous.
Rankings must provide clear definitions for each indicator, as well as for the underlying data sources and the calculation of indicators from raw data. The methodology has to be publicly available to all users of the ranking as long as the ranking results are open to the public.
Criterion 8:
If rankings use composite indicators, the weights of the individual indicators have to be published. Changes in weights over time should be limited and have to be justified on methodological or conceptual grounds.
Institutional rankings have to make clear the methods of aggregating results for a whole institution, and should try to control for the effects of different field structures (e.g. specialized vs. comprehensive universities) in their aggregate results.
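For illustration only, the short Python sketch below shows the kind of composite-score calculation whose weights this criterion requires to be published; the indicator names, weights and scores are hypothetical and do not represent any actual ranking's methodology.

# Illustrative sketch only: hypothetical indicator names, weights and scores,
# not the methodology of any actual ranking.
INDICATOR_WEIGHTS = {            # published, fixed weights; must sum to 1.0
    "research_output": 0.40,
    "teaching_quality": 0.35,
    "internationalisation": 0.25,
}

def composite_score(indicator_scores):
    """Weighted sum of normalised (0-100) indicator scores."""
    assert abs(sum(INDICATOR_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(INDICATOR_WEIGHTS[name] * indicator_scores[name]
               for name in INDICATOR_WEIGHTS)

# A hypothetical institution's per-indicator scores
scores = {"research_output": 82.0,
          "teaching_quality": 74.5,
          "internationalisation": 66.0}
print(f"{composite_score(scores):.1f}")   # prints 75.4

If weights change between editions (say, from 0.40 to 0.45 for one indicator), the published methodology would need to state the new weights and the methodological or conceptual reason for the change.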
Criterion 9:
Data used in the ranking must be obtained from authorized, audited and verifiable data sources and/or collected with proper procedures for professional data collection. Information on survey data has to include: source of data, method of data collection, response rates, and structure of the samples (such as geographical and/or occupational structure).
Criterion 10:
Although rankings have to adapt to changes in higher education and should try to enhance their methods, the basic methodology should be kept as stable as possible. Changes in methodology should be made transparent.
2.3. Criteria on Publication and Presentation of Results
Rankings should provide users with a clear understanding of all of the factors used to develop a ranking.
Criterion 11:
The publication of a ranking has to be made available to users throughout the year, through print publications and/or an online version of the ranking.
Criterion 12:
The publication has to deliver a description of the methods and indicators used in the ranking. That information should be tailored to the knowledge of the ranking's main target groups.
Criterion 13:
The publication of the ranking must provide scores of each individual indicator used to calculate a composite indicator in order to allow users to verify the calculation of ranking results. Composite indicators may not refer to indicators that are not published.
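As a hedged illustration of the verification this criterion is intended to make possible, continuing the hypothetical figures used after Criterion 8, a user could recompute a published composite from the published indicator scores and weights and compare it with the published value:

# Hypothetical verification from the user's side: recompute a published
# composite from the published per-indicator scores and weights, and flag
# any discrepancy larger than rounding to one decimal place.
published_weights = {"research_output": 0.40,
                     "teaching_quality": 0.35,
                     "internationalisation": 0.25}
published_scores = {"research_output": 82.0,
                    "teaching_quality": 74.5,
                    "internationalisation": 66.0}
published_composite = 75.4

recomputed = sum(published_weights[k] * published_scores[k] for k in published_weights)
if abs(recomputed - published_composite) <= 0.05:   # tolerance for one-decimal rounding
    print("Published composite is consistent with the published indicator scores.")
else:
    print(f"Discrepancy: recomputed {recomputed:.3f} vs published {published_composite}")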
2.4. Criteria on Transparency and Responsiveness
Greater transparency means higher credibility of a given ranking.
Criterion 15:
Rankings should be compiled in a way that eliminates or reduces errors, and published in a way that allows errors and faults caused by the ranking to be corrected. Such errors should be corrected within a ranking period, at least in an online publication of the ranking.
Criterion 16:
Rankings have to be responsive to the higher education institutions included in or participating in the ranking. This involves giving explanations of methods and indicators as well as explanations of the results of individual institutions.
Criterion 17:
Rankings have to provide a contact address in their publication (print and/or online version) to which users and ranked institutions can direct questions about the methodology, feedback on errors and general comments. They have to demonstrate that they respond to questions from users.
2.5. Criteria on Quality Assurance
Rankings assess the quality of higher education institutions. This puts a great responsibility on rankings concerning their own quality and accuracy. They have to develop their own internal instruments of quality assurance.
Criterion 18:
Rankings have to apply measures of quality assurance to ranking processes themselves.
Criterion 19:
Rankings have to document the internal processes of quality assurance. This documentation has to refer to processes of organizing the ranking and data collection as well as to the quality of data and indicators.
Criterion 20:
Rankings should apply organisational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably (in particular for international rankings) with some international participation.