1. The Executive Committee of the IREG Observatory on Academic Ranking and Excellence, at its meeting in Warsaw on 15 May 2013, decided to grant QS Intelligence Unit the right to use the “IREG Approved” label in relation to the following three rankings: QS World University Rankings, QS University Rankings Asia, and QS University Rankings Latin America, for the period ending 31 December 2016. The Executive Committee of IREG Observatory subsequently extended this period to 31 December 2017.
2. The Executive Committee of IREG Observatory at its meeting in Paris on 22 March 2013 discussed the outcomes of the audit of the three rankings mentioned above. After analysing the documents, which reflect and attest to careful attention to due process as well as to the work done by all involved in the audit, the Executive Committee requested additional clarifications regarding the current presentation of the three rankings alongside the “QS Stars” rating. The Executive Committee noted that “QS Stars” was not part of the self-report under review and hence was not subject to the audit procedure. Accordingly, the IREG label is used [in printed and Internet presentations] strictly in connection with the rankings that were evaluated in the course of the audit and according to the procedures for the IREG Ranking Audit presented in the IREG Ranking Audit Manual.
3. The audit was performed by an Audit Team composed of:
- Tom Parker (Chair), Senior Associate, Institute for Higher Education Policy, Washington, DC, USA;
- Akiyoshi Yonezawa, Associate Professor, University of Nagoya, Japan;
- Stanislaw Chwirot, Professor, Nicolaus Copernicus University in Torun, Poland; and
- Saddiq Sait Mohammed, Professor, Director for Information Technology, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia.
4. The Audit Team reviewed the Self-Report on the basis of twenty criteria set out in the IREG Ranking Audit Manual [developed with reference to the Berlin Principles on Ranking of Higher Education Institutions]. The criteria, with assigned numerical scores and weights, cover the following aspects of ranking:
- Criteria 1-3 pertain to purpose, target groups and basic approach, as well as to institutional diversity and linguistic, cultural, economic and historical contexts;
- Criteria 4-10 pertain to the methodology and to the importance of ensuring that rankings choose indicators according to their relevance and validity, measure outcomes, and are transparent;
- Criteria 11-14 pertain to the publication and presentation of ranking results;
- Criteria 15-17 relate to transparency and responsiveness; and
- Criteria 18-20 relate to quality assurance of the ranking.
5. In general, the Audit Team found that:
- QS Intelligence Unit is striving to meet the complex challenges of designing its rankings on a multinational basis, thereby balancing the need to form judgments on highly complex and often subjective issues with the need for objective and easily obtainable data;
- QS Intelligence Unit is working to make its system as transparent and understandable as possible by establishing a separate website to explain the adopted methodologies including the weighting of individual indicators.
6. The Audit Team identified several issues where improvement was suggested:
- the indicators chosen appear imbalanced, with a bias towards research, as evidenced by the weights assigned to the two indicators related to academic reputation (40 per cent) and citations per faculty (20 per cent), as compared with the two indicators related to quality of education (staff-student ratio, weighted at 20 per cent, and employer reputation, weighted at 10 per cent);
- in terms of academic quality, the indicator derived from ‘citations per faculty’ needs improvement, because it is not entirely clear how citations per faculty are computed;
- QS Intelligence Unit should be more explicit about the ranking indicators used, which may favour universities with a majority of staff in faculties producing higher numbers of citations;
- current measures concerning the areas of teaching and employability are imperfect;
- only three of the six indicators used are in some way related to outputs (academic reputation, employer reputation and citations per faculty), and it is difficult to assess to what extent employer reputation measures learning outcomes;
- QS Intelligence Unit should provide more information on its population sampling methodology and its assessment of possible selection bias; e.g., information is missing about breakdowns of the people and institutions asked to contribute to the employer surveys; and
- a formal procedure for correcting errors after the ranking is published is missing.
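The indicator weights cited in these findings imply a composite score computed as a weighted sum of normalised indicator scores. The following sketch is purely illustrative and is not QS's actual computation: only the four weights named above come from this document; the remaining 10 per cent covers indicators not itemised here, and the example scores are invented.

```python
# Illustrative sketch (not QS's published methodology): a composite ranking
# score as a weighted sum of indicator scores normalised to 0-100.
# The four weights below are those cited in the audit findings; the
# remaining 10 per cent of the total weight is not itemised in this document.
WEIGHTS = {
    "academic_reputation": 0.40,
    "citations_per_faculty": 0.20,
    "staff_student_ratio": 0.20,
    "employer_reputation": 0.10,
}

def composite_score(scores: dict) -> float:
    """Weighted sum over the four listed indicators only."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical institution with invented indicator scores.
example = {
    "academic_reputation": 90.0,
    "citations_per_faculty": 70.0,
    "staff_student_ratio": 80.0,
    "employer_reputation": 60.0,
}
print(round(composite_score(example), 1))  # → 72.0
```

As the sketch makes visible, academic reputation and citations per faculty together account for 60 of the 90 itemised percentage points, which is the research bias the Audit Team describes.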
7. In its feedback on the issues raised by the Audit Team, QS Intelligence Unit made the following observations:
- the lack of bibliometric data in the arts and humanities is indeed a subject of concern, and QS Intelligence Unit is committed to improving this indicator as suggested;
- all institutions that the authors of a publication represent receive full credit for its citations; this decision was made consciously, based on expert guidance. However, there is an intention to filter out publications whose authors represent more than a given number of institutions, such as literature reviews, which receive high levels of citations;
- QS Intelligence Unit does not share the opinion that more measures are better, because an increased number of indicators would increase the margin for error and diminish transparency;
- the three indicators related to outputs carry a combined weight of 70 per cent, which clearly indicates a preference for outcomes over inputs; moreover, since outcomes must be seen as a function of inputs plus added value, outputs should not be considered in isolation;
- QS Intelligence Unit agreed with the recommendation to publish details about the diversity, in terms of size and scale, of the organisations represented in the employer surveys, and intends to do so in the next edition of the QS survey analysis; and
- QS Intelligence Unit answered that, for the time being, errors are corrected through case-by-case engagement with institutions; however, the recommendation put forward by the Audit Team is a good one and will be implemented.
8. Because this audit was the first conducted by the IREG Observatory, both sides, the Audit Team and the ranking institution, faced certain uncertainties. The Audit Team expected more concrete guidance for its internal consensus process, whereas the ranking institution expected a learning process that would continue after its response to the IREG Audit Report. The IREG Audit Coordinator remained neutral and did not enter into the substance of the audit at any point. However, he felt it necessary to report these uncertainties to the Executive Committee, which will address the establishment of more specific guidelines for the Audit Team as well as the question of feedback on the response of the ranking institution.