Critical Treatise on University Ranking Systems

Abstract

Ranking systems use different methodologies, indicators, and data sources, focused primarily on research quality, and their results allow stakeholders to compare universities on a global level. Ranking also improves a university's global image, recruitment, and funding. Despite their importance, rankings have flaws and pitfalls that cause controversy and concern. A detailed study of the ranking literature indicated that research data shared by universities raise concerns about credibility and authenticity. It also indicated that subjective ranking tools can produce erroneous results, making them less useful for stakeholders. Some methodologies also have inherent flaws that produce different ranks for the same institution in different rankings. Because ranking lists often do not reveal clear differences between universities, stakeholders are advised to consult detailed score tables rather than general ranking lists. Stakeholders should also take into account a university's disciplinary focus and the aspects of education that rankings do not cover. This treatise is designed to give academics and stakeholders insight into rankings, with the aim of elucidating ranking importance, methodology, and indicators. It also gives a comprehensive perspective on influential rankings and a general analysis of ranking controversy, concerns, flaws, and pitfalls.


1. Introduction

World rankings are conducted by various organizations, media, and academic bodies. They rank Higher Education Institutions (HEIs) by assessing faculty, research, graduates, income, and reputation using different methodologies and indicators. The importance of rankings is revealed by surveys indicating that ranking lists influence students' choice of HEIs [1]. Several studies indicated that rankings are important for the global comparison of HEIs [2] [3]. Studies also indicated that rankings influence the decision making of stakeholders and funding agencies [4]. However, concern about ranking accuracy pertains to the use of easily quantifiable rather than important indicators [5] [6]. Some reservations also relate to securing ranking positions through dubious practices and data manipulation [6] [7] [8]. Such doubts reflect a lack of consensus about rankings among academics. A comprehensive review of the ranking literature published during the past fifteen years has been conducted. Published data were studied in order to present a detailed description and comparative view of ranking systems' methodology, indicators, and data sources. A thorough analysis was carried out to establish sources of controversy, inherent flaws, and apparent pitfalls. This treatise offers insight into rankings for academics and stakeholders, aiming to discuss ranking importance, provide a comparative view of methodology, indicators, and data sources, and analyse the controversy by depicting ranking flaws and pitfalls.

2. Ranking Importance

Publicity has empowered rankings to dominate the Higher Education (HE) arena and shape opinion on HEIs' reputation. Rankings are important for HEIs and stakeholders in several different ways. They relate to HEIs' planning, and their indicators are used to gauge institutional success [9]. A ranking position above competitors helps HEIs build a robust global image that improves recruitment and funding [10]. Policy makers within HEIs consider ranking indicators a driving force of institutional progress and use them to create and pursue target benchmarks and enforce academic change [11] [12] [13]. Governments also consider ranking results to compare national HE to international benchmarks [14]. Ranking research results are used by HEIs as evidence of quality and cost-effectiveness in pursuit of funding [15] [16] [17]. In addition, HEIs consider ranks prior to establishing academic cooperation, and the positive images created by rankings help enhance partnerships and collaborations [18] [19] [20]. These factors collectively created a demand for high-quality research data, which often lack credibility and validation [21] [22] [23] [24]. Problems of research quality assessment pertain to transparency, and studies indicated that accuracy in reporting research output is essential for meaningful assessment [24] [25].

3. Ranking Systems

World rankings include, among others, the Leiden Ranking, the Nature Index, and the Reuters Ranking, which publish annual lists using methodologies and indicators that assess research quality. The Leiden Ranking (Leiden University, Netherlands) ranks 1000 HEIs on annual science papers indexed in Web of Science [26] [27]. The Nature Index (Springer Nature Publishers) ranks 100 HEIs on annual science papers indexed in the Science Citation Index [28]. The Reuters Ranking (Thomson Reuters, USA) ranks 100 HEIs on science papers and patents to reflect the commercialization of innovation [29].

In a different approach, Webometrics (National Research Council, Spain) compiles data on internet structure, number of hyperlinks, web usage, and web-page versatility for 22,000 HEIs. Unlike other rankings, it assesses performance based on the application of information technology. Half of the total score is assigned to the number of hyperlinks, number of users, documents located by search engines on HEIs' websites, and publications [30].

In the Arab World, HE faces major challenges due to socioeconomic and political issues. Nevertheless, HE is on an upward path, with many HEIs appearing in world rankings [31] [32], although many other HEIs still require substantial improvement in quality and relevance. The Center for World University Rankings (CWUR) in the United Arab Emirates is the Arab ranking body, with an annual list of 2000 HEIs assessed on education quality, student training, and the number of science articles verified by Clarivate Analytics [33]. It is worth underlining CWUR's attention to education, compared to other rankings that focus mainly on research and assess education based on teaching commitment [33] [34]. However, despite the popularity of these rankings, HEIs focus on the influential Academic Ranking of World Universities (ARWU), Quacquarelli Symonds (QS), and Times Higher Education (THE) rankings.

The ARWU Ranking (Jiao Tong University, China) ranks 1000 HEIs in general and subject-specific lists [35]. It assesses research quality by Nobel Laureates, Fields Medalists, research citations, and publications in Science and Nature, with data from Thomson Reuters (Table 1). It considers publications indexed in the Science Citation Index (SCI) and Social Science Citation Index (SSCI). The top institution is given a score of 100 and other HEIs are scored as a percentage of that top score [36]. It is criticized for assigning 60% of the score to research quality and only 10% to education quality, and for biased indicators such as Nobel Laureates and Fields Medalists [37].
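As a worked illustration of this scaling rule, the short sketch below uses invented publication counts (not actual ARWU data) to show how the top scorer on an indicator is set to 100 and the other institutions are expressed as percentages of it.

```python
# Illustrative sketch of ARWU-style score scaling: the top scorer on an
# indicator receives 100 and every other institution is expressed as a
# percentage of that top raw value. All values below are invented examples.

raw_publications = {
    "University A": 5200,   # hypothetical annual publication counts
    "University B": 4100,
    "University C": 2600,
}

top = max(raw_publications.values())
scaled = {name: round(100 * value / top, 1) for name, value in raw_publications.items()}

print(scaled)  # {'University A': 100.0, 'University B': 78.8, 'University C': 50.0}
```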

The QS Ranking (Quacquarelli Symonds, UK) has general and subject-specific lists approved by the IREG Ranking Observatory [38]. It ranks HEIs on mission, research, teaching, and graduate employability using peer review, Faculty:Student Ratio, citations, employer reputation, and globalization (Table 2). Criticism of QS pertains to reputation assessment by a subjective survey methodology [39]. Teaching commitment assessed by Faculty:Student Ratio is also inadequate for assessing education quality, as it does not reflect facilities, resources, and student support [39]. The citations-per-faculty indicator, obtained from the Thomson Reuters and Scopus databases, is also a matter of concern. Scopus includes more non-English-language journals than Thomson Reuters, and mixing data from these two sources can yield different citation values [15]. QS assigns 10% of the score to international student and international faculty ratios to reflect institutional globalization. This indicator is thought to be inadequate because these ratios are liable to temporal variation [40] [41].
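To make the concern about mixed citation sources concrete, the following sketch uses invented counts (not real QS data) to show how the same institution's citations-per-faculty figure shifts with the underlying database; a ranking that mixes sources across institutions therefore compares values that are not on the same footing.

```python
# Toy illustration (invented numbers) of why mixing citation sources matters:
# the same institution shows different citations-per-faculty values depending
# on whether counts come from a Scopus-like or a Web of Science-like database.

faculty_count = 1500

citations_scopus = 96_000   # hypothetical: broader, more multilingual coverage
citations_wos = 81_000      # hypothetical: narrower, English-heavy coverage

per_faculty_scopus = citations_scopus / faculty_count   # 64.0
per_faculty_wos = citations_wos / faculty_count         # 54.0

print(per_faculty_scopus, per_faculty_wos)
```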

The THE ranking (Times Higher Education magazine, UK) publishes a list of 200 HEIs using data shared by HEIs and Thomson Reuters, excluding HEIs with no

Table 1. ARWU ranking indicators.

Table 2. QS ranking indicators.

undergraduate programs and those with an annual research output of fewer than 1000 articles [42] [43] [44]. Indicators cover teaching and research quality, citations, reputation, and income. The reputation indicator is assessed by a Thomson Reuters survey, and the citations indicator is calculated as a five-year mean per paper in Web of Science indexed journals (Table 3). Criticism of THE pertains to reputation assessment by a subjective survey methodology [43]. The highly weighted citations indicator is a disadvantage for HEIs using languages other than English, since papers in other languages are difficult to trace by search engines, and its bias towards natural science is a disadvantage for HEIs focusing on social science [45] [46] [47].
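The five-year citation indicator can be read as a simple ratio of citations received to papers published over the window. The sketch below uses hypothetical counts and omits the field normalization that real rankings apply; it only illustrates the basic calculation.

```python
# Minimal sketch (invented counts) of a "five-year mean citations per paper"
# style indicator: total citations attributed to a five-year window divided
# by the papers published in that window.

papers_per_year = [1800, 1900, 2100, 2000, 2200]                 # hypothetical output
citations_per_year = [30_000, 34_000, 41_000, 38_000, 27_000]    # hypothetical citations

mean_citations_per_paper = sum(citations_per_year) / sum(papers_per_year)
print(round(mean_citations_per_paper, 2))  # 17.0
```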

Although ARWU, QS, and THE concur in some features, they differ in several aspects. They agree on Thomson Reuters, SCI, and SSCI as data sources, but they differ in sponsors, partners, and methodology: ARWU has an academic sponsor and partner, while QS and THE have non-academic ones (Table 4). The numbers of HEIs initially assessed each year are 1200, 3400, and 2600 for ARWU, QS, and THE, respectively. Final lists include 1000 HEIs for ARWU and QS, and only 100 for THE (Table 5). They also define and value indicators differently. While graduate quality is assessed by Nobel Laureates and Fields Medalists in ARWU, it is assessed by graduate employability in QS and THE (Table 5). Faculty quality is based on awards and publications in ARWU, and on publications only in QS and THE. Education quality is assessed as teaching commitment by faculty awards, per capita performance, and Faculty:Student Ratio in ARWU, QS, and THE, respectively. Globalization is embedded in the ARWU peer review, while it is assessed by international student and faculty ratios in QS and THE (Table 5).

Table 3. Times higher education ranking indicators.

Table 4. Comparison of Shanghai, QS, and THE rankings features.

Table 5. Comparison of ARWU, QS, and THE indicators.

However, it is important to reiterate that these ratios are liable to temporal variation, which makes them inadequate indicators [40] [41]. Finally, while graduate quality, faculty, education, and research are assigned different weights, the three rankings allocate 40% - 60% of the total score to research quality.
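Because the three systems weight broadly similar indicators differently, identical underlying scores can produce different orderings. The sketch below uses invented indicator scores and simplified weighting schemes (not the actual ARWU, QS, or THE weights) purely to illustrate this sensitivity.

```python
# Toy demonstration of how the same indicator scores can yield different
# ranks under different weighting schemes. Scores and weights are invented
# and simplified; they are not the real ARWU, QS, or THE weights.

scores = {
    "University A": {"research": 92, "teaching": 64, "outlook": 85},
    "University B": {"research": 80, "teaching": 90, "outlook": 70},
}

research_heavy = {"research": 0.6, "teaching": 0.3, "outlook": 0.1}
balanced = {"research": 0.4, "teaching": 0.4, "outlook": 0.2}

def composite(indicators, weights):
    """Weighted sum of indicator scores."""
    return sum(indicators[name] * w for name, w in weights.items())

for hei, s in scores.items():
    print(hei, round(composite(s, research_heavy), 1), round(composite(s, balanced), 1))

# University A scores 82.9 (research-heavy) and 79.4 (balanced);
# University B scores 82.0 under both, so the ordering flips between schemes.
```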

4. Ranking Controversy

Ranking dominates the HE arena and has been commercialized through the entanglement of ranking organizations with world media [6] [48] [49]. The ranking debate also occupies a large body of HE literature [6] [37] [50] [51] [52]. This has led many HEIs to assign large numbers of staff to ranking-related activities and to sign consultancy contracts with specialized firms in pursuit of ranking [48] [53]. The argument in favour of ranking asserts that, in the absence of more appropriate tools, rankings are useful for comparing HEIs. Rankings have also led HEIs to improve management, recruitment, partnerships, and funding [12] [14]. Although acceptable, this argument ignores that rankings are based on indicators that do not cover all performance aspects [5] [36]. Controversy pertains to cases where ranking becomes the forefront of planning and policies, which turns it into a threat, with HEIs becoming more interested in achieving ranking than in improving quality [49] [54] [55] [56] [57] [58]. Controversy also emanates from transparency issues, possible data manipulation, and misuse of ranking within HEIs for matters such as faculty promotion [16] [59]. Further, as HE authorities become more interested in ranking, significant resources are directed to certain HEIs while limited support is given to others. Rankings' basic focus on research quality also diminishes the role of HEIs in community service and downgrades those with an emphasis on this field [15] [16]. Some rankings do not correct for institutional size, leading large HEIs to rank higher than small ones with similar research quality [60]. Rankings also assess HEIs' reputation by subjective surveys in which respondents generally tend to favour certain HEIs due to personal experience or acclaim [37]. This reflects negatively on HEIs with less recognition but meaningful contributions to stakeholders and society. Similarly, assessing HEIs by alumni stature is inappropriate since it does not reflect job satisfaction, academic freedom, equal opportunity, and governance [37]. Despite this controversy, it would be unwise to assume that rankings will lose their importance in the foreseeable future. Rankings are here to stay, and HEIs and stakeholders should be aware of their limitations.

5. Ranking Flaws

The publication of ranking lists is met with anticipation by HEIs and stakeholders alike. This is true for students comparing HEIs, for HEIs seeking to enhance recruitment and funding, and for funding agencies trying to direct funds properly. Despite this anticipation, rankings have their flaws. Annual ranking lists are unsatisfying for HEIs unable to promote programs due to limited resources, and for students who may find satisfaction in HEIs that excel in aspects of their interest but have a smaller international outlook. In addition, different indicators and methodologies bring about differences in ranking position for the same institution on different rankings. Such differences in ranks for the same HEIs within the same country in the same year make it difficult for stakeholders to determine the true ranking of a particular institution (Table 6).

Moreover, ranking lists do not always reveal true differences between HEIs. This is illustrated by comparing ranking positions and indicator scores for the top ten HEIs on the 2019 THE list (Table 7). First, statistically significant differences are not apparent in the overall score, teaching and research quality scores, and citations for the top five HEIs (Table 7). Second, the teaching quality score of the institution ranked fifth on the list well exceeds that of the preceding four HEIs (Table 7). Third, the income-from-industry score does not conform well to ranking: two HEIs with high income from industry come in fourth and fifth positions, while a low-income institution occupies second position (Table 7). Finally, the institution in ninth position has the highest international outlook score (Table 7). Therefore, differences between HEIs are not clear, and it is up to stakeholders to decide which HEIs suit their needs by considering scores for aspects of interest rather than just the general ranking. If looked upon without consideration of underlying

Table 6. Ranking positions of five British HEIs in 2019.

Table 7. Ranking positions and indicator scores of top ten HEIs on THE 2019 list.

scores, stakeholders may base opinions on impression rather than perception. Further, since no ranking considers all HE aspects, some rankings may be more appropriate for certain stakeholders than others. Based on the aspects included, stakeholders should consider the rankings that best represent their needs. Finally, subtle HE aspects are difficult for rankings to assess. Education is not only about reputation for students or facilities for researchers. An important element is selecting excellent HEIs with costs that students can afford and facilities that researchers can use. Education is also about an amicable environment that encourages students towards lifelong learning and researchers towards exploration and discovery.

6. Ranking Pitfalls

Many HEIs have developed a sense of urgency to prove excellence by allocating resources to planning in pursuit of ranking. However, despite their importance, rankings have inherent pitfalls that should be acknowledged by ranking organizations, HEIs, and stakeholders [37] [61].

Adopting a generic approach to assessment, in which one indicator is used to assess a group of aspects, is one of the rankings' pitfalls. Examples include assessing teaching quality by Faculty:Student Ratio, which does not reflect education facilities and student support [37]. Mixing citation data from different sources can also produce inconsistent citation values [15]. This generic approach should be avoided by diversifying indicators and unifying data sources. Rankings should also correct for institutional size, since size-dependent indicators are useful for assessing large HEIs with ample resources, while size-independent indicators suit those achieving success with limited resources [61] [62] [63]. In addition, databases and surveys are two methodologies that produce different assessment results.

For example, databases clearly define university hospitals and medical schools, while survey participants find it difficult to differentiate them due to diverse public perceptions of such medical facilities. Accuracy in dealing with assessment results produced by different methodologies is essential for useful comparisons.
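Returning to the size-correction point above, the sketch below contrasts a size-dependent indicator (total publications) with a size-independent one (publications per faculty member), using invented figures: the larger institution leads on totals while the smaller one leads once output is normalized.

```python
# Invented figures illustrating size-dependent vs size-independent indicators:
# a large HEI leads on total output, but a smaller HEI leads once output is
# normalized per faculty member.

institutions = {
    "Large University": {"publications": 9000, "faculty": 6000},
    "Small Institute": {"publications": 2400, "faculty": 1200},
}

for name, data in institutions.items():
    per_faculty = data["publications"] / data["faculty"]
    print(f"{name}: total={data['publications']}, per faculty={per_faculty:.1f}")

# Large University: total=9000, per faculty=1.5
# Small Institute:  total=2400, per faculty=2.0
```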

Stakeholders should also acknowledge that an institution's rank can differ on different lists due to different indicators and data sources, and such differences should not be confused with a decline in performance [51] [57] [64]. Additionally, aspects not covered by indicators should not be overlooked by stakeholders, since rankings generally focus on aspects that are relatively easy to quantify [36]. Some rankings have a relatively narrow focus on specific aspects while others have a broader scope, and stakeholders should be aware that no ranking covers all performance aspects. On mutual terms, the relationship between rankings and HEIs should be based on transparency and an understanding of ranking aims and purpose. Rankings should clarify methodology and data sources, and HEIs should make authentic data accessible. The more transparent the relationship, the more useful the results are for stakeholders. Similarly, rankings and stakeholders should be aware that HEIs are unique entities with missions and strategies drafted to match their focus and context. Considering HEIs' focus and context should be addressed by rankings through subject-specific indicators and by stakeholders using rankings that apply such indicators [2] [3].

7. Conclusions

A review of the published literature indicated that ARWU, QS, and THE are the most prominent among global rankings. Rankings assess HEIs using different performance aspects, methodologies, indicators, and data sources, with a general focus on research output and quality. The importance of ranking relates to improved planning and quality within HEIs, which can positively reflect on recruitment and funding. It can be emphasized that ranking research results are used by HEIs for building positive images, as evidence of research quality, and for establishing academic partnerships. HE authorities also use rankings to align national education with international benchmarks.

Despite their importance, rankings have proved to have inherent flaws and pitfalls that cause controversy and concern. Concern pertains to ranking becoming the sole driving force for HEIs, which makes them interested in achieving ranking rather than improving quality. Concern also emanates from transparency issues, data manipulation, and misuse of ranking within HEIs. Results in this study also indicated that rankings use disputable subjective methodologies and mix data from different sources, causing discrepancies and making ranking results less useful for stakeholders. This study also revealed that ranking flaws pertain to differences in methodology and indicators that result in different rankings for the same HEIs on different lists, which makes it difficult for stakeholders to determine the true ranking of a particular institution. Flaws also pertain to the lack of clear differences between HEIs in ranking lists, which requires stakeholders to consult score tables for appropriate ranking interpretation. Stakeholders should also acknowledge that subtle HE aspects, such as education environment and inspiration, are difficult for rankings to assess. Finally, despite their importance, rankings' inherent pitfalls should be acknowledged by ranking systems, HEIs, and stakeholders.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] QS (2019) Enrollment Solutions International Student Survey.
https://www.internationalstudentsurvey.com/
[2] Sá, F. (2014) The Effect of Tuition Fees on University Applications and Attendance: Evidence from the UK. IZA Institute of Labor Economics, Discussion Paper 8364.
[3] Madden, A.D., Yat-Sen, S., Webber, S., Ford, N. and Crowder, M. (2018) The Relationship between Students’ Subject Preferences and Their Information Behavior. Journal of Documentation, 74, 692-721.
https://doi.org/10.1108/JD-07-2017-0097
[4] Maringe, F. (2006) University and Course Choice. International Journal of Educational Management, 20, 466-479.
https://doi.org/10.1108/09513540610683711
[5] Marklein, M.B. (2015) Rankings Create Perverse Incentives. University World News.
https://www.universityworldnews.com/post.php?story2015041014225416
[6] Vernon, M.M., Balas, E.A. and Momani, S. (2018) Are University Rankings Useful to Improve Research? A Systematic Review. PLoS ONE, 13, e0193762.
https://doi.org/10.1371/journal.pone.0193762
[7] Fong, E.A. and Wilhite, A.W. (2017) Authorship and Citation Manipulation in Academic Research. PLoS ONE, 12, e0187394.
https://doi.org/10.1371/journal.pone.0187394
[8] Mussard, M. and Pappachen, A.J. (2017) Boosting the Ranking of a University Using Self-Citations. Current Science, 113, 1827.
[9] QS (2017) How Important Are University Subject Rankings?
https://www.qs.com/how-important-are-university-subject-rankings/
[10] QS (2019) About. https://www.qs.com/about-us/
[11] Lucas, L. (2014) Academic Resistance to Quality Assurance Processes in Higher Education in the UK. Policy and Society, 33, 215-224.
https://doi.org/10.1016/j.polsoc.2014.09.006
[12] Jarocka, M. (2015) Transparency of University Rankings in the Effective Management of Universities. Business, Management and Education, 13, 64-75.
https://doi.org/10.3846/bme.2015.260
[13] Alma, B., Coşkun, E. and Övendireli, E. (2016) University Ranking Systems and Proposal of a Theoretical Framework for Ranking of Turkish Universities: A Case of Management Departments. Procedia - Social and Behavioral Sciences, 235, 128-138.
https://doi.org/10.1016/j.sbspro.2016.11.008
[14] Vogel, R., Hattke, F. and Petersen, J. (2017) Journal Rankings in Management and Business Studies: What Rules Do We Play by? Research Policy, 46, 1707-1722.
https://doi.org/10.1016/j.respol.2017.07.001
[15] Chadegani, A.A., Salehi, H., Yunus, M., Farhadi, H., Fooladi, M., Farhadi, M. and Ebrahim, N.A. (2013) A Comparison between Two Main Academic Literature Collections: Web of Science and Scopus Databases. Asian Social Science, 9, 18-26.
https://doi.org/10.5539/ass.v9n5p18
[16] Mussard, M. and Pappachen, A.J. (2018) Engineering the Global University Rankings: Gold Standards, Limitations and Implications. IEEE Access, 6, 6765-6776.
https://doi.org/10.1109/ACCESS.2017.2789326
[17] Perkmann, M., Tartari, V., McKelvey, M., Autio, E., Broström, A., D’Este, P., Fini, R., Geuna, A., Grimaldi, R., Hughes, A., Krabel, S., Kitson, M., Llerena, P., Lissoni, F., Salter, A. and Sobrero, M. (2013) Academic Engagement and Commercialization: A Review of the Literature on University-Industry Relations. Research Policy, 42, 423-442.
https://doi.org/10.1016/j.respol.2012.09.007
[18] Ahn, J., Oh, D.-H. and Lee, J.-D. (2014) The Scientific Impact and Partner Selection in Collaborative Research at Korean Universities. Scientometrics, 100, 173-188.
https://doi.org/10.1007/s11192-013-1201-7
[19] Hazelkorn, E. and Gibson, A. (2017) Global Science, National Research, and the Question of University Rankings. Palgrave Communications, 13, 1-11.
https://doi.org/10.1057/s41599-017-0011-6
[20] Dill, D.D. (2018) Enhancing Academic Quality and Collegial Control: Insights from US Policy on the Ethical Conduct of Human Subjects’ Research. Higher Education Policy, 1-20.
https://doi.org/10.1057/s41307-018-0093-9
[21] Ioannidis, J.P. (2005) Why Most Published Research Findings Are False. PLoS Medicine, 2, e124.
https://doi.org/10.1371/journal.pmed.0020124
[22] Keupp, M.M., Palmie, A.M. and Gassmann, O. (2012) The Strategic Management of Innovation: A Systematic Review and Paths for Future Research. International Journal of Management Reviews, 14, 367-390.
https://doi.org/10.1111/j.1468-2370.2011.00321.x
[23] Mertens, D.M. and Hesse-Biber, S. (2013) Mixed Methods and Credibility of Evidence in Evaluation. In: Mertens, D.M. and Hesse-Biber, S., Eds., New Directions for Evaluation, Wiley, New York, 5-13.
https://doi.org/10.1002/ev.20053
[24] Seyfried, M. and Pohlenz, P. (2018) Assessing Quality Assurance in Higher Education: Managers’ Perceptions of Effectiveness. European Journal of Higher Education, 8, 258-271.
https://doi.org/10.1080/21568235.2018.1474777
[25] Chalmers, I., Bracken, M.B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A.M., Howells, D.W., Ioannidis, J.P. and Oliver, S. (2014) How to Increase Value and Reduce Waste When Research Priorities Are Set. The Lancet, 383, 156-165.
https://doi.org/10.1016/S0140-6736(13)62229-1
[26] Frenken, K., Heimeriks, G.J. and Hoekman, J. (2017) What Drives University Research Performance? An Analysis Using the CWTS Leiden Ranking Data. Journal of Informetrics, 11, 859-872.
https://doi.org/10.1016/j.joi.2017.06.006
[27] Leiden University Centre for Science and Technology (2019).
https://www.leidenranking.com/
[28] Nature Index 2017 Innovation. Nature Index.
https://www.natureindex.com
[29] Thomson Reuters (2019) Reuters Top 100: Europe’s Most Innovative Universities 2019 Announced.
https://www.reuters.com/article/rpbtop1002019/reuters-top-100-europes-most-innovative-universities-2019-announced-idUSKCN1S60PA
[30] Björneborn, L. and Ingwersen, P. (2004) Toward a Basic Framework for Webometrics. Journal of the American Society for Information Science and Technology, 55, 1216-1227.
https://doi.org/10.1002/asi.20077
[31] Badran, A., Baydoun, E. and Hillman, J.R. (2019) Major Challenges Facing Higher Education in the Arab World: Quality Assurance and Relevance. Springer Nature, Amsterdam.
https://doi.org/10.1007/978-3-030-03774-1
[32] Times Higher Education (2019) Best Universities in the Arab World.
https://www.timeshighereducation.com/student/best-universities/best-universities-arab-world
[33] Mahassen, N. (2019) A Quantitative Approach to World University Rankings. Center for World University Rankings.
https://cwur.org/
[34] Zha, Q. (2016) University Rankings in Perspective. Inside Higher Education.
https://www.insidehighered.com/blogs/world-view/university-rankings-perspective
[35] ARWU (2019) http://www.shanghairanking.com/
[36] Hou, Y-W. and Jacob, W.J. (2017) What Contributes More to the Ranking of Higher Education Institutions? A Comparison of Three World University Rankings. International Education Journal, 16, 29-46.
[37] Davis, M. (2016) Can College Rankings Be Believed? She Ji: The Journal of Design, Economics, and Innovation, 2, 215-230.
https://doi.org/10.1016/j.sheji.2016.11.002
[38] Ranking Expert Group (2019) http://ireg-observatory.org/en/
[39] Huang, M.-H. (2012) Opening the Black Box of QS World University Rankings. Research Evaluation, 21, 71-78.
https://doi.org/10.1093/reseval/rvr003
[40] Priem, J. and Hemminger, B.M. (2010) Scientometrics 2.0: New Metrics of Scholarly Impact on the Social Web. First Monday, 15.
https://doi.org/10.5210/fm.v15i7.2874
[41] Serenko, A., Bontis, N., Booker, L., Sadeddin, K. and Hardie, T. (2010) A Scientometric Analysis of Knowledge Management and Intellectual Capital Academic Literature (1994-2008). Journal of Knowledge Management, 14, 3-23.
https://doi.org/10.1108/13673271011015534
[42] Baty, P. (2009) New Data Partner for World University Rankings: Times Higher Education Signs Deal with Thomson Reuters.
https://www.timeshighereducation.com/news/new-data-partner-for-world-university-rankings/408881.article
[43] Holmes, R. (2015) Searching for the Gold Standard: The Times Higher Education World University Rankings: 2010-2014. Asian Journal of University Education, 11, 1-30.
[44] Times Higher Education (2019) Times World University Ranking 2019.
https://www.timeshighereducation.com/world-university-rankings/2019/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats
[45] Bookstein, F.L., Seidler, H., Fieder, M. and Winckler, G. (2010) Too Much Noise in the Times Higher Education Rankings. Scientometrics, 85, 295-299.
https://doi.org/10.1007/s11192-010-0189-5
[46] Pusser, B. and Marginson, S. (2013) University Rankings in Critical Perspective. The Journal of Higher Education, 84, 544-568.
https://doi.org/10.1353/jhe.2013.0022
[47] Lim, M.A. (2018) The Building of Weak Expertise: The Work of Global University Rankers. Higher Education, 75, 415-430.
https://doi.org/10.1007/s10734-017-0147-8
[48] Hazelkorn, E. (2015) Rankings and the Reshaping of Higher Education. Palgrave Macmillan, London.
https://doi.org/10.1057/9781137446671
[49] Goglio, V. (2016) One Size Fits All? A Different Perspective on University Rankings. Journal of Higher Education Policy and Management, 38, 212-226.
https://doi.org/10.1080/1360080X.2016.1150553
[50] Taylor, P. and Braddock, R. (2007) International University Ranking Systems and the Idea of University Excellence. Journal of Higher Education Policy and Management, 29, 245-260.
https://doi.org/10.1080/13600800701457855
[51] Rauhvargers, A. (2011) Global University Rankings and Their Impact. European University Association, Brussels.
[52] Reddy, K.S., Xie, E. and Tang, Q. (2016) Higher Education, High-Impact Research, and World University Rankings: A Case of India and Comparison with China. Pacific Science Review B: Humanities and Social Sciences, 2, 1-21.
https://doi.org/10.1016/j.psrb.2016.09.004
[53] Jabjaimoh, P., Samart, K., Jansakul, N. and Jibenja, N. (2019) Optimization for Better World University Rank. Journal of Scientometric Research, 8, 18-20.
https://doi.org/10.5530/jscires.8.1.3
[54] Bowden, R. (2000) Fantasy Higher Education: University and College League Tables. Quality in Higher Education, 6, 41-60.
https://doi.org/10.1080/13538320050001063
[55] Dill, D.D. and Soo, M. (2005) Academic Quality, League Tables, and Public Policy: A Cross-National Analysis of University Ranking Systems. Higher Education, 49, 495-533.
https://doi.org/10.1007/s10734-004-1746-8
[56] Hazelkorn, E. (2007) The Impact of League Tables and Ranking Systems on HE Decision Making. Higher Education Management and Policy, 19, 1-24.
https://doi.org/10.1787/hemp-v19-art12-en
[57] Marginson, S. and van der Wende, M.C. (2007) To Rank or to Be Ranked: The Impact of Global Rankings in HE. Journal of Studies in International Education, 11, 306-329.
https://doi.org/10.1177/1028315307303544
[58] Kehm, B.M. (2014) Global University Rankings. Impacts and Unintended Side Effects. European Journal of Education, 49, 102-112.
https://doi.org/10.1111/ejed.12064
[59] Tilak, J.B. (2016) Global Rankings, World-Class Universities and Dilemma in Higher Education Policy in India. Higher Education for the Future, 3, 123-146.
https://doi.org/10.1177/2347631116648515
[60] Zirulnick, A. (2010) New World University Ranking Puts Harvard Back on Top. Christian Science Monitor.
https://www.csmonitor.com/World/2010/0916/New-world-university-ranking-puts-Harvard-back-on-top
[61] Yudkevich, M., Altbach, P.G. and Rumbley, L.E. (2013) The Global Academic Rankings Game: Changing Institutional Policy, Practice, and Academic Life. Palgrave Macmillan, London.
[62] Bougnol, M.L. and Dulá, J.H. (2014) Technical Pitfalls in University Rankings. Higher Education, 69, 859-866.
https://doi.org/10.1007/s10734-014-9809-y
[63] Vernon, M.M., Balas, E.A. and Momani, S. (2018) Are University Rankings Useful to Improve Research? A Systematic Review. PLoS ONE, 13, e0193762.
https://doi.org/10.1371/journal.pone.0193762
[64] Buela-Casal, G., Gutiérrez-Martínez, O., Bermúdez-Sánchez, M. and Vadillo-Muñoz, O. (2007) Comparative Study of International Academic Rankings of Universities. Scientometrics, 71, 349-365.
https://doi.org/10.1007/s11192-007-1653-8
