Article

Correlation among top 100 universities in the major six global rankings: policy implications

Authors:
  • Shehatta and Mahmood (Imam Abdulrahman Bin Faisal University)

Abstract

The discrepancies among various global university rankings drive us to compare and correlate their results. Thus, the 2015 results of six major global rankings are collected, compared and analyzed qualitatively and quantitatively using both the ranking orders and the scores of the top 100 universities. The selected six global rankings are: Academic Ranking of World Universities (ARWU), Quacquarelli Symonds World University Ranking (QS), Times Higher Education World University Ranking (THE), US News & World Report Best Global University Rankings (USNWR), National Taiwan University Ranking (NTU), and University Ranking by Academic Performance (URAP). Two indices are used for comparison, namely the number of overlapping universities and Pearson's/Spearman's correlation coefficients between each pair of the studied six global rankings. The study is extended to investigate the intra-correlation of the ARWU results of the top 100 universities over a 5-year period (2011–2015), as well as the correlation of the ARWU overall score with its individual indicators. The ranking results of the 49 universities that appeared in the top 100 of all six rankings are compared and discussed. With a careful analysis of the key performance indicators of these 49 universities, one can readily identify the common features of a world-class university. The findings indicate that, although each ranking system applies a different methodology, there are moderate to high correlations among the studied six rankings. To see how the correlation behaves at different levels, the correlations are also computed for the top 50 and the top 200 universities. The comparison indicates that both the degree of correlation and the number of overlapping universities increase with the length of the list. The results of URAP and NTU show the strongest correlation among the studied rankings. In short, a careful understanding of the various ranking methodologies is of utmost importance before any analysis, interpretation and use of ranking results. The findings of the present study could inform policy makers at various levels in developing policies aimed at improving performance and thereby enhancing ranking positions.
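As a rough illustration of the two comparison indices described in the abstract (overlap counts and Pearson's/Spearman's correlations between ranking pairs), the sketch below computes both for a pair of top-N lists; the university names, ranks and list length are invented for illustration and are not the study's 2015 data.

```python
# Sketch: overlap and rank correlation between two top-N university rankings.
# Toy data only; real use would load the 2015 top-100 lists of ARWU, QS, THE, etc.
from scipy.stats import pearsonr, spearmanr

ranking_a = {"Univ A": 1, "Univ B": 2, "Univ C": 3, "Univ D": 4, "Univ E": 5}
ranking_b = {"Univ B": 1, "Univ A": 2, "Univ E": 3, "Univ F": 4, "Univ C": 5}

# Overlap index: number of universities appearing in both top-N lists.
common = sorted(set(ranking_a) & set(ranking_b))
print(f"overlapping universities: {len(common)}")

# Correlations computed on the common universities only, using their rank orders.
ranks_a = [ranking_a[u] for u in common]
ranks_b = [ranking_b[u] for u in common]
print("Pearson r:", pearsonr(ranks_a, ranks_b)[0])
print("Spearman rho:", spearmanr(ranks_a, ranks_b)[0])
```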


... Naturally, the large number of different world university rankings has led to their comparative analysis. At first, as noted in [6], a large cluster of publications appeared that was devoted to the qualitative comparative analysis of the methodologies of these rankings, and later a cluster of publications on their quantitative comparative analysis emerged, focusing on Pearson's/Spearman's correlations and on the overlap between rankings. ...
... In all the calculations of F and M, their values turned out to be quite close [7]. Evidently, in all calculations the values of M are slightly smaller than the values of F. The main idea of [7] was used in [6], where the M-measure was replaced by Pearson's correlation coefficient as the measure of similarity between the Overall Scores of two rankings. Note that the ranking of universities in all ranking systems is preceded by the calculation of their composite indicators, which, depending on the ranking, are called the Overall Score or the Total Score. ...
... In addition to [6], we found [15], in which pairwise Pearson correlations were computed for the TOP-10, TOP-50 and TOP-100 of the THE, QS, ARWU, Leiden and URAP rankings for 2015. ...
Article
Full-text available
A literature review was carried out on the multivariate statistical analysis of global university rankings in the context of their similarity and sensitivity to changes in the indicator weights, and of layer-wise correlations between indicators and the rankings' composite scores. The review showed the need to perform layer-wise correlations on a systematic basis, which was done for the TOP-100 of ARWU with three layers, the TOP-200 of THE with seven layers, and the TOP-210 of QS with seven layers over three years (2018–2020). A strong layer-wise heterogeneity of the rankings was noted, in which the correlation between the indicator values and the composite scores decreases from the upper layers to the lower ones. It is shown that the layer-wise selection of the most strongly correlated pairs of indicators and composite scores makes it possible to solve management tasks of more purposeful advancement of universities in the global university rankings.
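A minimal sketch of the layer-wise correlation idea summarized above, assuming a table with one indicator column and an overall-score column; the column names, the layer size of 50, and the synthetic data are illustrative assumptions, not the cited study's setup.

```python
# Sketch: Spearman correlation between an indicator and the overall score,
# computed separately within consecutive rank layers (1-50, 51-100, ...).
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def layered_correlations(df, indicator, score="overall_score", layer_size=50):
    """Layer-wise Spearman rho between one indicator and the overall score."""
    df = df.sort_values(score, ascending=False).reset_index(drop=True)
    results = {}
    for start in range(0, len(df), layer_size):
        layer = df.iloc[start:start + layer_size]
        if len(layer) < 3:                      # too few universities to correlate
            continue
        rho, _ = spearmanr(layer[indicator], layer[score])
        results[f"ranks {start + 1}-{start + len(layer)}"] = round(rho, 2)
    return results

# Synthetic demo: 200 universities, one indicator loosely driving the score.
rng = np.random.default_rng(0)
demo = pd.DataFrame({"citations": rng.normal(50, 10, 200)})
demo["overall_score"] = demo["citations"] * 0.8 + rng.normal(0, 5, 200)
print(layered_correlations(demo, "citations"))
```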
... One mechanism that makes it possible to assess knowledge management (GC), measure intellectual capital (CI) and visualize the performance, productivity and quality of a higher education institution (IES) is its position in the World University Rankings (RMU), such as QS Top Universities, the Academic Ranking of World Universities (ARWU), Web of Metrics (WofM) and Times Higher Education (THE), which are four of the most representative RMU at the global level and assess the management of universities through the use of indicators, applying different methodologies (Shehatta and Mahmood, 2016; Moed, 2017; Kivinen et al., 2017; Buela-Casal et al., 2017). And in studies such as that of Mussard and James (2018), an assessment is made of the indicators used by three RMU (ARWU, QS and THE), which make it possible to weight CI. ...
... These four RMU were selected for the study because they have global coverage, each classifies more than 1,000 IES, the overlap of universities among them is considerable, there is a moderate correlation among the parameters each one uses, and there is evidence of consistency in the geographical distribution of the institutions classified in each RMU at the level of countries and regions (Dobrota and Dobrota, 2016; Moed, 2017; Ya-Wen and Jacob, 2017; Kivinen et al., 2017; Shehatta and Mahmood, 2016; Mussard and Pappachen, 2018; Olcay and Bulu, 2016). ...
... The findings indicate that, although each ranking system applies a different methodology, there are moderate to high correlations among the six systems studied; the results of URAP and NTU show the strongest correlation among the rankings. With a careful analysis of the key performance indicators of these 49 IES, the common features of a world-class university can easily be defined (Shehatta and Mahmood, 2016). ...
Chapter
This work, part of the "Colección Monográficos 2020" of Editorial UTMACH, contributes to the scientific community the research results of faculty members from different universities, with the aim of promoting a culture of knowledge dissemination and of network building among higher education institutions, in order to achieve greater internationalization with an approach that generates impact and global visibility.
... University rankings are a global phenomenon that has gained interest from all stakeholders, such as students, parents, academics, political leaders, funding bodies, governments, employers, and universities around the world (Marginson & van der Wende 2007; Marginson 2007; Rauhvargers 2011 & 2013; Hazelkorn 2014 & 2015; Shehatta & Mahmood 2016a). The efficiency of higher education institutions (HEIs), in terms of their contribution to the world scientific and educational space, is evaluated and rated by various ranking systems. ...
... These comparative ranking studies fall into two main categories, i.e., qualitative and quantitative analyses. The qualitative studies focused on the classification of ranking indicators into the various HE missions (teaching, research, and community services) and dimensions covering all inputs, processes, and outputs, such as beginning characteristics, infrastructure, resources (staff, finance, materials), quality and reputation, etc. (Bowden 2000; Dill and Soo 2005; Usher and Savino 2007; Buela-Casal et al. 2007), whereas the quantitative studies are mainly based on overlapping universities and ranking correlation coefficients (Aguillo et al. 2010a; Hou et al. 2011; Huang 2011; Cheng 2011; Thamm & Mayr 2011; Chen & Liao 2012; Lee & Park 2012; Khosrowjerdi & SeifKashani 2013; Pandey 2014; Liu & Liu 2016; Shehatta and Mahmood 2016a). Thamm & Mayr (2011) examined the possibility of using hyperlink-based indicators to rank academic websites for German universities. ...
... These lessons are: the university should strive to conduct innovative research, deliver excellent internationally oriented teaching, secure continuing government support, and enhance its reputation. Shehatta and Mahmood (2016a) studied the correlation among the 2015 results of six well-known university ranking systems: ARWU, QS, THE, USNWR, NTU, and URAP. They found a moderate to high correlation among these ranking systems, although each ranking applied different indicators and weights. ...
Article
Full-text available
Global university rankings continue to attract growing interest and have high visibility among all stakeholders. Of these, the Webometrics Ranking (WR) faces much criticism regarding its function. Some believe that WR evaluates only the websites of universities rather than their global performance and impact, as claimed by the WR authors. This stimulated us to examine the idea of using WR as a reliable academic ranking of the world's universities. To test this hypothesis, we compare the WR results against two widely accepted benchmarks, i.e., the global university rankings and bibliometrics. Accordingly, the WR ranks of the Top 100 institutions are correlated with the corresponding values of the 2015 editions of six world ranking systems (ARWU, USNWR, QS, THE, NTU and URAP), which are commonly accepted for evaluating the academic performance of universities, as well as with objective bibliometric indicators gathered from the Web of Science (WOS) InCites (Thomson Reuters). The findings reveal that the WR results correlate well both with the ranking systems' results and with 12 bibliometric variables, namely: WOS Documents, Times Cited, Citation Impact (CI), Citation Impact: Category Normalized (CNCI), Citation Impact: Journal Normalized (JNCI), Impact Relative to World, % of Top 1% Documents, % of Top 10% Documents, Highly Cited Papers, h-index, International Collaborations, and % Industry Collaborations. The consistency between WR and the studied six rankings increases as the weight of the research or bibliometric indicators in these six global rankings increases. Moreover, the consistency between WR and the survey-based rankings (USNWR, THE and QS) increases as the weight of the subjective reputation survey indicators decreases. North American, especially US, universities are characterized by extremely high visibility in WR as well as in the studied seven global rankings. Thus, the web-based indicator ranking (WR) offers results of comparable and similar quality to those of the six major global university rankings. Accordingly, it has the capability to rank institutional academic performance. Moreover, its reliability could be enhanced if each university had only one web domain that accurately reflects its actual performance and activity. We recommend that all institutions apply all ranking systems together, since their criteria and indicators complement each other and can form a comprehensive index covering the various activities and functions of HEIs worldwide.
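The correlation of WR ranks with bibliometric indicators described above can be sketched as follows; the column names and synthetic values merely stand in for the WOS/InCites variables actually used, and because lower rank numbers mean better positions, agreement with count-like indicators shows up as a negative Spearman coefficient.

```python
# Sketch: correlating Webometrics (WR) ranks of a Top 100 with bibliometric
# indicators. Illustrative column names and random data only.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
top100 = pd.DataFrame({
    "wr_rank": np.arange(1, 101),
    "times_cited": rng.integers(50_000, 2_000_000, 100),
    "cnci": rng.normal(1.5, 0.4, 100),
    "h_index": rng.integers(100, 900, 100),
})

# Ranks improve as the numbers shrink, so strong agreement with count-like
# indicators appears as a large negative Spearman rho.
for col in ["times_cited", "cnci", "h_index"]:
    rho, p = spearmanr(top100["wr_rank"], top100[col])
    print(f"WR rank vs {col}: rho = {rho:.2f} (p = {p:.3f})")
```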
... GURs emerged after 2003 to measure university performance. They evaluate universities' management by using indicators associated with intangible assets, such as organizational reputation or institutional prestige, and they employ different methodologies to assess their Intellectual Capital (IC) [2]. The most widely used schemes currently available for the performance-based ranking of universities are the Academic Ranking of World Universities (ARWU), best known as the Shanghai Ranking (since 2003), the Quacquarelli Symonds (QS) World University Rankings (since 2004), and the Times Higher Education (THE) World University Rankings (since 2010). ...
... This might be because science is abstract and much more difficult to measure than any physical product, which has led to studies comparing the positions that universities occupy in rankings. Some such studies have found a high correlation among the positions occupied by universities in ARWU, QS, and THE [2,3,7,8], and a low correlation of these three GURs with Webometrics [3]. However, this work analyzed the geographical distribution of the universities in the Top 500 of the four studied GURs and found different occupancy according to region, both for each quartile and for the total number of universities. ...
... Yet despite the geographical differences found among the four GURs, these geographical outcomes corroborate those reported in other works [7-9,52], in which more universities were found in Europe and North America, in this order [2,7]. The rankings in which European universities predominate over North American ones are THE and QS. ...
Article
Full-text available
Global University Rankings (GURs) intend to measure the performance of universities worldwide. Other rankings have recently appeared that evaluate the creation of environmental policies in universities, e.g., the Universitas Indonesia (UI) GreenMetric. This work aims to analyze the interaction between the Top 500 of such rankings by considering the geographical location of universities and their typologies. A descriptive analysis and a statistical logistic regression analysis were carried out. The former demonstrated that European and North American universities predominated in the Top 500 of GURs, while Asian universities did so in the Top 500 of the UI GreenMetric ranking, followed by European universities. Older universities predominated in the Top 500 of GURs, while younger ones did so in the Top 500 of the UI GreenMetric ranking. The second analysis demonstrated that although Latin American universities were barely present in the Top 500 of GURs, the probability of their appearing in the Top 500 of the UI GreenMetric ranking was five-fold higher. We conclude that a low association exists between universities' academic performance and their commitment to the natural environment at the heart of their institutions. It would be advisable for GURs to include environmental indicators in order to promote sustainability at universities and to contribute to addressing climate change.
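A hedged sketch of the kind of logistic regression reported above, modelling the odds of appearing in the UI GreenMetric Top 500 from region and university age; the variable names, model specification and synthetic data are assumptions for illustration, not the authors' actual model.

```python
# Sketch: logistic regression for Top 500 membership with region and age as
# predictors. Illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "in_greenmetric_top500": rng.integers(0, 2, n),
    "region": rng.choice(["Europe", "North America", "Asia", "Latin America"], n),
    "age_years": rng.integers(10, 400, n),
})

model = smf.logit("in_greenmetric_top500 ~ C(region) + age_years", data=df).fit()
print(model.summary())

# exp(coef) gives odds ratios, e.g. how many times higher the odds of a
# Latin American university appearing in the Top 500 are relative to the
# reference region.
print(np.exp(model.params))
```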
Chapter
Full-text available
Economic growth has been closely related to technological advances and, in particular, to productivity; however, firm productivity in Ecuador has been little studied, even though it is known that understanding the behavior of productivity can drive economic growth in the long run (Solow, 1956; Romer, 1986). The objective of this work is to analyze the productivity of the Ecuadorian manufacturing sector and determine its main factors over the period 2007–2017. To this end, a firm-level production function is estimated with the traditional methodology in a simple empirical Cobb-Douglas (1928) framework with the traditional inputs: capital, labor and raw materials. A semi-parametric model was used for the estimation so as to minimize the simultaneity and endogeneity problems in input selection. Once the production function is estimated, total factor productivity (TFP) is calculated, and a determinants model is then developed, including the GDP business cycle, public and private investment, government consumption, and non-oil revenues (taxes). The main conclusions are that the manufacturing sector is intensive in the consumption of raw materials, followed by labor and capital. Furthermore, TFP showed low growth rates throughout the period, growing on average by only 0.31%. Finally, the Gross Domestic Product (GDP) cycle and non-oil revenues have a positive relationship with manufacturing TFP, while public and private investment have a negative relationship with TFP.
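For illustration only, the production-function step described above can be approximated with a log-linear Cobb-Douglas regression and TFP taken as the residual; the chapter itself uses a semi-parametric estimator to address input endogeneity, and the column names and data below are invented.

```python
# Simplified sketch: log-linear Cobb-Douglas OLS and TFP as the residual.
# Plain OLS shown only for illustration; it does not handle input endogeneity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
firms = pd.DataFrame({
    "output": rng.lognormal(10, 1, n),
    "capital": rng.lognormal(8, 1, n),
    "labor": rng.lognormal(4, 0.5, n),
    "materials": rng.lognormal(9, 1, n),
})
for col in ["output", "capital", "labor", "materials"]:
    firms[f"ln_{col}"] = np.log(firms[col])

ols = smf.ols("ln_output ~ ln_capital + ln_labor + ln_materials", data=firms).fit()
print(ols.params)                      # estimated input elasticities

# TFP (in logs) is the part of output not explained by the measured inputs.
firms["ln_tfp"] = ols.resid
print(firms["ln_tfp"].describe())
```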
... International rankings have been the subject of numerous investigations that have focused on studying the correlations and contributions of the different indicators. Indeed, measurement techniques have been used such as factor analysis [13,25-30] and principal component, regression and correlation analyses [13,26-32]. The I-distance analysis method [33] has also been used to study the classifications comprehensively. ...
Article
Full-text available
International rankings have achieved great prestige as an instrument for measuring university excellence as a quality assurance mechanism. Thus, this research aims to observe the behavior of the best-positioned universities in these rankings, as well as the trajectories of indicators and institutions. In fact, the Top-15 universities of ARWU, THE, and QS were selected, and an analysis was carried out using the Dynamic Biplot technique, which allows the study of the relationship between a set of multivariate data developed on more than one occasion. The results prove that world-class universities show different characteristics and trajectories when analyzed multivariate and dynamically. The ranking indicators also reveal different correlations that may affect the final ranking.
... But it gives no opportunity to carry out a cross-correlation analysis with other World University Rankings, as it does not rank universities in the usual way. Moed (2017) used the data of the 2016 U-Multirank ranking for a comparative analysis with ARWU (2015), CWTS Leiden (2016), THE (2015-2016), and QS (2015-2016), as U-Multirank singles out TOP-100 universities and makes it possible to obtain quantitative data for certain of its indicators. He obtained the university overlaps between the five ranking systems, showing that for the TOP-100s of those rankings the total number of different universities was 194, and the number of overlapping universities was 35. ...
Article
Full-text available
The article examines the global university reputation race, launched in 2003. Between 2003 and 2010, a cluster of publications appeared on the qualitative comparative analysis of ranking methodologies, and since 2010 a cluster of publications on the quantitative comparative analysis of university rankings has started to form. The review made it possible to identify a number of unsolved problems concerning the stability of university ranks and the aggregation of the number of universities and their Overall Scores (Total Scores) by country across various rankings. Our study, aimed at solving these tasks, was carried out for the TOP-100s of the ARWU, QS, and THE rankings. When calculating the fluctuation range of the university ranks, the twenty most stable and the twenty most unstable university ranks were identified in the rankings under study. The best values of the aggregated indicators, by the number of universities and by Overall Scores, were obtained for the USA and the UK.
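The two calculations mentioned above, the fluctuation range of a university's rank over several years and the country-level aggregation of the number of universities and their Overall Scores, can be sketched with a small pandas example; the universities, countries, years and scores are illustrative, not the article's data.

```python
# Sketch: rank fluctuation range per university and country-level aggregation.
import pandas as pd

records = pd.DataFrame({
    "university": ["U1", "U1", "U1", "U2", "U2", "U2", "U3", "U3", "U3"],
    "country":    ["USA", "USA", "USA", "UK", "UK", "UK", "USA", "USA", "USA"],
    "year":       [2018, 2019, 2020] * 3,
    "rank":       [5, 9, 7, 12, 11, 20, 30, 28, 35],
    "overall_score": [92.0, 88.5, 90.1, 80.2, 81.0, 75.4, 70.3, 71.2, 68.8],
})

# Fluctuation range per university: max rank minus min rank over the period.
fluctuation = records.groupby("university")["rank"].agg(lambda r: r.max() - r.min())
print(fluctuation.sort_values())       # most stable universities first

# Country-level aggregation for a single year.
latest = records[records["year"] == 2020]
by_country = latest.groupby("country").agg(
    universities=("university", "nunique"),
    total_overall_score=("overall_score", "sum"),
)
print(by_country)
```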
... Finally, the rankings may contain ties, allowing multiple universities to occupy the same rank. All these features make some well-known and frequently used metrics, such as Spearman's rank correlation (Chen & Liao, 2012; Moed, 2017; Shehatta & Mahmood, 2016; Soh, 2011), Kendall's τ distance (Angelis et al., 2019; Liu, Zhang, et al., 2011), and Spearman's footrule (Abramo & D'Angelo, 2016; Aguillo et al., 2010), incapable of accurately quantifying the similarities or differences among university rankings. Despite the different opinions presented by different studies, there might be one conclusion reached in common: it is very rare, if not impossible, for a university to hold the same position in all rankings. ...
... This prompts us to ask the second question: to what extent can a university's rank be raised in different rankings? It is straightforward to find a university's best rank, or the fluctuation of its rank (Shehatta & Mahmood, 2016; Shi, Yuan, & Song, 2017; Soh, 2011). However, to quantitatively measure the boost, we also need a baseline that represents the "average" or consensus rank of this university, obtained by combining information from all rankings considered. ...
Article
Full-text available
University ranking has become an important indicator for prospective students, job recruiters, and government administrators. The fact that a university rarely has the same position in different rankings motivates us to ask: To what extent could a university’s best rank deviate from its “true” position? Here we focus on 14 rankings of Chinese universities. We find that a university’s rank in different rankings is not consistent. However, the relative positions for a particular set of universities are more similar. The increased similarity is not distributed uniformly among all rankings. Instead, the 14 rankings demonstrate four clusters where rankings are more similar inside the cluster than outside. We find that a university’s best rank strongly correlates with its consensus rank, which is, on average, 38% higher (towards the top). Therefore, the best rank usually advertised by a university adequately reflects the collective opinion of experts. We can trust it, but with a discount. With the best rank and proportionality relationship, a university’s consensus rank can be estimated with reasonable accuracy. Our work not only reveals previously unknown patterns in university rankings, but also introduces a set of tools that can be readily applied to future studies.
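A small sketch of the best-rank versus consensus-rank comparison discussed above; the consensus rank is taken here simply as the median across rankings, which is one plausible choice rather than the paper's exact aggregation, and the rank matrix is made up.

```python
# Sketch: best rank vs a median-based consensus rank across several rankings.
import pandas as pd

# Rows: universities; columns: ranks in different ranking systems (toy data).
ranks = pd.DataFrame(
    {"ranking_1": [3, 10, 25, 40],
     "ranking_2": [5, 8, 30, 55],
     "ranking_3": [2, 15, 20, 60]},
    index=["U1", "U2", "U3", "U4"],
)

summary = pd.DataFrame({
    "best_rank": ranks.min(axis=1),
    "consensus_rank": ranks.median(axis=1),
})
# Relative boost of the advertised best rank over the consensus rank.
summary["boost"] = 1 - summary["best_rank"] / summary["consensus_rank"]
print(summary)
```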
... Descriptive statistics are used to measure the central tendency (Çakır, Çilingir, 2015; Khosrowjerdi & Kashani, 2013; Olcay & Bulu, 2017). When determining the strength of the relationship between the individual ranking systems and the indicators they use, Spearman's correlation (Chen & Liao, 2012; Khosrowjerdi & Kashani, 2013; Moed, 2017; Shehatta & Mahmood, 2016) and Pearson's correlation (Claassen, 2015; Delgado-Márquez, Hurtado-Torres, & Bondar, 2011; Olcay & Bulu, 2017; Shehatta & Mahmood, 2016; Waltman et al., 2012) have been used. ...
Article
Full-text available
The main goal of this paper is to determine the significance of the identified university ranking indicators in relation to the overall measurement systems of selected global ranking systems. The research was divided into two phases. The first phase consists of a systematic overview of the literature, which resulted in the identification of 16 global ranking systems. In the second phase, an empirical analysis was carried out on 10 active ranking systems selected on the basis of the set criteria. The ranking systems are examined with regard to their measurement focus. According to the results of the empirical analysis, the research category indicators account for 67.93% of the sum of the weight coefficients in the overall measurement system, followed by the reputation category indicators with 13.5% and the web performance category indicators with 9.37%. Most global ranking systems place their focus predominantly on measuring research performance as the crucial indicator of the quality and competitiveness of universities.
... A third approach is to correlate accredited universities with the size of their scientific output (Shehatta; Mahmood, 2016). In the case of Colombia that threshold is 150 scientific articles per year, and in the case of Chile the last university accredited in research by the National Accreditation Commission is the Universidad de Magallanes (the southernmost in the world), with 154 documents in 2018. ...
Article
Full-text available
The aim of this article is to generate a discussion on how to differentiate research universities, teaching universities that carry out research, and teaching universities. The distinctions made by university taxonomies, global higher education rankings, and the academic information generated by the system's own actors are analyzed. The study shows how current taxonomies and traditional models fail to characterize university systems, not even in the country where they originated. Global university rankings prove to be a useful source of information and a way of contributing to institutional visibility and reputation. Academic information, when it comes to qualitative indicators of scientific output, faces access limitations, especially for teaching universities that carry out research. Examples of a lack of methodological rigor in the reuse of such indicators are documented. The characterization of scientific output must balance indicators that describe the size of the output with those that characterize its performance, impact, and excellence. The analysis of the empirical information shows that some institutions, in all the countries analyzed and of all sizes, present impact results below the world average, showing that there is no positive correlation between research performance and institutional size. The difficulty lies not so much in determining when we are dealing with a research university, but in defining the boundary between a teaching university that carries out research and one that is essentially a teaching institution. At a minimum, a teaching university that carries out research produces 100 articles per year, counted in five-year windows, and it is desirable for that size threshold to be above 150 articles per year. A research university produces at least 1,000 to 2,000 articles per year and graduates at least 20 doctoral students per year. In both cases, the institutions must reach performance, impact, and excellence indicators, including for institution-led output, that demonstrate that the university has a faculty capable of carrying out original research autonomously, with results equivalent to those of their colleagues around the world.
... There have been instances of industry-academia collaborations or partnerships for specific research outcomes resulting in technology development or creating a business opportunity for the company. Such collaborative engagements are reported with the large industries in the pharmaceutical [2] and Information Technology (IT) [3,4] sectors. The collaborative engagements enable the utilization of expertise in academia for technology development. ...
Conference Paper
Small and Medium-sized Enterprises (SMEs) can benefit significantly by exploiting and successfully commercializing technological innovations from academic institutions. The primary challenges in realizing this are the non-availability of systematic repositories and the lower Technology Readiness Levels (TRL) of systems or products available at academic institutions. This paper presents Research2Market Connect, which aggregates innovations within academic institutions and lists the repository on a cloud-based platform to offer shared access to SMEs. As academic research may not be readily suited to market needs, the platform facilitates raising the TRL level (from TRL 4 to TRL 7) through online collaborations and resource provisioning using the service-oriented model. The platform elements (an academic profile, an industry profile, and a mechanism for improving the TRL level using the Design-as-a-Service (DaaS) framework) are conceived and discussed in the paper. An illustrative case study of the rapid development and deployment of a face shield during the COVID-19 pandemic is presented to showcase the importance of collaborative innovation using a cloud-based platform. It is shown that cloud-based platforms similar to Research2Market Connect can enable SMEs to leverage research in academic institutions, together with human expertise and computational resources, to overcome resource barriers and swiftly respond to changing market conditions.
... The recent rankings of the global Top 10 universities by various organizations are reported in Table 2. The rankings illustrate similarities and differences, consistent with those that have been observed in multiple previous investigations (Buela-Casal et al., 2007; Chen & Liao, 2012; Moskovkin et al., 2022; Shehatta & Mahmood, 2016). ...
Article
Full-text available
Purpose: The quantitative rankings of over 55,000 institutions and their institutional programs are based on the individual rankings of approximately 30 million scholars determined by their productivity, impact, and quality. Design/methodology/approach: The institutional ranking process developed here considers all institutions in all countries and regions, thereby including those that are established as well as those that are emerging in scholarly prowess. Rankings of individual scholars worldwide are first generated using the recently introduced, fully indexed ScholarGPS database. The rankings of individual scholars are extended here to determine the lifetime and last-five-year Top 20 rankings of academic institutions over all Fields of scholarly endeavor, in 14 individual Fields, in 177 Disciplines, and in approximately 350,000 unique Specialties. Rankings associated with five specific Fields (Medicine, Engineering & Computer Science, Life Sciences, Physical Sciences & Mathematics, and Social Sciences) and two Disciplines (Chemistry, and Electrical & Computer Engineering) are presented as examples, and changes in the rankings over time are discussed. Findings: For the Fields considered here, the Top 20 institutional rankings in Medicine have undergone the least change (lifetime versus last five years), while the rankings in Engineering & Computer Science have exhibited significant change. The evolution of institutional rankings over time is largely attributed to the recent emergence of Chinese academic institutions, although this emergence is shown to be highly Field- and Discipline-dependent. Research limitations: The ScholarGPS database used here ranks institutions in the categories of: (i) all Fields, (ii) 14 individual Fields, (iii) 177 Disciplines, and (iv) approximately 350,000 unique Specialties. A comprehensive investigation covering all categories is not practical. Practical implications: Existing rankings of academic institutions have: (i) often been restricted to pre-selected institutions, clouding the potential discovery of scholarly activity in emerging institutions and countries; (ii) considered only broad areas of research, limiting the ability of university leadership to act on the assessments in a concrete manner; or, in contrast, (iii) considered only a narrow area of research for comparison, diminishing the broader applicability and impact of the assessment. In general, existing institutional rankings depend on which institutions are included in the ranking process, which areas of research are considered, the breadth (or granularity) of the research areas of interest, and the methodologies used to define and quantify research performance. In contrast, the methods presented here can provide important data over a broad range of granularity to allow responsible individuals to gauge the performance of any institution from the Overall (all Fields) level down to the level of the Specialty. The methods may also assist in identifying the root causes of shifts in institution rankings, and how these shifts vary across hundreds of thousands of Fields, Disciplines, and Specialties of scholarly endeavor. Originality/value: This study provides the first ranking of all academic institutions worldwide over Fields, Disciplines, and Specialties based on a unique methodology that quantifies the productivity, impact, and quality of individual scholars.
... Both University Rankings by Academic Performance (URAP) and the National Taiwan University (NTU) Rankings focus exclusively on scientific research performance. US News & World Report's (USNWR) Top Global University Rankings emphasize academic research and overall recognition [46]. By aligning sustainability practices with institutional goals, sustainability-related university rankings can help university administrators focus on sustainable development actions. ...
Article
Full-text available
The concept of sustainability has become increasingly important, especially as a result of the depletion of energy resources and growing environmental concerns. UI GreenMetric ranks universities based on sustainability, environmental, and energy concerns, addressing issues of environmental pollution, food and water scarcity, and energy supply. By prioritizing sustainability on their campuses, universities are working to ensure a more sustainable future for humanity. This study evaluates university sustainability in energy and climate change using the UI GreenMetric ranking, focusing on the sustainability ranking of Turkish universities. It incorporates variables such as infrastructure, energy, climate change, waste, water, public transportation, and education and research, using weighting approaches to reveal the most important variables for the country's universities. The study utilized weighting techniques such as CRITIC, entropy, standard deviation-based, and equal weighting approaches to reproduce the UI GreenMetric rankings; the entropy and equal weighting methods were found to be closest to the UI GreenMetric rankings. The rankings of 83 Turkish universities were analyzed using the TOPSIS method with the four weighting techniques. For Turkish universities, the CRITIC method yielded the highest weight for the energy and climate change variables, while water was identified as the most significant factor for the entropy, installation infrastructure, and standard deviation-based weighting techniques.
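A compact sketch of entropy weighting followed by TOPSIS, the combination used above; all criteria are treated as benefit criteria for simplicity, and the criterion names and scores are illustrative, not UI GreenMetric values.

```python
# Sketch: entropy weights derived from the decision matrix, then TOPSIS ranking.
import numpy as np

X = np.array([           # rows: universities, columns: criteria (toy values)
    [70, 2000, 55],      # e.g. energy score, waste recycled, water score
    [85, 1500, 60],
    [60, 2500, 80],
    [90, 1800, 65],
], dtype=float)

# --- entropy weights ---
p = X / X.sum(axis=0)                                  # column-wise proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)             # assumes all entries > 0
weights = (1 - entropy) / (1 - entropy).sum()

# --- TOPSIS (all criteria treated as benefit criteria) ---
norm = X / np.sqrt((X ** 2).sum(axis=0))               # vector normalisation
v = norm * weights
ideal_best, ideal_worst = v.max(axis=0), v.min(axis=0)
d_best = np.sqrt(((v - ideal_best) ** 2).sum(axis=1))
d_worst = np.sqrt(((v - ideal_worst) ** 2).sum(axis=1))
closeness = d_worst / (d_best + d_worst)

print("entropy weights:", np.round(weights, 3))
print("university order (best first):", np.argsort(-closeness) + 1)
```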
... Traditionally, the scholarly reputation of program faculty has been the measuring stick for determining PhD program success (Brooks, 2005; Cartter, 1966; Safón, 2019; Shehatta & Mahmood, 2016). However, there is a growing need to move beyond scholarly recognition of program faculty and other discrete variables, such as time-to-degree, and toward a better understanding of how students and graduates themselves experience and view their PhD programs (Beiber & Worley, 2006; Rosemary et al., 2020; Yusuf et al., 2020). ...
Article
Best practices for graduate student education and training have come under recent scrutiny, especially as universities look for methods of improving their programs. One way of demonstrating a proficient and successful program is to provide evidence of successful alumni. This study provides a quantitative and qualitative analysis of an alumni survey of a criminology doctoral program at a mid-sized university in the Northeast. With the results, we intend to highlight not only the successes of the alumni, but also potential lessons that other doctoral programs can apply to enhance the future success of their graduates.
... In this sense, the strong relationship between the indices found in this research supports the view that using different indices has a low impact on ranking results. The findings also substantiate previous findings on the relationship between different ranking results in the literature (Arkalı Olcay & Bulu, 2017; Shehatta & Mahmood, 2016), which showed a significant correlation existed between rankings. ...
Article
Full-text available
Research in academic university rankings mainly focuses on methodological improvements to ranking or concerns the practice rather than the principle. There is a tendency in the core ranking literature to accept rankings ontologically as reality-reflecting phenomena. However, this research attempts a political analysis of ranking systems as a hegemonic governing apparatus within the framework of the Gramscian Theory of Hegemony. For this purpose, we analyzed the top 100 lists of global university rankings and the indices used in the rankings as research indicator sources. Even though this research is designed as a political analysis, we integrated statistical findings to reveal the hegemonic oligarchs in rankings. The results show a dominance of the USA and major Western European countries in the ranking results and in the indices in terms of possession of journals. Moreover, correlation analysis gives evidence that the results of different ranking systems reproduce a pre-given hierarchy. Drawing on Gramsci, the article resists the view of rankings as apolitical, subjective performance criteria of educational value, and instead opens the rankings to discussion in the realm of contestable politics, as valuation and hierarchization tools of academic capitalist and neoliberalist forces seeking to shape higher education globally within the frame of a best model defined by global elites.
... A comparative analysis of five GURs (ARWU, QS-THE, Webometrics, Leiden, HEEACT), based on an assessment of their rank comparability [10], showed a strong similarity between the ARWU and HEEACT rankings, whereas the others differed considerably from one another [11]. (ARWU, the Academic Ranking of World Universities, is the academic ranking of the world's universities developed by Shanghai Jiao Tong University; QS-THE, the QS World University Rankings with Times Higher Education, was until 2009 a joint ranking of the British publication Times Higher Education (THE) and the company Quacquarelli Symonds.) A correlation between these two GURs was also found in studies [12,13], whereas in [14] the closest correlation was between the Taiwanese NTU and the Turkish University Ranking by Academic Performance (URAP). Factor analysis of the indicators of the ARWU, THE, and QS ranking products illustrated that these systems are not mutually supportive and additive [15]. ...
Article
Full-text available
The article deals with the problem of identifying world-class universities (WCU) on the basis of information provided by various ranking systems. The relevance of the problem is due to the fact that in 2022 Russia was “cut off” from the world community, including the interruption of cooperation with the leading international rankings, so the country risks losing the opportunity to check its successes and failures against generally recognized criteria. In this regard, the purpose of this article is to verify the hypothesis that the “friendly” ARWU ranking base can serve as an effective substitute for the “unfriendly” QS ranking base. To test the formulated hypothesis, we used a previously developed algorithm for identifying WCU using statistical data from five Global University Rankings (Quacquarelli Symonds (QS), Times Higher Education (THE), Academic Ranking of World Universities (ARWU), Center for World University Rankings (CWUR), and National Taiwan University Ranking (NTU)) and two University Rankings by Subject (QS and ARWU). The calculations disproved the general hypothesis and revealed a fundamental inconsistency in the results obtained on the basis of different rankings. In addition, using the example of ARWU, a profound contradiction in the logic of compiling the GUR and the SRU was uncovered. That raises a broader question about the adequacy of the concept of the WCU itself. To answer this question, we conducted a “humanitarian test” of the validity of modern WCU, which showed the presence of elementary illiteracy and lack of culture among graduates of leading universities. The collected stylized examples made it possible to establish that the universities leading the modern world market do not pass the “humanitarian test”, and therefore the entire rating system cannot be considered a reliable basis for conclusions about the activities of universities. The question of replacing the term WCU with a less pretentious “product” category, practice-oriented universities, is discussed.
... On the other hand, THE covered more than 1500 universities in its 2021 Edition, while the Leiden Ranking, for the year 2021, covered around 1200 universities. These academic ranking systems apply their methodology in the assessment of universities, and there is a moderate to high correlation between these systems (Shehatta & Mahmood, 2016). All academic ranking systems try to measure academic excellence using various indicators such as number of publications, citations per faculty, number of students, awards, number of articles in Nature and Science journals, etc. Academic ranking systems can serve as a proxy for universities' educational and research quality. ...
Article
Full-text available
Interest in academic ranking systems increased substantially in the last two decades. The majority of existing ranking systems are highly exclusive and cover up to 1500 best-positioned world universities. An exception to these ranking systems is the Webometrics ranking, which ranks more than 31000 universities throughout the world. In this study, we wanted to examine what factors best predict the Webometrics rankings. The sample for this study consisted of 102 European universities, with the Webometrics ranks ranging from 18th position to 6969th position. We examined the effects of the number of Web of Science publications, Scopus publications, and ResearchGate-related data on Webometrics ranking. Data retrieved from the academic social network site ResearchGate predicted 72% of the variance in the Webometrics ranking. The number of Scopus publications was the single best determinant of whether the university will be positioned in the top 1000 ranked universities. These results indicate the potential use of ResearchGate scores in the rankings of universities and serve as a proxy for universities' excellence. This, in turn, can be useful to government policymakers and university leaders in creating better strategies for enhancing the reputation of universities.
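A hedged sketch of the two analyses described above: a linear model for how much variance in the Webometrics rank is explained by publication counts and ResearchGate-style scores, and a logistic model for membership in the top 1000. The features, coefficients and data are synthetic assumptions, not the study's dataset.

```python
# Sketch: variance explained in Webometrics rank, plus a top-1000 classifier.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 102
scopus_docs = rng.integers(500, 60_000, n).astype(float)
rg_score = rng.normal(8_000, 3_000, n)
# Synthetic rank: better (smaller) for institutions with more output.
webometrics_rank = 7_000 - 0.08 * scopus_docs - 0.2 * rg_score + rng.normal(0, 400, n)

X = np.column_stack([scopus_docs, rg_score])
reg = LinearRegression().fit(X, webometrics_rank)
print("R^2 of the rank model:", round(reg.score(X, webometrics_rank), 2))

# Binary outcome: is the university inside the top 1000?
top1000 = (webometrics_rank <= 1000).astype(int)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, top1000)
print("top-1000 accuracy (in-sample):", round(clf.score(X, top1000), 2))
```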
... The world's top-ranking universities attract industry funding for their projects and innovations. The pharmaceutical industry is one of the most prominent investors in universities (Shehatta and Mahmood, 2016). The IT industry has also started investing in academia, having concluded that it is beneficial to spend on students and academic institutions. ...
Article
Full-text available
The study was undertaken to integrate two different aspects of the triple helix model: universities and industry. Special attention has been paid to the prevailing differences between the two, which hamper their working as a coherent unit. Integrating the existing knowledge in the study, we propose the Academia-Industry Collaboration Plan (AICP) design model. The model comprises processes, methods or approaches, and tools. Processes serve as a road map for third parties to establish collaboration between academia and industry; they include the essential process models and a series of steps that help minimize the organizational complexity of the collaboration process between academia and industry. Methods or approaches serve the purpose of implementing those processes effectively. Finally, appropriate tools are selected to integrate possible collaboration improvements that lead to innovation.
... Typically, universities that are higher ranked attract more competitive students, which could lead to higher performance at a higher-ranked university (Jabjaimoh et al., 2019). Rankings are largely based on academic achievement, and the US News and World Report Best Global University Rankings (USNWR) is a well-known ranking system that uses quantitative measurement for its ranking calculations but is highly correlated with the other ranking systems (Shehatta and Mahmood, 2016). ...
Article
Full-text available
The transition of courses from in-person to an online format due to the COVID-19 pandemic could have potentially affected overall student performance in lecture-based courses. The objective of this case study was to determine the impact of course format, as well as the effects of student sex, time of year at which the course was taken, and the institution it was taken at on student performance in an undergraduate animal science course. The course used for this study was taught at two institutions (University of Florida; UF and University of Nevada, Reno; UNR) over seven years (2014-2017 at UNR and 2018-2021 at UF). Student performance (n = 911) was evaluated using both quizzes and exams from 2014 through the spring semester 2020 and only exams were used for summer and fall semesters of 2020 and the spring and summer semesters of 2021. The final score (out of 100%) for each student was used to evaluate student performance. In addition, students were classified as high performing students if they scored ≥ 95% and low performing students if they scored ≤ 70%. The variables that were evaluated were the effects of semester (spring, summer, or fall), institution (UF or UNR), sex (male or female), number of teaching assistants (TAs; 0 to 13), and course format (online or in-person). The course was taught in-person at UNR and in-person and online at UF. The spring semester of 2020 was taught in-person until March but was switched to online approximately nine weeks after the semester started and was considered an online semester for this analysis. As the course was only taught online at UF, the variable course format was assessed using UF records only. Data was analyzed using both linear models and logistic regressions. The probability that students were high performing was not affected by sex or institution. Interestingly, both fall semester, and the online format had a positive, desirable effect on the probability that students were high performing. The probability that students were low performing was not affected by sex. However, if a student performed poorly in the class, they were more likely to have taken the course at UNR, or at UF with many TAs. Thus, student performance was impacted by changing the course format, as well as institution, the number of TAs, and the semester in which the course was taken.
... The Kingdom of Saudi Arabia (KSA) is among the emerging countries in the world university rankings and has been investing significantly in higher education to create world-class universities [1], [2]. There are many currently ABET-accredited academic programs in Saudi higher education institutions at the bachelor's degree level [3]. ...
... QS-WUR evaluates universities based on six metrics: (1) academic reputation; (2) employer reputation; (3) faculty/student ratio; (4) citations per faculty; (5) international faculty ratio; and (6) international student ratio. According to Shehatta and Mahmood (2016), the QS-WUR and five other major university rankings (the Academic Ranking of World Universities, the Times Higher Education World University Ranking, the US News & World Report Best Global University Rankings, the National Taiwan University Ranking, and the University Ranking by Academic Performance) showed moderate-to-high correlations among them regardless of their methodological differences. Unlike other well-known university ranking systems, S-IR has not been discussed comparatively with QS-WUR. ...
Article
Full-text available
The mission statement(s) (MS) is one of the most-used tools for planning and management. Universities worldwide have implemented MS in their knowledge planning and management processes since the 1980s. Research studies have extensively explored the content and readability of MS and their effect on performance in firms, but their effect on public or nonprofit institutions such as universities has not been scrutinized with the same intensity. This study used Gunning's Fog Index score to determine the readability of a sample of worldwide universities' MS and two rankings, i.e., the Quacquarelli Symonds World University Ranking and the SCImago Institutions Rankings, to determine their effect on performance. No significant readability differences were identified by region, size, focus, research type, age band, or status. Logistic regression (cumulative link model) results showed that variables such as universities' age, focus, and size have more significant explanatory power for performance than MS readability.
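A rough sketch of a Gunning Fog Index calculation like the one applied above to mission statements; the syllable counter is a crude heuristic used only to illustrate the formula 0.4 × (words per sentence + 100 × complex-word share), not a substitute for a proper readability library.

```python
# Sketch: Gunning Fog Index of a short text with a heuristic syllable counter.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as one syllable (crude heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

mission = ("Our university advances knowledge through excellent research and "
           "teaching. We serve society by educating responsible global citizens.")
print(round(gunning_fog(mission), 1))
```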
... Correlation analysis of scientometric data provides very interesting insights and has been used in several studies, for example to find the relationship between two university ranking systems [43] and to investigate the correlation among the top 100 universities [44]. The data collected for our study are not normally distributed; therefore, instead of Pearson correlation, Spearman correlation analysis has been used to determine the monotonic relationships among variables [45]. ...
Preprint
Annual ranking of higher educational institutes (HEIs) is a global phenomenon, and past research shows that it has a significant impact on the higher education landscape. In spite of criticisms regarding the goals, methodologies and outcomes of such ranking systems, previous studies reveal that most universities pay close attention to ranking results and look forward to improving their ranks. Generally, each ranking framework uses its own set of parameters, and the data for individual metrics are condensed into a single final score for determining the rank, thereby making it a complex multivariate problem. Maintaining a good rank and ascending in the rankings is a difficult task because it requires considerable resources, effort and accurate planning. In this work, we show how exploratory data analysis (EDA) using correlation heatmaps and box plots can aid in understanding the broad trends in the ranking data; however, it is challenging to make institutional decisions for rank improvement based on EDA alone. We present a novel idea of classifying the rankings data using Decision Tree (DT) based algorithms and retrieving decision paths for rank improvement using data visualization techniques. Using the Laplace correction to the probability estimate, we quantify the amount of certainty attached to different decision paths obtained from interpretable DT models. The proposed methodology can aid HEIs to quantitatively assess the scope of improvement, adumbrate a fine-grained long-term action plan and prepare a suitable road map.
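A minimal sketch of the decision-tree idea outlined above: classify institutions into rank bands from indicator scores, print a decision path, and attach a Laplace-corrected probability to a leaf. The features, band definition and data are illustrative assumptions, not the preprint's actual setup.

```python
# Sketch: decision-tree classification of rank bands with Laplace-corrected
# leaf probabilities. Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 300
X = rng.uniform(0, 100, size=(n, 3))          # e.g. teaching, research, citations
y = (0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 5, n) > 45).astype(int)  # 1 = upper band

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["teaching", "research", "citations"]))

# Laplace-corrected probability of the upper band in the leaf reached by one
# hypothetical institution: (n_class_in_leaf + 1) / (n_leaf + n_classes).
sample = np.array([[60.0, 70.0, 40.0]])
leaf = tree.apply(sample)[0]
in_leaf = tree.apply(X) == leaf               # training samples in that leaf
counts = np.bincount(y[in_leaf], minlength=2)
laplace_p = (counts[1] + 1) / (counts.sum() + 2)
print("Laplace-corrected P(upper band):", round(float(laplace_p), 3))
```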
... In response to these criticisms, we elected to use three of the most popular international business school rankings. This is consistent with previous work attempting to make up for individual deficiencies by taking multiple rankings into consideration (e.g., Shehatta & Mahmood, 2016). As mission statements are inherently complex rhetorical signal sets, future work could examine the extent to which awareness influences their perceptions of the institution. ...
... There are several other global rankings of universities, including the Quacquarelli Symonds World University Ranking (QS) and the Times Higher Education World University Ranking (THE). All of these rankings are highly correlated (Shehatta and Mahmood, 2016; Dill and Soo, 2005). Aguillo et al. (2010) compared university rankings using similarity measures and found that each ranking is similar to other rankings. ...
Article
Full-text available
We use survey data for up to 292 universities in 17 European countries to examine the influence of the employment share in knowledge-intensive services (KIS), location in a metropolitan region, and competition from other universities and research institutes in the same region on three measures of university knowledge transfer outcomes: the number of research agreements, licensing, and the number of start-ups. The results show that the KIS employment share has a positive correlation with the number of start-ups, while the location in a metropolitan region is positively correlated with the number of research agreements. Competition from quality-weighted universities in the same region as the focal university decreases the number of research and licensing agreements, although the highest-ranked 13.4% of universities benefit from the regional co-location of other high-quality universities for license income. The number of research institutes in the same region is unrelated to the number of research agreements, licenses and start-ups, but has a positive effect on license income. These results suggest that universities compete with top-ranked universities for regional demand for knowledge.
... (2) employer reputation; (3) faculty/student ratio; (4) citations per faculty; (5) international faculty ratio; and (6) international student ratio. According to Shehatta and Mahmood (2016) the QS-WUR and five major university rankings, such as the Academic Ranking of World Universities, the Times Higher Education World University Ranking, the US News & World Report Best Global University Rankings, the National Taiwan University Ranking, and the University Ranking by Academic Performance, showed moderate-to-high internal correlations among them regardless of their methodological differences. ...
Preprint
Full-text available
The mission statement (MS) is one of the most widely used tools for planning and management. Universities worldwide have incorporated MS into their knowledge planning and management processes since the 1980s. Research has extensively explored the content and readability of MS and their effect on performance in firms, but their effect on public or nonprofit institutions such as universities has not been scrutinized with the same intensity. This study used Gunning's Fog Index score to determine the readability of a sample of worldwide universities' MS and two rankings, i.e., the Quacquarelli Symonds World University Ranking and the SCImago Institutions Rankings, to determine their effect on performance. No significant readability differences were identified across regions, size, focus, research type, age band, or status. Logistic regression (cumulative link model) results showed that variables such as a university's age, focus, and size have greater explanatory power on performance than MS readability.
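For reference, Gunning's Fog Index can be approximated as 0.4 × (average sentence length + percentage of words with three or more syllables); the Python sketch below uses a simple vowel-group heuristic for syllable counting, which published readability tools refine further, and the sample mission text is invented.

import re

def fog_index(text: str) -> float:
    # Split into sentences and words, then count "complex" words (3+ syllables).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

mission = ("We advance knowledge and educate students in science and technology. "
           "We serve the nation and the world through excellence in research.")
print(round(fog_index(mission), 2))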
... This is precisely why the same university can occupy completely different positions in different rankings [13]. It is worth noting, however, that most global rankings tend to feature largely the same universities, which account for no more than 10% of all universities operating in the world today [14]. And not only because they are the best, but also because they are the most active. ...
Article
Full-text available
The paper suggests a new approach to performance evaluation of universities in Russia and worldwide based on aggregation of the results of ten global rankings and the Database of External Quality Assurance Results (DEQAR). The method of league analysis (MetALeague) allows the authors to integrate the diverse results of ranking and evaluation regardless of considerable variations in ranking methodology and indicators. As a result of MetALeague integration, the Global Aggregated Ranking of higher education institutions has been produced. The paper analyses the global rankings and the position of Russian HEIs in these rankings. The authors have also analyzed the positions of countries based on the number of HEIs included in the Global Aggregated Ranking. Due consideration of these research results, according to the authors, could propel the leading Russian universities to higher positions in the global educational environment.
... Ranking has been widely considered an important tool for evaluating the performance, competitiveness, and success of academic institutions (Zare Banadkouki et al., 2018; Tijssen & Winnink, 2018), though it means different things to different stakeholders. In recent years, university rankings have gained much interest and importance from a wide range of stakeholders including students, parents, policy makers and, more importantly, funding agencies (Shehatta & Mahmood, 2016), which is crucial for a developing country like India seeking to attract international agencies for collaborative research and funding. In a very short period of time, the ranking of institutions has moved to the foreground of the policy arena of higher education (Goglio, 2016). ...
Article
Full-text available
The first decade of the 21st century witnessed an unprecedented increase in Higher Education Institutions (HEIs) in India. In that decade alone, 19,493 new colleges and 257 universities were established, bringing the total number of colleges and universities to 31,324 and 493 respectively by the end of 2009-2010 (UGC, 2010), as against 11,831 colleges and 236 universities up to the year 1999-2000 (UGC, 2000). This sudden surge in the number of institutions attracted much debate on the quality of higher education in the country. Apart from the mandatory accreditation of courses/institutions by government-established bodies, in 2015 the government of India instituted the National Institutional Ranking Framework (NIRF) to evaluate and judge the annual performance of HEIs against predefined criteria. This paper reports a comparative study of the scientific publications of nationally ranked engineering institutions in the five years on either side of the launch of NIRF. The study aims to trace the trend of research and to find out the relationship between the ranking of institutions in terms of research output and the overall ranking as per NIRF. The study uses scientometric indicators to rank the engineering institutions based on research output and its impact. To calculate this, data on the scholarly output of the institutions under study, and the citations these publications subsequently received, were retrieved from WoS. The study evaluates four primary aspects of research output, i.e. productivity, research impact, funding for this research and international collaboration.
... Their job was to consider the internal advances of the institutions. However, global university rankings present some critical issues about weighting schemes, reproducibility, and "other pitfalls" (Bowman and Bastedo 2010; Bougnol and Dulá 2014; Collins and Park 2016; Johnes 2018; Moed 2017; Olcay and Bulu 2017; Safón 2019; Shehatta and Mahmood 2016; Uslu 2020).
Article
The economic and social need to spread knowledge between universities and industry has become increasingly evident in recent years. This paper presents a ranking based partly on research and knowledge transfer indicators from U-Multirank data but using data-driven weights. The choice of specific weights and the comparison between ranks remain a sensitive topic. A restricted version of the benefit of the doubt method is implemented to build a new university ranking that includes an endogenous weighting scheme. Furthermore, a novel procedure is presented to compare the principal method with U-Multirank. To the best of my knowledge, the U-Multirank data set has not previously been applied to build alternative rankings that include research and knowledge transfer dimensions. A significant result arises from the benefit of the doubt: the highest importance weight is assigned to the co-publications with industrial partners and interdisciplinary publication indicators. This paper helps fill the existing gap regarding the role of co-publications with industrial partners in university efficiency around the world.
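A minimal sketch of the unrestricted benefit-of-the-doubt idea underlying such an endogenous weighting scheme is shown below: each university receives the most favourable non-negative indicator weights, subject to no university scoring above 1 with those weights. The indicator matrix is invented, and the paper's restricted version would add further bounds on the weights.

import numpy as np
from scipy.optimize import linprog

Y = np.array([            # rows: universities, columns: normalized indicators (hypothetical)
    [0.9, 0.4, 0.7],
    [0.5, 0.8, 0.6],
    [0.3, 0.3, 0.9],
])

def bod_score(k: int) -> float:
    # maximize sum_j w_j * Y[k, j]  subject to  Y @ w <= 1  and  w >= 0
    res = linprog(c=-Y[k], A_ub=Y, b_ub=np.ones(len(Y)),
                  bounds=[(0, None)] * Y.shape[1])
    return -res.fun

print([round(bod_score(k), 3) for k in range(len(Y))])   # frontier units score 1.0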
... A key factor that makes the performance, productivity and quality of an HEI visible is its position in the World University Rankings (WUR), such as QS Top Universities, ARWU, Webometrics or THE. These rankings shape the management of universities through the use of indicators and the application of different methodologies for assessing intellectual capital (Shehatta and Mahmood, 2016). ...
Article
Full-text available
The Intellectual Capital (IC) of Higher Education Institutions (HEIs), as the main intangible asset for increasing knowledge and advancing science, is measured by different World University Rankings (WUR). The objective of this paper is to compare the methodologies used by four WUR: ARWU, THE, QS and Webometrics, in order to establish the main indicators that make it possible to measure IC. The 2018 ranking data of 47 public HEIs from Spain and 39 from Latin America, the latter belonging to 10 countries of the region, were consolidated. Analyzing the information by means of contingency tables and cross-tabulation of variables showed that the indicators common to the four WUR were: employability, retention and graduation rates, scientific production of articles and books/chapters, internationalization of students and faculty, and postgraduate training (master's and doctoral programmes). The analysis places 9 Spanish HEIs together with 8 Latin American ones among the 500 best in the world, based on their average values across the four WUR, and it establishes the need to improve the dissemination of the achievements, recognitions and products obtained at the institutional level by the HEIs.
... elite status (Jacob & Meek, 2013). In our analysis, a dummy variable equal to 1 for universities included in the first 50 positions of the yearly ARWU classification is included to measure university prestige, because it has been reported that the top 50 universities in a ranking define the common features of a world-class university (see Shehatta & Mahmood, 2016). ...
Article
Full-text available
As the global mobility of researchers increases, many of whom are supported by national funding agencies’ mobility schemes, there is growing interest in understanding the impact of this overseas mobility on knowledge production and networking. This study addresses a relatively understudied mobility—the temporary international mobility of PhD students in STEM fields—and its relation to the establishment of research collaborations between mobile PhD students and researchers at the host university and with other researchers overseas. First, we find that 55% of the participants established relevant international collaborations (i.e., with hosting supervisors and/or others at the hosting university), and we explore these collaboration patterns in detail by taking a novel research propagation approach. Second, we identify features of the visiting period that influence the formation of research collaborations abroad, such as the prestige of the host university, the duration of the international mobility period, the cultural distance, and the number of peer PhD students at the host university. Previous research collaborations between the home and host supervisors are also found to play a crucial role in research collaboration development. Age at the time of mobility is not found to be particularly relevant. We find that female PhD students are less able to benefit from collaborative research efforts than male students. These findings advance the knowledge of global research networks and provide important insights for research funding agencies aiming to promote international research mobility at the doctoral level. Peer Review https://publons.com/publon/10.1162/qss_a_00096
... We constructed an ordinal variable that indicated whether the institute a respondent was affiliated with was part of the WUR's top 20, top 40, top 60, top 80 or top 100. Many competing university rankings exist, and they use different methodologies, but they tend to have similar outcomes (Aguillo et al. 2010; Saisana et al. 2011; Shehatta and Mahmood 2016). We chose the WUR ranking because of its prominence. ...
Article
Full-text available
Although many studies have been conducted on the drivers of and barriers to research collaborations, current literature provides limited insights into the ways in which individual researchers choose to engage in different collaborative projects. Using a choice experiment, we studied the factors that drive this choice using a representative sample of 3145 researchers from Western Europe and North America who publish in English. We find that for most researchers, the expected publication of research in scientific journals deriving from a project is the most decisive factor driving their collaboration choices. Moreover, most respondents prefer to collaborate with other partners than industry. However, different factors’ influence varies across groups of researchers. These groups are characterised as going for the ‘puzzle’ (60% of the sample), the ‘ribbon’ (33%) or the ‘gold’ (8%), i.e., primarily oriented toward intellectual goals, recognition or money, respectively. This heterogeneity shows that a combination of interventions will be required for governments aiming to promote university–industry collaborations.
... The first, and the main, problem is the discrepancy between ranking systems [25]. Study [5] illustrated that the intersection of indicators across several rankings is small, which means that the ranking systems evaluate different areas of education and none of them gives a comprehensive analysis of universities. ...
Thesis
Full-text available
The research is devoted to the development of a performance ranking of Russian universities included in world university rankings, using mathematical approaches to assessing efficiency: data envelopment analysis and stochastic frontier analysis. The study covers a full ranking development cycle, from collecting data from websites using web scraping to the application of the models. An important component of the analysis is the demonstration of the inconsistency of world university rankings using statistical methods, as well as exploratory data analysis and the use of principal component analysis for dimensionality reduction. Programming languages such as Python and R were used to solve these problems.
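As an illustration of the dimensionality-reduction step mentioned in the abstract, a generic PCA pass over standardized ranking indicators might look as follows; the indicator names and data are placeholders, not the thesis data.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = pd.DataFrame(
    rng.normal(size=(30, 5)),                     # 30 universities, 5 made-up indicators
    columns=["teaching", "research", "citations", "industry", "international"],
)
Z = StandardScaler().fit_transform(indicators)    # standardize before PCA
pca = PCA(n_components=2).fit(Z)
print(pca.explained_variance_ratio_)              # share of variance kept by each component
components = pca.transform(Z)                     # universities projected into a 2-D space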
... University rankings have attracted increasing interest from several groups including students, parents, institutions, academics, policy makers, media outlets, etc. in the past decade. They are used in making decisions by each such group for different purposes (Dill and Soo 2005;Bowman and Bastedo 2011;Hazelkorn 2008;Tofallis 2012;Shehatta and Mahmood 2016). ...
Article
The reproducibility of the results of university ranking systems and the problem of how to climb in rankings have been discussed in the literature for a long time. We created the simulation software WURS which can be a useful tool to shed light on these discussions. In this paper, we present the features and usage of WURS.
Article
The study examines and compares the indicators used by six global ranking systems, as well as the weights assigned to the indicators used by each ranking system. In the comparison, the inter-ranking correlation between the ranking systems is determined by Spearman's rank correlation method, primarily to understand the similarities between different ranking systems based on the ranks or positions held by different institutions. The study frames all the indicators from the various ranking systems within a common ranking framework for establishing the correlation matrix. The framing of seven distinct ranking characteristics or indicators aids in understanding the weighting schemes used by the six ranking methods. The work then investigates the reasons for differing Spearman's rank correlation values by exploring the different ways of calculating the ranking indices adopted by different ranking agencies. The weight distribution across the indicators is another factor that needs to be taken into consideration when observing the correlation between two ranking systems. Karl Pearson's statistical measure is applied to find the similarities of the weights associated with each indicator in the different ranking systems. The outcome of this research provides a comprehensive view for countries like India that participate in different international ranking systems and seek to improve their positions. Different test cases and in-depth analyses show that it is not an easy task for institutes to perform better in all the ranking systems.
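The two statistical measures named in the abstract can be reproduced in a few lines of Python; the ranks and indicator weights below are illustrative only, not the study's data.

from scipy.stats import spearmanr, pearsonr

# Ranks of the same eight institutions in two ranking systems (hypothetical).
ranks_a = [1, 2, 3, 4, 5, 6, 7, 8]
ranks_b = [2, 1, 4, 3, 7, 5, 8, 6]
rho, p_rho = spearmanr(ranks_a, ranks_b)

# Weights (in %) the two systems assign to seven common indicator categories (hypothetical).
weights_a = [30, 30, 20, 10, 5, 2.5, 2.5]
weights_b = [25, 35, 15, 10, 7.5, 5, 2.5]
r, p_r = pearsonr(weights_a, weights_b)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}), Pearson r = {r:.2f} (p = {p_r:.3f})")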
Article
University ranking systems use various single and multi-faceted methodologies. Despite being efficient and less biased, the former fails to cover all academic performance dimensions, requiring solutions to improve its effectiveness. Previous studies found universities' ranks to be partly correlated with their social presence and activities via their official accounts. However, altmetrics have a comparatively more diversified and all-inclusive nature. Moreover, altmetrics are assumed to reflect various impact types and therefore represent different academic performance dimensions. This study attempted to discover whether altmetrics aggregated at the university level can bridge the gap between single and multi-faceted rankings. Focusing on Leiden and Nature Index as single-faceted, and Times Higher Education and Quacquarelli Symonds as multi-faceted rankings, it explored a sample of the universities jointly ranked by the systems in 2017 and 2020. Their overall scores in Times Higher Education and Quacquarelli Symonds were regressed against their Leiden crown indicator (PP top 10%), Article-Weighted Fractional Count in Nature Index, Altmetric Attention Score, tweets, and Mendeley readership. According to the results, the universities' scores in Leiden and Nature Index predicted theirs in Quacquarelli Symonds (33.5% and 21.4%, respectively) and Times Higher Education (63.7% and 33.4%, respectively). Altmetric Attention Score, tweets, and Mendeley readership boosted the predictions, implying their ability to reflect academic performance. However, they differed in their effects' strengths, importance, and directions, which may result from their differences in the impact realms and values for different social sections, which are not necessarily proportional to the corresponding dimensions' weights in the rankings.
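The regression design described above can be sketched as an ordinary least squares fit of an overall score on bibliometric and altmetric predictors; the synthetic data and coefficients below are purely illustrative, not the study's estimates.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
pp_top10 = rng.uniform(5, 25, n)          # Leiden PP(top 10%), hypothetical values
aas = rng.uniform(0, 5000, n)             # Altmetric Attention Score, hypothetical values
mendeley = rng.uniform(0, 20000, n)       # Mendeley readership, hypothetical values
overall = 2.0 * pp_top10 + 0.002 * aas + 0.0005 * mendeley + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([pp_top10, aas, mendeley]))
model = sm.OLS(overall, X).fit()
print(model.rsquared)                     # share of variance in the overall score explained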
Article
Full-text available
Background: The internationalization of universities allows the exchange of knowledge, experiences, attitudes, and cultures across geographical borders, which leads to benefits such as visibility, human resource development, quality improvement and revenue generation for universities. Therefore, the assessment of universities is very important in terms of internationalization. The purpose of this study was to identify the indicators of internationalization assessment for medical universities in a logical framework. Methods: The reporting of this scoping review conforms to the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Review checklist (PRISMA-ScR). Articles were retrieved through the search of related keywords in databases including Web of Science, PubMed, Scopus, Science Direct, and Google Scholar from January 2000 to October 2021 and by searching the references of retrieved articles. After applying the inclusion criteria, 36 papers were selected from a total of 1264. Data analysis is underpinned by the Ritchie and Spencer five-step framework. Results: 102 indicators have been identified and organized in the framework of IPO, which has provided input, process and output indicators in the educational, research, and management dimensions. Most indicators have been classified in the "Education" dimension (n=40), which consists of 6 inputs, 14 processes and 20 outputs. The "Research" dimension consists of 3 inputs, 9 processes and 12 outputs, and the "Management" dimension consists of 13 inputs, 16 processes and 9 outputs. Conclusion: There is no single set of target indicators for the internationalization of all medical universities. Therefore, the selection of target indicators for medical universities to proceed toward internationalization depends on the strengths and weaknesses of universities in each dimension, as well as the feasibility of further ambition according to the national context. Also, the identified indicators are mainly in the four areas of facilities management, visibility, marketing, and networking.
Article
Full-text available
Nowadays, university rankings are used to assess all aspects of universities. Given the impact of university rankings on assessing the performance of universities, this research aims to explore university rankings in depth. University rankings are considered contributors to assessing university performance. Previous literature identified different types of goals, such as output and support goals, and advised aligning these two types of goals. Universities have different goals, yet university rankings still measure all universities against the same criteria. Accordingly, this research used the most widely used university ranking in the literature, the QS world ranking dataset. Unsupervised machine learning was then performed to cluster the universities. The results divided the universities among four clusters. This study helps in assigning each university to the appropriate cluster and helps university managers define the goals of their universities. The study recommends that universities align their support goals with their output goals, develop international goals and strategies, and support research by supporting their scholars. This study's novelty lies in connecting university rankings and goals using management analytics in education.
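A generic version of the clustering step might look like the following k-means sketch on standardized QS-style indicator scores; the number of indicators and the data are assumptions, and only the four-cluster choice comes from the abstract.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
scores = rng.uniform(0, 100, size=(200, 6))    # hypothetical indicator scores for 200 universities
Z = StandardScaler().fit_transform(scores)     # put indicators on a common scale

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)
labels = kmeans.labels_                        # cluster membership for each university
print(np.bincount(labels))                     # number of universities per cluster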
Book
Full-text available
The monograph develops mathematical methods and tools for the multivariate analysis and forecasting of university indicators in world rankings. Global university rankings are treated as multidimensional objects: as the set of universities included in a ranking with their overall scores, and as an individual university ranking whose overall score is computed from a set of indicators. The structural dynamics and stability of the rankings, and their sensitivity to changes in the indicator weights, are studied. The temporal rankings of universities are shown to be stable and only weakly sensitive to changes in the indicator weights in their upper parts. Layer-by-layer correlations between indicator values and the overall ranking scores make it possible to identify the most sensitive indicators, to which university management should pay attention first. For different university rankings and countries, a correlation is obtained between the ranks in these rankings and the tuition fees charged to international students, and the income such students bring to the host country as a share of foreign direct investment is estimated. For forecasting purposes, several threshold methods are developed that make it possible to state with a high degree of reliability whether a university will enter the TOP-N of a given ranking. As forecasting dynamic models, the work uses time-series analysis as well as qualitative methods for analyzing dynamical systems, illustrated with Lotka-Volterra equations adapted to the research problem and solved numerically by the Runge-Kutta method.
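As a purely illustrative companion to the dynamic models mentioned above, the sketch below integrates a standard two-variable Lotka-Volterra system with SciPy's RK45 Runge-Kutta solver; the coefficients and initial values are not taken from the monograph.

import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.1, c=1.5, d=0.075):
    # Classical two-population Lotka-Volterra right-hand side (illustrative parameters).
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

sol = solve_ivp(lotka_volterra, t_span=(0, 30), y0=[10, 5],
                method="RK45", dense_output=True)
t = np.linspace(0, 30, 300)
x, y = sol.sol(t)            # trajectories of the two interacting quantities
print(x[-1], y[-1])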
Conference Paper
This study compares 12 machine learning algorithms for predicting university rankings. We collected the scores of 316 universities in the QS World University Rankings across 9 countries, namely Australia, Brazil, the United States, India, Germany, France, China, Japan, and Russia. In these data, only the values of top universities with scores above a certain threshold are available, which means that for the other universities the scores are censored. These scores are predicted from data that are publicly available about the universities; examples are the number of publications or the total number of students. Cross-validation with the concordance index is used for model comparison. It is found that gradient boosting machines and random survival forests perform best on this application.
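Because only scores above a cut-off are published, a pair of universities can be ordered whenever at least one score is observed (an observed score lies above the cut-off and therefore above any censored one); a hand-rolled concordance index for that censoring pattern might look as follows, with invented data that are not the study's.

import numpy as np

def concordance(score, observed, pred):
    # Fraction of comparable pairs whose predicted ordering matches the true ordering.
    concordant, comparable = 0, 0
    n = len(score)
    for i in range(n):
        for j in range(i + 1, n):
            if not (observed[i] or observed[j]):
                continue                                     # two censored scores cannot be ordered
            if observed[i] and observed[j]:
                hi, lo = (i, j) if score[i] > score[j] else (j, i)
            else:
                hi, lo = (i, j) if observed[i] else (j, i)   # observed score beats censored score
            comparable += 1
            concordant += pred[hi] > pred[lo]
    return concordant / comparable

score    = np.array([82.0, 75.0, 60.0, 55.0, 55.0])   # 55.0 marks the publication cut-off
observed = np.array([True, True, True, False, False])
pred     = np.array([80.0, 78.0, 58.0, 50.0, 61.0])
print(round(concordance(score, observed, pred), 2))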
Article
Objective: Altmetrics are claimed to measure the scientific, societal, educational, technological and economic impacts of science. They have some of these dimensions in common with university ranking and evaluation systems. Their results are, therefore, expected to be partially convergent with those systems'. Given the importance of the scientific and non-scientific impacts of science, this study investigated the correlations of universities' altmetrics with their total and dimensional scores in Nature Index, Leiden, Times Higher Education (THE) and Quacquarelli Symonds (QS). Methodology: Following a correlational design, it explored an available sample of the universities commonly ranked in these systems in 2017. The data were collected from online documents using checklists and analysed with the Spearman correlation. As the Altmetric Attention Score (AAS) is efficient in that it integrates several indicators into a single one, it was used as the proxy of the universities' social performance. Findings: The universities showed significant positive correlations between their AASs and their performance scores at the total and dimensional levels, except for industry income in THE, with an insignificant correlation, and the proportion of collaborative publications within 100 km in Leiden, with an inverse correlation. The correlations ranged from weak to marginally strong. Conclusion: The positive relationships between the universities' performance and AASs signify that there are some similarities in what they measure. However, the correlations were only of weak-to-marginally-strong magnitude, implying that the metrics differ in what they measure. The findings contribute to existing knowledge by providing some evidence of convergence between university-level altmetrics and university performance in various dimensions.
Article
Full-text available
Due to the imbalance between Asian and Western countries in terms of higher education development and pressure from global competition, universities in several East Asian countries have striven to become world-class universities (WCUs) by actively assessing themselves using various global ranking systems and subsequently investing in key performance indicators. Numerous scholars have suggested that for these East Asian catch-up universities (EACUs), independently improving the elements related to high-weight indicators could produce short-term increases in ranking performance; however, this approach is not conducive to sustainable development. In addition, little is currently understood regarding sustainable development strategies for developing EACUs into WCUs. This study proposes a systematic evaluation model for self-assessment and the creation of strategies to transform EACUs into sustainable WCUs. The fuzzy Delphi method was used to determine criteria for a new evaluation framework, and the decision-making trial and evaluation laboratory method was employed to construct the influential relationships among the criteria. Two cases were then selected to demonstrate the superiority of the model for creating sustainable development strategies for EACUs. This study provides a systematic perspective and a useful tool for decision-makers at EACUs to achieve sustainable development goals.
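For orientation, the DEMATEL step referred to above typically normalizes a direct-influence matrix among criteria and computes the total-relation matrix T = X(I - X)^(-1), from which each criterion's prominence (D + R) and net cause/effect role (D - R) are read off. The sketch below uses an invented four-criterion influence matrix and one common normalization, not the study's expert judgments.

import numpy as np

direct = np.array([            # expert-judged direct influence among four criteria (0-4 scale, invented)
    [0, 3, 2, 1],
    [2, 0, 3, 2],
    [1, 2, 0, 3],
    [2, 1, 2, 0],
], dtype=float)

X = direct / direct.sum(axis=1).max()            # normalize by the largest row sum
T = X @ np.linalg.inv(np.eye(len(X)) - X)        # total-relation matrix
D, R = T.sum(axis=1), T.sum(axis=0)              # influence given and received by each criterion
print(np.round(D + R, 2), np.round(D - R, 2))    # prominence and net cause/effect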
Article
Full-text available
Global rankings help boost the international reputation of universities, which thus attempt to achieve good positions on them. These rankings attract great interest each year and are followed attentively by stakeholders in higher education. This paper investigates the trajectory of Spanish universities in the ARWU and THE rankings over the last 5 years using the dynamic biplot technique to study the relationship between a multivariate dataset obtained at more than one time point. The results demonstrate that Spanish universities achieve low positions on international rankings when analyzed using this multivariate and dynamic approach. Indeed, only a small percentage occupy good positions in both studied rankings and stand out in terms of some of the indicators, whereas most achieve weak scores in the global context. Spanish universities should attempt to improve this situation, since the prestige resulting from a good position on these lists will always be beneficial in terms of the visibility of both the universities themselves and the whole Spanish university system.
Article
Full-text available
The objective of this study is to investigate the research productivity of Sri Lankan state universities as depicted in reputed international university ranking systems during 2015-2020 and to identify the areas that can be used to develop the research productivity of the state universities. Research-related scores of the Sri Lankan state universities from 2015-2020 in four ranking systems (THE, QS, SIR, and URAP) were analysed. The study established that research productivity, impact, and collaboration are the major aspects considered by the ranking systems. Only a few universities are ranked, and the scores have considerable scope for improvement. Several recommendations are made on how university librarians can support the improvement of research-related rankings. This is the first study of the research productivity scores of Sri Lankan state universities based on international ranking systems. Hence the findings will be useful for university policymakers in Sri Lanka as well as in other countries with similar educational contexts.
Chapter
The present paper envisages an analysis of the main goals and future directions of action in the higher education sector at the world level. In recent years higher education institutions have applied professional management principles and guided their activities using business-specific strategic tools. Starting from the fact that higher education institutions can learn from the best, the purpose of the paper is to analyze the main directions included in the mission statements of the top 150 higher education institutions at the world level, according to rankings. The two main objectives of the analysis are: (a) to identify the main directions included in the mission statements of the best universities in the world and (b) to identify commonalities and differences among those. The first part of the paper focuses on theoretical insights into the subject: mission statements and strategic management, and their peculiarities in the field of higher education. The methodology used is qualitative content analysis based on documentation through the study of the public information disseminated by universities on their websites. The results of the study illustrate that one of the main themes included in many of the top 150 higher education institutions in the world is excellence. There are also differences in the scope of the declared mission statements, as some universities see their activity applying to the local (town/state) community, others see their roles at the national level, some declare themselves as acting in the interest of the whole world, and some see their role at all three levels.
Article
Full-text available
A new technique called Method of Analysis of Leagues (MethALeague) is proposed for comparing performance of higher education institutions measured by different assessment methods. The MethALeague uses the convolution operations from the theory of small-group decision making to create aggregate charts of university leagues based on the performance indicators obtained with different assessment techniques. Specifically, researchers are given the opportunity to bring widely divergent university performance indicators into unified assessment charts and carry out comparative analysis of different assessment approaches. The MethALeague was applied successfully to compare the performance indicators of the Project 5–100 universities reflected in three major global rankings, Academic Ranking of World Universities, QS World University Rankings, and Times Higher Education World University Rankings. A formalized concept of “world ranking” proposed in the article makes it possible to visualize the performance dynamics of Russia’s top universities and compare it to that of the top universities in other countries (United States, Great Britain, Australia, Germany, and China). Suggestions are made on using a modified version of the MethALeague at the national level to analyze the results of university performance monitoring and compare them to the universities’ global ranking positions. The method described in the article could be applied by educational authorities, researchers and higher education institutions to determine the frameworks of strategic development, both for specific universities and for Russia’s higher education system as a whole.
Article
This paper studies the influence of funding on the positions reached by the top 300 universities (institutions) of the Quacquarelli Symonds World University Ranking 2018. The geographical location, ownership (public/private), size and financial resources of these top universities are examined. Our analysis shows that public funding is critical for up to 84% of these top universities, especially the European ones. Moreover, funding explains up to 51% of the variability in the positions attained by the universities in the ranking. The influence of funding on the improvement of the universities' ranking score is also studied. We provide evidence that the top 100 universities have double the funds of the universities located in positions 101–200 and triple those of the universities in positions 201–300 of the ranking. This is an exceptional exercise in relating funding and excellence at the level of individual universities, a relationship usually examined for university systems (countries) rather than for the universities themselves.
Article
Global university rankings have become a critical factor in the higher education sector, engendering increasing interest and exerting a notable influence over a wide variety of stakeholders. They are presented to different audiences as tools that evaluate and rank universities according to quality. However, some authors are of the opinion that rankings largely express reputational factors. This article presents a model of the intra- and inter-ranking relationships from the perspective of reputation, along with an empirical study on two of the most influential rankings: the Academic Ranking of World Universities and the Times Higher Education World University Rankings. Data from these two rankings between 2010 and 2018, and the application of ordinal regressions, provide evidence that both rankings are mutually influential, generating intra- and reciprocal reputational effects over time.
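An ordinal-regression set-up of the general kind used in such studies can be sketched with statsmodels' OrderedModel; the synthetic covariates, band construction and specification below are assumptions for illustration, not the article's actual model.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 300
other_rank_band = rng.integers(0, 3, n)            # hypothetical band in the other ranking (0 = best)
reputation = rng.normal(50, 10, n)                 # hypothetical reputation-survey score
latent = -1.2 * other_rank_band + 0.05 * reputation + rng.normal(0, 1, n)
band = np.digitize(latent, np.quantile(latent, [1/3, 2/3]))   # ordinal outcome with three bands

y = pd.Series(band).astype(pd.CategoricalDtype(categories=[0, 1, 2], ordered=True))
X = pd.DataFrame({"other_rank_band": other_rank_band, "reputation": reputation})
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.params)                                  # coefficients plus threshold parameters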
Article
Full-text available
There are a variety of ranking systems for universities across the different continents of the world. The majority of world ranking systems pay special attention to the evaluation of universities and higher education institutions at the national and international level. This paper studies the similarities and status of top Asian universities in the lists of top 200 universities produced by these world ranking systems. Findings show that there are some parallels among these international rankings. For example, correlations were found between the QS and Webometrics rankings (R = 0.78), the QS and THE rankings (R = 0.53), and the Shanghai and HEEACT rankings (R = 0.58); the highest correlation belongs to QS-Webometrics (R = 0.78). The findings show no evidence that the country of origin of a ranking system biases the ranks of that country's universities relative to other countries. For instance, the QS ranking from the United States classifies many universities of China and Japan as top Asian universities, the HEEACT Ranking System of Taiwan includes just one university of Taiwan in the high-ranking category (as other rankings do), and the Shanghai Ranking of China assigns a lower grade to universities of China and Hong Kong in comparison with the QS ranking of the USA. Finally, some suggestions are made to improve the usefulness of the ranking systems in order to advance higher education worldwide, along with recommendations for combining the indicators of these ranking systems into a more comprehensive one.
Article
Full-text available
Recent interest in university rankings has led to the development of several ranking systems at national and global levels. Global ranking systems tend to rely on internationally accessible bibliometric databases and reputation surveys to develop league tables at a global level. Given their access to and in-depth knowledge of local institutions, national ranking systems tend to include a more comprehensive set of indicators. The purpose of this study is to conduct a systematic comparison of national and global university ranking systems in terms of their indicators, coverage and ranking results. Our findings indicate that national rankings tend to include a larger number of indicators that primarily focus on educational and institutional parameters, whereas global ranking systems tend to have fewer indicators, mainly focusing on research performance. Rank similarity analysis between national rankings and global rankings filtered for each country suggests that, with the exception of a few instances, global rankings do not strongly predict national rankings.
Book
Full-text available
University rankings have gained popularity worldwide because they appear to fulfil demands by students, parents, policymakers, employers, and other stakeholders for information and transparency. They are often equated with quality, and are now a significant factor shaping institutional reputation. Today, there are eleven global rankings, experiencing varying degrees of popularity, reliability and trustworthiness, and national rankings in over 40 countries. Despite their popularity, how much do we really know and understand about the influence and impact of rankings? This book is the first comprehensive study of rankings from a global perspective. Based on original international surveys and interviews with universities and stakeholders, Ellen Hazelkorn draws together a wealth of international experience to chronicle how rankings are helping reshape higher education in the age of globalization. Written in an easy but authoritative style, the book makes an important contribution to our understanding of the rankings phenomenon. It is essential reading for policy makers, institutional leaders, managers, advisors, and scholars.
Article
Full-text available
Global rankings are creating a furore wherever or whenever they are published or mentioned. They have become a barometer of global competition measuring the knowledge-producing and talent-catching capacity of higher education institutions. These developments are injecting a new competitive dynamic into higher education, nationally and globally, and encouraging a debate about its role and purpose. As such, politicians regularly refer to them as a measure of their nation's economic strength and aspirations, universities use them to help set or define targets mapping their performance against the various metrics, while academics use rankings to bolster their own professional reputation and status. Based on an international survey (2006) and extensive interviews in Germany, Australia and Japan (2008), this paper provides a comparative analysis of the impact and influence of rankings on higher education and stakeholders, and describes institutional experiences and responses. It then explores how rankings are influencing national policy and shaping institutional decision making and behaviour. Some changes form part of the broader modernisation agenda, improving performance and public accountability, while others are viewed as perverse. Their experiences illustrate that policy does matter.
Article
Full-text available
Ten years after the first global rankings appeared, it is clear that they have had an extraordinary impact on higher education. While there are fundamental questions about whether rankings measure either quality or what is meaningful, they have succeeded in exposing higher education to international comparison. Moreover, because of the important role higher education plays as a driver of economic development, rankings have exposed both an information deficit and national competitiveness. Accordingly, both nations and institutions have sought to maximise their position vis-à-vis global rankings, with positive and perverse effects. Their legacy is evident in the way rankings have become an implicit, and often explicit, reference point for policymaking and higher education decision-making, and have reinforced an evaluative state's over-reliance on quantitative indicators to measure quality. They are embedded in popular discourse, and have informed the behaviour of many stakeholders, within and outside the academy. This paper reflects on three inter-related issues: i) it considers the way rankings have heightened policy and investment interest in higher education, ii) it discusses whether the modifications to rankings have resolved some of the questions about what they measure, and iii) it looks at how rankings have influenced stakeholder behaviour. Finally, the paper reflects on what we have learned and some outstanding issues.
Article
Full-text available
The main objective of this article is to examine the role and usefulness of public information mechanisms, such as the rankings and similar classification instruments that are increasingly relied upon to measure and compare the performance of tertiary education institutions. The article begins with a typology of ranking instruments used for public accountability purposes, followed by a discussion of the political economy of the ranking phenomenon. It then attempts to assess their respective merits and disadvantages and makes some recommendations for policy-makers, tertiary education institutions, and the public at large.
Article
Full-text available
This paper draws on the results of an international survey of HE leaders and senior managers, which was supported by the OECD Programme on Institutional Management for Higher Education (IMHE) and the International Association of Universities (IAU). It focuses on how HEIs are responding to league tables and rankings (LTRS), and what impact or influence — positive or perverse — they are having on institutional behaviour, decision-making and actions. The growing body of academic research and journalist reportage is referenced to contextualize this international experience. The paper shows that while HE leaders are concerned about the impact of rankings, they are also increasingly responsive and reactive to them. In addition, key stakeholders use rankings to influence their decisions: students use rankings to 'shortlist' university choice, and others make decisions about funding, sponsorship and employee recruitment. Rankings are also used as a 'policy instrument' to underpin and quicken the pace of HE reform. Higher Education Policy (2008) 21, 193–215. doi:10.1057/hep.2008.1
Article
Full-text available
Global university rankings have arrived, and though still in a process of rapid evolution, they are likely to substantially influence the long-term development of higher education across the world. The inclusions, definitions, methods, implications and effects are of great importance. This paper analyses and critiques the two principal ranking systems prepared so far, the research rankings prepared by Shanghai Jiao Tong University and the composite rankings from the Times Higher Education Supplement. It goes on to discuss the divergence between them in the performance of Australian universities, draws attention to the policy implications of rankings, and canvasses the methodological difficulties and problems. It concludes by advocating the system of university comparisons developed by the Centre for Higher Education Development (CHE) in Germany. This evades most of the problems and perverse effects of the other ranking systems, particularly reputational and whole-of-institution rankings. It provides data more directly useful to and controlled by prospective students, and more relevant to teaching and learning.
Article
Full-text available
Educationalists are well able to find fault with rankings on numerous grounds and may reject them outright. However, given that they are here to stay, we could also try to improve them wherever possible. All currently published university rankings combine various measures to produce an overall score using an additive approach. The individual measures are first normalized to make the figures ‘comparable’ before they are combined. Various normalization procedures exist but, unfortunately, they lead to different results when applied to the same data: hence the compiler’s choice of normalization actually affects the order in which universities are ranked. Other difficulties associated with the additive approach include differing treatments of the student to staff ratio, and unexpected rank reversals associated with the removal or inclusion of institutions. We show that a multiplicative approach to aggregation overcomes all of these difficulties. It also provides a transparent interpretation for the weights. The proposed approach is very general and can be applied to many other types of ranking problem. Keywords: League tables, Performance measure, University rankings
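The contrast between additive and multiplicative aggregation can be seen on a toy example: with identical weights, the two schemes can already reverse the order of three hypothetical universities (all scores are invented).

import numpy as np

scores = np.array([            # two normalized indicators per university (hypothetical)
    [0.95, 0.40],
    [0.70, 0.65],
    [0.60, 0.75],
])
w = np.array([0.6, 0.4])

additive = scores @ w                              # weighted arithmetic mean
multiplicative = np.prod(scores ** w, axis=1)      # weighted geometric mean
print(np.argsort(-additive), np.argsort(-multiplicative))   # the orderings differ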
Article
Full-text available
Economic and cultural globalisation has ushered in a new era in higher education. Higher education was always more internationally open than most sectors because of its immersion in knowledge, which never showed much respect for juridical boundaries. In global knowledge economies, higher education institutions are more important than ever as mediums for a wide range of cross-border relationships and continuous global flows of people, information, knowledge, technologies, products and financial capital. Even as they share in the reinvention of the world around them, higher education institutions, and the policies that produce and support them, are also being reinvented. For the first time in history every research university is part of a single world-wide network and the world leaders in the field have an unprecedented global visibility and power. Research is more internationalised than before and the mobility of doctoral students and faculty has increased. The specifically global element in academic labour markets has gained weight, especially since the advent of global university rankings. This working paper explores the issues for national policy and for individual institutions. Part I provides an overview of globalisation and higher education and the global responses of national systems and individual institutions of higher education. Part II is focused on certain areas of policy with a strong multilateral dimension: Europeanisation, institutional rankings and typologies and cross-border mobility.
Article
Full-text available
Recently there has been increasing interest in university rankings. Annual rankings of world universities are published by QS for the Times Higher Education Supplement, by Shanghai Jiao Tong University, by the Higher Education Evaluation and Accreditation Council of Taiwan and, for rankings based on Web visibility, by the Cybermetrics Lab at CSIC. In this paper we compare the rankings using a set of similarity measures. For the rankings that have been published for a number of years we also examine longitudinal patterns. The rankings limited to European universities are compared with the ranking of the Centre for Science and Technology Studies at Leiden University. The findings show that there are reasonable similarities between the rankings, even though each applies a different methodology. The biggest differences are between the rankings provided by the QS-Times Higher Education Supplement and the Ranking Web of the CSIC Cybermetrics Lab. The highest similarities were observed between the Taiwanese and the Leiden rankings of European universities. Overall, the similarities increase when the comparison is limited to the European universities.
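Two of the simplest similarity measures for such comparisons, the overlap between two top-N lists and Spearman's rho on the institutions they share, can be computed as follows; the lists are invented and are not the paper's measures.

from scipy.stats import spearmanr

top_a = ["U1", "U2", "U3", "U4", "U5", "U6"]     # ordered top-N list of ranking A (hypothetical)
top_b = ["U2", "U1", "U5", "U7", "U3", "U8"]     # ordered top-N list of ranking B (hypothetical)

common = [u for u in top_a if u in top_b]        # institutions appearing in both lists
overlap = len(common)
rho, _ = spearmanr([top_a.index(u) for u in common],
                   [top_b.index(u) for u in common])
print(overlap, round(rho, 2))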
Article
Full-text available
The global expansion of access to higher education has increased demand for information on academic quality and has led to the development of university ranking systems or league tables in many countries of the world. A recent UNESCO/CEPES conference on higher education indicators concluded that cross-national research on these ranking systems could make an important contribution to improving the international market for higher education. The comparison and analysis of national university ranking systems can help address a number of important policy questions. First, is there an emerging international consensus on the measurement of academic quality as reflected in these ranking systems? Second, what impact are the different ranking systems having on university and academic behavior in their respective countries? Finally, are there important public interests that are thus far not reflected in these rankings? If so, is there a needed and appropriate role for public policy in the development and distribution of university ranking systems and what might that role be? This paper explores these questions through a comparative analysis of university rankings in Australia, Canada, the UK, and the US.
Book
Ten years have passed since the first global ranking of universities was published. Since then, university rankings have continued to attract the attention of policymakers and the academy, challenging perceived wisdom about the status and reputation, as well as quality and performance, of higher education institutions. Their impact and influence have been felt by policymakers, students and parents, employers and other stakeholders, in addition to higher education institutions around the world. They are now a significant factor shaping institutional ambition and reputation, and national priorities. The second edition of Rankings and the Reshaping of Higher Education, now in paperback, brings the story of rankings up to date. It contains new original research and extensive analysis of the rankings phenomenon. Ellen Hazelkorn draws together a wealth of international experience to chronicle how rankings are helping reshape higher education in the age of globalization. Written in an easy but authoritative style, this book makes an important contribution to our understanding of rankings and global changes in higher education. It is essential reading for policymakers, institutional leaders, managers, advisors, and scholars.
Chapter
This chapter explores the issues confronting higher education in Saudi Arabia as it moves towards globalisation of learning and research and the integration of its universities into national economic and social policy frameworks. A particular emphasis is placed on the processes necessary for university engagement with multinational corporations, both inside and outside the Kingdom. The authors stress, however, that international collaboration carries risks as well as rewards. Determining an appropriate development strategy for the higher education sector that balances those risks and rewards is critical to the Kingdom’s future.
Article
Equating the unequal is misleading, and this happens consistently when comparing rankings from different university ranking systems, as the NUT saga shows. This article illustrates the problem by analyzing the 2011 rankings of the top 100 universities in the ARWU, QSWUR and THEWUR ranking results. It also discusses the reasons why the rankings offered by the three systems are grossly inconsistent. A proper reading of the rankings is suggested.
Article
University league tables have burgeoned in the UK in recent years and are being published by an ever-increasing number of newspapers. National and international university league tables are examined. Results of a number of university league tables that appeared in the UK in 1998 are compared and discussed, and some methodological issues are raised. The degree of usefulness of these league tables to prospective students is then considered, and it is argued that, for the majority of potential students, the tables do not provide them with the critical information needed to make an informed choice of where to study. Some of the likely future developments of these league tables are then discussed.
Article
Recently, many organizations have been conducting projects on ranking world universities from different perspectives. These ranking activities have had an impact and caused controversy. This study neither favors using bibliometric indicators to evaluate universities' performance nor is against the idea; we regard these ranking activities as important phenomena and aim to investigate the correlation of different ranking systems, taking a bibliometric approach. Four research questions are discussed: (1) the inter-correlation among different ranking systems; (2) the intra-correlation within ranking systems; (3) the correlation of indicators across ranking systems; and (4) the impact of different citation indexes on rankings. The preliminary results show that 55% of the top 200 universities are covered in all ranking systems. The rankings of ARWU and PRSPWU show a stronger correlation. With the inclusion of another ranking, WRWU (2009–2010), these rankings tend to converge. In addition, the intra-correlation is significant, which means that it is possible to identify ranking indicators with a high degree of discriminativeness or representativeness. Finally, it is found that the use of different citation indexes has no significant impact on the ranking results for the top 200 universities.
Article
This article presents the findings of a survey, conducted on league tables and ranking systems worldwide, including seventeen standard league tables and one non-standard one. Despite the capacity of existing league tables and rankings to meet the public's interest in transparency and information on higher education institutions, ranking systems are still in their "infancy". The authors suggest that, were international ranking schemes to assume a quality assurance role, it would be the global higher education community that would have to identify better practices for data collection and reporting in order to achieve high-quality inter-institutional comparisons.
Article
Notwithstanding criticisms and discussions on methodological grounds, much attention has been and still will be paid to university rankings for various reasons. The present paper uses published information of the 10 top-ranking universities of the world and demonstrates the problem of spurious precision. In view of the problem of measurement error inherent in highly fallible educational data, grouping is suggested as an alternative to ranking to avoid precision which is more imagined than real.
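As a rough illustration of the grouping idea, the short Python sketch below (with a hypothetical band width of 10 and invented institutions, not data from the cited paper) maps exact ranks to coarse bands such as "1-10", which avoids reporting more precision than fallible indicator data can support.

```python
# Minimal sketch: replace exact ranks with coarse bands to avoid spurious precision.
# The band width of 10 and the institutions listed are illustrative assumptions.
def band_label(rank: int, width: int = 10) -> str:
    """Map a 1-based rank to a band label such as '1-10' or '11-20'."""
    lower = ((rank - 1) // width) * width + 1
    return f"{lower}-{lower + width - 1}"

ranked_universities = ["Univ A", "Univ B", "Univ C", "Univ D"]  # ordered by overall score
for position, name in enumerate(ranked_universities, start=1):
    print(f"{name}: rank {position} -> band {band_label(position)}")
```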
Article
Higher education administrators believe that revenues are linked to college rankings and act accordingly, particularly those at research universities. Although rankings are clearly influential for many schools and colleges, this fundamental assumption has yet to be tested empirically. Drawing on data from multiple resource providers in higher education, we find that the influence of rankings depends on constituencies' placement in the higher education field. Resource providers who are vulnerable to the status hierarchy of higher education (college administrators, faculty, alumni, and out-of-state students) are significantly influenced by rankings. Those on the periphery of the organizational field, such as foundations and industry, are largely unaffected. Although rankings are designed largely for stakeholders outside of higher education, their strongest influence is on those within the higher education field. Keywords: College rankings; Strategy; Resource dependence; Organizations; Institutions; Institutional theory
Article
This article examines the role and usefulness of league tables that are increasingly used to measure and compare the performance of tertiary education institutions. The article begins with a general overview and a typology of league tables. It continues with a discussion of the controversies they have generated, including the basis and the range of criticism they have invited, the merit of the indicators they use as measures of quality, and the potential conditions that place universities at an advantage or a disadvantage in ranking exercises. The paper ends with a discussion of the implications of league tables for national policies and institutional practices both in the developing world and in industrial countries.
Article
We look at some of the theoretical and methodological issues underlying international university ranking systems and, in particular, their conceptual connection with the idea of excellence. We then turn to a critical examination of the two best-known international university ranking systems - the Times Higher Education Supplement (THES) World University Rankings and the Shanghai Jiao Tong Academic Ranking of World Universities. We assess the various criteria used by the two systems and argue that the Jiao Tong system, although far from perfect, is a better indicator of university excellence. Based on our assessments of these two systems, we suggest how an ideal international university ranking system might look, concluding with some comments on the uses of ranking systems.
Article
Thesis (Ed. D., Multicultural Education Program)--University of San Francisco, 1986. Includes bibliographical references (leaves 113-118).
The comparison of performance ranking of scientific papers for world universities and other ranking systems
  • M X Huang
Huang, M. X. (2011). The comparison of performance ranking of scientific papers for world universities and other ranking systems. Evaluation Bimonthly, 29, 53-59.
A chance for European universities
  • J Ritzen
Ritzen, J. (2010). A chance for European universities. Amsterdam: University Press.
The German excellence initiative and efficiency change among universities
  • B Gawellek
  • M Sunder
Gawellek, B., & Sunder, M. (2016). The German excellence initiative and efficiency change among universities, 2001-2011. http://econstor.eu/bitstream/10419/126581/1/846676990.pdf. Accessed June 27, 2016.
Analyzing the movement of ranking order in world universities' rankings: How to understand and use universities' rankings effectively to draw up a universities' development strategy
  • Y Q Hou
  • R Morse
  • Z L Jiang
Hou, Y. Q., Morse, R., & Jiang, Z. L. (2011). Analyzing the movement of ranking order in world universities' rankings: How to understand and use universities' rankings effectively to draw up a universities' development strategy. Evaluation Bimonthly, 30, 43-49.
Impact of college rankings on institutional decision making: Four country case studies
IHEP (Institute for Higher Education Policy). (2009). Impact of college rankings on institutional decision making: Four country case studies. http://www.ihep.org/sites/default/files/uploads/docs/pubs/impactofcollegerankings.pdf. Accessed June 26, 2016.