Reproducibility of the Shanghai academic ranking of world universities results

Domingo Docampo
Universidad de Vigo, Atlantic Research Center for Information and Communication Technologies; Campus Universitario, 36310 Vigo, Spain.
Tel.: +34-986-812134
Fax: +34-986-812100
E-mail: ddocampo@uvigo.es
Abstract This paper discusses and copes with the difficulties that arise when trying
to reproduce the results of the Shanghai ranking of world universities. Although the
methodology of the ranking is a little ambiguous with regard to absolute (not relative to
the best performer) scores on the six indicators that compose the ranking, the paper
shows how to develop procedures to compute raw results and final relative scores.
Discrepancies between estimated scores and the results of the Shanghai ranking are
mostly associated with the difficulties encountered in the identification of institutional
affiliations, and are not significant. We can safely state that the results of the Shanghai
ranking are in fact reproducible.
1 Introduction
I have recently paid some attention to the interpretation of the ARWU ranking in
terms of whole higher education systems (Docampo 2011a, 2011b). In those analyses
I took the results directly from the ranking web pages, but felt that an extra effort
was needed in order to fully understand the way scores are assigned to universities. I
was aware of the unsuccessful efforts that other authors had made before to uncover
the secrets of ARWU, as reported in Florian (2007). I was also aware of the fact
that the raw data are adjusted before computing scores: “. . . the distribution of
data for each indicator is examined for any significant distorting effect and standard
statistical techniques are used to adjust the indicator if necessary” (Liu and Cheng
2005). However, I had already uncovered a path to solve the problem when dealing
with the HiCi indicator. Specifically, I had found out that once the highest scoring
institution is identified, relative scores of the other institutions are calculated not in
direct proportion to the top score, but in proportion to the square roots of the
scores (Docampo 2008). This paper will show that the same rule applies
to the remaining indicators. Besides the square root law, there are other indicator-specific considerations that will allow us to accurately reproduce the results of the ARWU ranking.
The rest of the paper is organized as follows. First I cover the background by reproducing the methodology of the ranking. I then analyze the five raw indicators (Alumni, Award, HiCi, N&S, and PUB) and show how the ARWU results can be accurately reproduced. Finally, I deal with the composite indicator (PCP), and close the paper with the conclusions.
2 ARWU Methodology
According to Liu and Cheng (2005), the six indicators that compose the ranking are:
Alumni The number of alumni of an institution winning Nobel Prizes and Fields
Medals. Alumni are defined as those who obtain bachelor, master’s or doctoral
degrees from the institution. Different weights are set according to the periods of
obtaining degrees. The weight is 100% for alumni obtaining degrees in 2001-2010,
90% for the period 1991-2000, 80% for the period 1981-1990, and so on, down to
10% for alumni obtaining degrees in 1911-1920 (a sketch of this decade weighting
follows the list). If a person obtains more than one degree from an institution, the
institution is considered only once.
Award The number of staff of an institution winning Nobel Prizes in Physics, Chem-
istry, Medicine and Economics and Fields Medal in Mathematics. Staff is defined
as those who work at an institution at the time of winning the prize. Different
weights are set according to the periods of winning the prizes. The weight is 100%
for winners in 2001-2010, 90% for winners in 1991-2000, and so on, and finally 10%
for winners in 1911-1920. If a winner is affiliated with more than one institution,
each institution is assigned the reciprocal of the number of institutions. For Nobel
prizes, if a prize is shared by more than one person, weights are set for winners
according to their proportion of the prize.
HiCi The number of highly cited researchers in 21 subject categories. The definition
of categories and detailed procedures can be found at the website of Thomson ISI.
N&S The number of papers published in Nature and Science between 2006 and 2010.
To distinguish the order of author affiliation, a weight of 100% is assigned for corresponding author affiliation, 50% for first author affiliation (second author affiliation
if the first author affiliation is the same as corresponding author affiliation), 25% for
the next author affiliation, and 10% for other author affiliations. Only publications
of “Article” and “Proceedings Paper” types are considered.
PUB Total number of papers indexed in Science Citation Index-Expanded and Social
Science Citation Index in 2010. Only publications of “Article” and “Proceedings
Paper” types are considered. When calculating the total number of papers of an
institution, a special weight of two was introduced for papers indexed in Social
Science Citation Index.
PCP The weighted scores of the above five indicators divided by the number of full-
time equivalent academic staff. If the number of academic staff for institutions
of a country cannot be obtained, the weighted scores of the above five indicators
is used. For ARWU 2011, the numbers of full-time equivalent academic staff are
obtained for institutions in Australia, Austria, Belgium, Canada, China, Czech,
France, Italy, Japan, Netherlands, New Zealand, Norway, Saudi Arabia, Slovenia,
South Korea, Spain, Sweden, Switzerland, UK, USA etc.
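Both Alumni and Award share the same decade-based weighting of years. The following Python sketch reconstructs that rule as I read it from the description above; the function name and the decade arithmetic are my own, not ARWU code.

```python
def decade_weight(year: int) -> float:
    """Weight for a degree/prize year under the ARWU decade rule:
    1.0 for 2001-2010, 0.9 for 1991-2000, ..., 0.1 for 1911-1920."""
    if year < 1911 or year > 2010:
        return 0.0
    decades_back = (2010 - year) // 10  # complete decades before 2001-2010
    return round(1.0 - 0.1 * decades_back, 1)

# A 1953 degree falls in the 1951-1960 decade and gets weight 0.5.
assert decade_weight(2005) == 1.0
assert decade_weight(1995) == 0.9
assert decade_weight(1953) == 0.5
assert decade_weight(1915) == 0.1
```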
3 Nobel Prizes and Fields Medals
To analyze the results on the indicators related to Nobel Prizes and Fields Medals, let
us first point out the three major differences between the Award and Alumni indicators:
1. To compute the Award indicator, only Nobel Prizes in the Sciences are taken into
account, while for the Alumni indicator the Literature and Peace prizes are counted
as well.
2. The year of the award is what counts for the Award indicator, while the year of
graduation is what counts for the Alumni indicator. In case of multiple graduation
from an institution (e.g. BS and PhD from the same university), it is the latest
graduation time that counts.
3. In the computation of the Award indicator all the Fields Medalists bring three points
to their institution, while Nobel Laureates bring three points only when they do not
share the prize. When a Nobel Prize is shared, the three points are distributed according to the partition of the prize. Points are shared in case of multiple affiliations.
In the computation of the Alumni indicator, however, all the Nobel Laureates and
Fields Medalists bring one point to each of the institutions from which they graduated.
Harvard University achieves the maximum score on the two indicators in the 2011
edition of ARWU, with 37.88 points in the Award indicator and 28.90 points in the
indicator Alumni. Let H be the number of points of Harvard, and X the number of
points of any other institution. In both cases, Alumni and Award, estimated scores are
computed through the same formula:

ESTSCORE = 100 \sqrt{X/H}    (1)
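As a minimal sketch of formula (1) in code (the function name is mine; raw points are assumed to be already computed):

```python
import math

def relative_score(points: float, top_points: float) -> float:
    """Formula (1): score relative to the best performer via the square-root law."""
    return 100.0 * math.sqrt(points / top_points)

# Harvard's 37.93 raw Award points set the yardstick; Cambridge's 35.45
# points then map to 100*sqrt(35.45/37.93) = 96.7, matching Table 1.
print(round(relative_score(35.45, 37.93), 1))  # 96.7
```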
In the case of the Award indicator I have been able to match the results of the
Shanghai ranking, as Table 1 shows. The column “pts” reflects the raw data after the
computation of the total number of points from each institution is completed. The
“EST” column reflects the values of formula 1, while the “Award” column shows the
actual score on the indicator Award from the ARWU web page.
In Table 1 I have included universities with different scores; in the case of universities with the same score, only one of the institutions is shown. Besides, since I am
not exactly sure about the way the legacy of Paris-La Sorbonne should be distributed
among its heirs, I have decided not to include Universities Paris 5, 6, 7, 9 and 11 in the
table; inasmuch as inaccuracies in those cases would be due to the difficulty in properly
identifying the institutions, their inclusion would not be useful to test the estimation
procedure.
In the case of the Alumni indicator I have been able to match almost all the
results of the Shanghai ranking. Table 2 shows the results of the 25 highest scoring
institutions. The column headings 1st, 2nd, and 3rd in Table 2 stand for the cases in which
the Fields Medalist or Nobel Laureate was awarded his or her first, second or third
degree, respectively. The column “pts” reflects the addition of the previous three columns
to compute the raw data. Again, the “EST” column reflects my estimation once the
square root law is applied while the column “Alumni” shows the actual score on the
indicator Alumni from the ARWU web page.
Table 2 shows only a few cases of clear inaccuracies, which undoubtedly point to
my difficulties in correctly identifying some graduates from those institutions rather
than to a shortcoming of the estimation procedure. Excel files containing the results of all the
universities included in the Shanghai ranking in both the Alumni and Award indicators
are available upon request.
Institution pts EST Award Institution pts EST Award
Harvard 37.93 100.0 100.0 Irvine 3.25 29.3 29.3
Cambridge 35.45 96.7 96.7 Jerusalem 3.00 28.1 28.1
Princeton 28.80 87.1 87.1 Karolinska 2.80 27.2 27.2
Chicago 26.70 83.9 83.9 Heidelberg 2.75 26.9 27.0
MIT 25.45 81.9 81.9 Wash. St. Louis 2.55 25.9 25.9
Berkeley 23.85 79.3 79.3 Buenos Aires 2.40 25.2 25.2
Stanford 23.33 78.4 78.4 ENS Paris 2.25 24.4 24.4
Caltech 17.98 68.8 68.8 Copenhagen 2.20 24.1 24.1
Columbia 17.21 67.4 67.4 Pisa 2.10 23.5 23.5
Rockefeller 12.93 58.4 58.4 Purdue 2.05 23.2 23.2
Oxford 12.60 57.6 57.6 Technion 2.00 23.0 23.0
Cornell 9.90 51.1 51.1 Munich 1.98 22.8 22.8
Yale 7.65 44.9 44.9 Sussex 1.95 22.7 22.7
Los Angeles 6.88 42.6 42.6 Strasbourg 1.90 22.4 22.4
San Francisco 6.10 40.1 40.1 Aarhus 1.85 22.1 22.1
Imperial College 5.25 37.2 37.2 Rice 1.80 21.8 21.8
Illinois 5.05 36.5 36.5 Freiburg 1.65 20.9 20.9
SFIT Zurich 4.95 36.1 36.1 Tsukuba 1.50 19.9 19.9
San Diego 4.85 35.8 35.8 Toronto 1.40 19.2 19.2
Wisconsin 4.75 35.4 35.4 Free Brussels 1.35 18.9 18.9
Santa Barbara 4.68 35.1 35.1 Helsinki 1.20 17.8 17.8
Kyoto 4.58 34.7 34.7 Tufts 1.05 16.6 16.6
Pennsylvania 4.45 34.3 34.3 Hokkaido 1.00 16.2 16.2
Moscow State 4.40 34.1 34.1 Roma 0.90 15.4 15.4
Manchester 4.35 33.9 33.9 Melbourne 0.75 14.1 14.1
Oslo 4.20 33.3 33.3 Louvain 0.70 13.6 13.6
TSMC at Dallas 4.15 33.1 33.1 Tech. Denmark 0.60 12.6 12.6
Carnegie Mellon 4.05 32.7 32.7 Utah 0.50 11.5 11.5
Uppsala 3.90 32.1 32.1 Innsbruck 0.45 10.9 10.9
Washington 3.80 31.7 31.7 Lisbon 0.30 8.9 8.9
Colorado 3.58 30.7 30.7 Mainz 0.25 8.1 8.1
Stockholm 3.30 29.5 29.5 Toulouse 0.15 6.3 6.3
Table 1 Scores on the Award Indicator
4 Computing the HiCi indicator
The major difficulties in computing the scores on this indicator arise from the inaccuracies of the official information provided by Thomson Reuters on the affiliations of
the highly cited authors. When searching for the information about the highly cited
authors from an institution we have to deal with the following problems:
Outdated information. Researchers move and the new affiliation is not always registered in the official web page.
Some of the authors have unfortunately passed away in the past few years, but their
names are still on the list.
Mistaken identities: a few of the authors have been mistakenly assigned to a different
institution due to the difficulties in recognizing researchers with the same last name
and initials.
Institution 1st 2nd 3rd pts EST Alumni
Harvard 10.10 16.20 2.60 28.9 100.0 100.0
Cambridge 12.70 6.90 1.20 20.8 84.8 87.1
MIT 5.00 9.70 0.60 15.3 72.8 72.8
Columbia 6.10 7.00 0.90 14.0 69.6 69.6
Berkeley 4.70 7.50 1.30 13.5 68.3 68.3
Chicago 5.60 5.50 1.10 12.2 65.0 65.0
Princeton 2.30 7.00 0.00 9.3 56.7 56.7
Oxford 5.20 2.90 0.80 8.9 55.5 55.5
Caltech 2.60 5.10 0.30 8.0 52.6 52.6
Yale 3.80 2.80 0.50 7.1 49.6 49.6
Moscow State 6.40 0.00 0.00 6.4 47.1 47.4
Johns Hopkins 0.40 3.20 1.40 5.0 41.6 43.2
Cornell 1.90 3.20 0.00 5.1 42.0 42.0
Stanford 1.30 3.10 0.50 4.9 41.2 41.2
Tech Univ Munich 3.40 1.20 0.00 4.6 39.9 39.9
Jerusalem 3.90 0.00 0.00 3.9 36.7 36.7
Michigan 2.60 0.50 0.80 3.9 36.7 36.7
Pennsylvania 1.70 2.10 0.00 3.8 36.3 36.3
Carnegie Mellon 1.30 1.30 1.20 3.8 36.3 36.3
Wisconsin 1.20 2.40 0.00 3.6 35.3 35.3
Tokyo 3.30 0.30 0.00 3.6 35.3 35.3
Case Western Reserve 1.20 2.30 0.00 3.5 34.8 34.8
Frankfurt 2.50 0.90 0.00 3.4 34.3 34.8
Illinois 1.40 2.00 0.00 3.4 34.3 34.3
ETH Zurich 2.50 0.80 0.00 3.3 33.8 33.8
Table 2 Scores on the Alumni Indicator
The information about the institution is missing in a great many cases. We find the
name of the research unit, hospital or institute, but not the institution to which
it is affiliated.
When a reference to a hospital is made, it is not clear whether the author belongs to
an academic institution as well: sometimes they do, sometimes they do not.
The trickiest problem is how to deal with fractional appointments, adjunct professorships, double affiliation for consulting purposes (particularly with new universities
in the Middle East with a generous budget to purchase in the new market of the
highly cited authors), and positions in external units associated with academic
institutions.
The best showcase for all these difficulties is Harvard University. It is of paramount
importance to come up with the correct figure for the raw score of Harvard University,
since the scores of all the other institutions will be related to it. A quick look at the
Thomson Reuters site results in a list of 225 possible highly cited authors related
to Harvard University or to its associated teaching or research units. After a careful
evaluation of all the cases I have arrived at the final figure of 192 for the number of
highly cited researchers currently affiliated with Harvard University in the year 2011.
Therefore, to proceed further, the value of 100 allocated by ARWU to Harvard is placed
in correspondence with the square root of 192, which serves as the yardstick against
which the scores of the other institutions are measured.
Computing the indicator for all the institutions would be a very tedious exercise,
since we would have to prune the results from the database of highly cited authors,
which contains in excess of 7,000 researchers. Hence, I have selected
a sample of institutions, showing different numbers of highly cited authors, and have
checked the accuracy of the information provided by the official web page of highly
cited authors before computing the estimated scores on the HiCi indicator.
The results are shown in Table 3. The acronym NHiCi stands for the number
of highly cited researchers from an institution (accounting for shared affiliations in some
cases); “HiCi” stands for the actual score on the indicator HiCi in ARWU 2011 of the
institutions selected for the table; the column “EST” shows the estimated HiCi score
using the procedure explained in this section, with H = 192:

ESTHiCi = 100 \sqrt{NHiCi/H}    (2)
The table confirms that this indicator can be computed with absolute accuracy once
the highly cited authors data are appropriately checked and corrected when needed.
Institution NHiCi EST HiCi
Harvard University 192 100 100
Princeton University 74 62.1 62.1
Yale University 68 59.5 59.5
University of Washington 58 55.0 55.0
University of Texas at Austin 40 45.6 45.6
University of Tokyo 34 42.1 42.1
University College London 30 39.5 39.5
University of Toronto 29 38.9 38.9
Karolinska Institute 20 32.3 32.3
Utrecht University 17 29.8 29.8
Leiden University 15 28.0 28.0
Osaka University 14 27.0 27.0
University of Zurich 12.5 25.5 25.5
Lund University 12 25.0 25.0
Tel Aviv University 11 23.9 23.9
University of Basel 10.5 23.4 23.4
University of Delaware 10 22.8 22.8
University of Wuerzburg 9.5 22.2 22.2
Tohoku University 9 21.7 21.7
University of Sheffield 8.5 21.0 21.0
University of Queensland 8 20.4 20.4
University of Frankfurt 7 19.1 19.1
University of Milan 6.5 18.4 18.4
Ghent University 6 17.7 17.7
Paris Sud University 5 16.1 16.1
University of Freiburg 4.5 15.3 15.3
Indian Institute of Science 4 14.4 14.4
City University of Hong Kong 3.5 13.5 13.5
Autonomous University of Madrid 3 12.5 12.5
University of Lausanne 2.5 11.4 11.4
University of Cape Town 2 10.2 10.2
University of Western Ontario 1.5 8.8 8.8
Seoul National University 1 7.2 7.2
Table 3 Scores on the HiCi Indicator
5 Computing the N&S indicator
In the case of the N&S indicator the methodology appears to be precise and clear, we
can proceed by giving one point to the institution of the corresponding author, 0.5
points to the institution of the first author, 0.25 points to the institution of the next
author, and finally 0.1 points to the remaining institutions.
In order to test the square root hypothesis the first task is to evaluate the score of
Harvard University. However, the number of papers coauthored by authors from that
institution during the period 2006-2010 is in excess of 700; hence, an attempt to
compute the points awarded to Harvard by looking up all those papers is a doomed
endeavor. Assuming the square root hypothesis holds, there is another way to arrive
at the needed result: triangulating through the scores of universities with just a few
papers in the period 2006-2010. To do so, I will first compute the number of points for
universities with a score below 7.0 in ARWU 2011. Since the scores are rounded up or
down to the first decimal digit, we can estimate the bounds for the number of points
of Harvard University in the following way.
Suppose we get x points for an institution with a score y in ARWU. Let H be the
number of points of Harvard University. If the square root rule applies we would then
have y = 100 \sqrt{x/H}, so we could estimate H by reversing the formula, H = x (100/y)^2.
Now, since we know that y has been rounded to the nearest first decimal digit,
what we actually have is two bounds for H, namely:

H_l = x \left( \frac{100}{y + 0.05} \right)^2, \qquad H_u = x \left( \frac{100}{y - 0.05} \right)^2
By computing the bounds of all the universities with a score below 7.0 in ARWU 2011
we arrive at an interval of values in which H must lie. The lower endpoint of the interval
is the largest of all the H_l values, L, and the upper endpoint is the smallest of all the
H_u values, U. Provided that L < U, we can then use the center of the interval (L, U) as an
accurate estimate of the value of H. We can then test the hypothesis by computing
the number of points of a number of universities in ARWU and evaluating the scores
according to the square root formula. Depending on the results, we will know whether
the rule holds (a sketch of the bounding step follows).
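The bounding step translates directly from the two inequalities above; a minimal sketch (the function name is mine), fed with (points, rounded score) pairs such as the rows of Table 4 below:

```python
def harvard_bounds(observations: list[tuple[float, float]]) -> tuple[float, float]:
    """Tightest (L, U) interval for Harvard's N&S points H, given pairs
    (x, y) of raw points and ARWU scores rounded to one decimal digit."""
    lows = [x * (100.0 / (y + 0.05)) ** 2 for x, y in observations]
    ups = [x * (100.0 / (y - 0.05)) ** 2 for x, y in observations]
    return max(lows), min(ups)

# Two rows of Table 4 (Univ Greifswald and Univ Parma) already pin H down.
L, U = harvard_bounds([(0.55, 3.5), (1.25, 5.3)])
print(round(L, 1), round(U, 1))  # 436.7 453.5, cf. columns LOWB and UPRB
```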
Table 4 shows the number of points of institutions covering all the scores from 1.5
to 6.9 in ARWU. The first column, “Points”, shows the results of the computation once all
the papers from those institutions have been looked up. The second column, “N&S”, shows
the true scores in the N&S indicator in ARWU 2011. Columns LOWB and UPRB show
the bounds for the estimation of H, the number of points of Harvard University, as
discussed above. The final row contains the tightest bounds for H, L and U, the maximum
value of column LOWB and the minimum value of column UPRB, respectively.
Since L < U we can proceed further and check the validity of the square root
hypothesis. Let H first be the value of the center of the interval (L, U), H = 436.9.
We shall use that value to obtain the scores of a number of institutions and check
whether the estimates of the scores match the true values in ARWU 2011.
The validity of the square root rule will be checked for institutions scoring between
7.0 and 9.5 in ARWU 2011, along with universities from Spain.
Institution Points N&S LOWB UPRB
Univ Zaragoza 0.10 1.5 416.2 475.6
King Saud Univ 0.20 2.1 432.7 475.9
Cairo Univ 0.25 2.4 416.5 452.7
Sichuan Univ 0.30 2.6 427.2 461.4
Yamaguchi Univ 0.35 2.8 430.9 462.8
Univ Federal Sao Paulo 0.45 3.2 426.0 453.5
Univ Malaya 0.50 3.4 420.1 445.5
Univ Greifswald 0.55 3.5 436.4 462.1
Massey Univ 0.70 4.0 426.8 448.6
Kyungpook Natl Univ 0.80 4.3 422.8 442.9
Shandong Univ 0.90 4.5 434.7 454.5
Texas Tech Univ 0.95 4.7 421.1 439.4
Huazhong Univ Sci and Technol 1.00 4.8 425.1 443.2
Xian Jiao Tong Univ 1.10 5.0 431.3 448.9
Univ Surrey 1.20 5.2 435.4 452.4
Univ Parma 1.25 5.3 436.7 453.5
Jilin Univ 1.30 5.5 422.0 437.7
Medical Univ S Carolina 1.35 5.6 422.9 438.3
Univ Canterbury 1.40 5.7 423.4 438.6
Univ Lisbon 1.45 5.8 423.7 438.6
Univ Vigo 1.50 5.9 423.7 438.3
Erasmus University 1.55 6.0 423.5 437.8
Georgetown Univ 1.60 6.1 423.0 437.1
Indian Inst Sci 1.70 6.2 435.2 449.5
Univ Ljubljana 1.80 6.4 432.7 446.4
Lanzhou Univ 1.85 6.5 431.2 444.7
Griffith Univ 1.90 6.6 429.6 442.9
Hanyang Univ 1.95 6.7 430.4 443.5
Szeged Univ 2.00 6.8 430.5 443.4
Univ Twente 2.10 6.9 430.7 443.3
L U
Final Bounds for Harvard University 436.7 437.1
Table 4 Bounds for the number of points of Harvard University
Let “pts” be the number of points obtained by an institution. To compute scores
on the N&S indicator, the formula will again be

ESTN&S = 100 \sqrt{pts/H}    (3)
A total of 23 institutions have been analyzed to test the estimation methodology. All
the articles in Science or Nature with at least one author from one of those institutions
have been identified through the Web of Knowledge. Table 5 shows the number of
papers of the universities under analysis, and the position of the institution affiliation
within the authors list.
The acronyms of the columns of Table 5 stand for:
NP: Number of articles in Science or Nature between 2006 and 2010.
CA: Number of articles as corresponding author.
FA: Number of articles as first author.
NA: Number of articles as next author.
OA: Number of remaining articles.
pts: Total number of points of each institution on the indicator N&S.
EST: Estimated score on N&S of each institution.
N&S: True value of indicator N&S of each institution in ARWU 2011.
Institution NP CA FA NA OA pts EST N&S
Korea Univ 4 2 0 0 2 2.20 7.1 7.1
Univ Jyvaskyla 7 0 4 0 3 2.30 7.3 7.3
Univ Turin 7 1 2 1 3 2.55 7.6 7.6
Univ Buenos Aires 4 2 1 0 1 2.60 7.7 7.7
Univ Bari 16 1 0 1 14 2.65 7.8 7.8
Univ KwaZulu-Natal 7 1 2 2 2 2.70 7.9 7.9
Univ Vermont 6 2 1 0 3 2.80 8.0 8.0
Lehigh Univ 8 1 2 3 2 2.95 8.2 8.2
Univ Chile 11 1 1 4 5 3.00 8.3 8.3
Univ Ferrara 9 2 1 0 6 3.10 8.4 8.4
Univ Granada 9 2 1 1 5 3.25 8.6 8.6
Ben-Gurion Univ 5 3 0 1 1 3.35 8.8 8.8
Univ Tokushima 8 2 1 3 2 3.45 8.9 8.9
Complutense Madrid 14 1 2 3 8 3.55 9.0 9.0
Rensselaer Polytech Inst 8 3 0 1 4 3.65 9.1 9.1
Univ Manitoba 12 1 3 3 5 3.75 9.3 9.3
Yonsei Univ 10 2 2 2 4 3.90 9.4 9.4
Univ Valencia 14 2 4 0 8 4.80 10.5 10.5
Autonomous Barcelona 21 2 2 5 12 5.45 11.2 11.2
Univ Politecn Valencia 11 6 1 1 3 7.05 12.7 12.7
Univ Pompeu Fabra 24 5 1 6 12 8.20 13.4 13.4
Univ Barcelona 30 4 3 4 19 8.40 13.8 13.8
Autonomous Madrid 24 4 6 4 10 9.00 14.4 14.4
Table 5 Estimation of N&S scores of selected institutions
As Table 5 shows, the estimated score coincides with the one assigned by ARWU in
all the cases. We can safely state that the N&S indicator is indeed reproducible using
the procedure described in this section.
6 Computing the PUB indicator
It is obvious, by just taking a quick look at the number of articles published every year
by each institution, that the method used for assigning scores to the PUB indicator
follows the square root rule as well. Using a rough approach we could then get a
first estimation of the points awarded by ARWU. There are, however, other problems
associated with the computation of the PUB indicator. The problems mainly arise
when searching the Web of Knowledge, since a great many universities appear under a variety
of affiliation names, and I am quite aware of the demanding task of taking them all
into account.
Besides the difficulties that arise when trying to properly identify affiliations in
the Web of Knowledge, which constitute a disturbing source of noise in the process of
assigning papers to institutions, it is not easy to interpret the meaning of the “special”
weight of two introduced by the authors of the ranking for papers indexed in Social
Science Citation Index (SSCI), since we encounter papers indexed in the SSCI that
are indexed in the Science Citation Index Expanded (SCIE) as well, and papers listed
only in the SSCI.
To overcome the first hurdle I have selected a large sample of easily identifiable
institutions. To come up with an answer for the meaning of the “special” weight I will
use a strategy based upon a very useful technique: linear regression analysis.
Let’s first split the papers from an institution (articles and proceedings papers only)
into three different sets, namely:
oc: Papers that are listed only in the SCIE.
cs: Papers that are listed in both the SCIE and the SSCI.
os: Papers that are listed only in the SSCI.
There are two extreme approaches to assign a special weight to the papers listed
in the Social Science Citation Index: either a weight of 2 for just the os papers, or a
weight of 2 for all the SSCI-listed papers (os plus cs). There is a third possibility, one
that results in a different weight for the papers included in os and the papers included
in cs. To explore the likelihood of the different hypotheses, a regression analysis will
be carried out in the case of the two extreme approaches mentioned before.
According to Tabachnick and Fidell (2007, page 123), the minimum sample size
requirement for a linear regression analysis depends on the number of independent
variables. Those authors recommend a sample size in excess of 50 + 8N (where N
stands for the number of independent variables). Since we have three variables (oc,
cs, os), at least 74 institutions will have to be included in the analysis. All those
institutions are listed below.
1: Aarhus; 2: Beijing Normal; 3: Brandeis; 4: Caltech; 5: Leuven; 6: Columbia; 7: Dalian
Tech; 8: Dartmouth; 9: Duke; 10: Erasmus; 11: Jilin; 12: Kanazawa; 13: King Fahd;
14: Kobe; 15: Kyushu; 16: Med Vienna; 17: Michigan State; 18: MIT; 19: Nankai; 20:
Nanyang Tech; 21: Natl Cheng Kung; 22: Natl Tsing Hua; 23: Natl Singapore ; 24: New
York; 25: Oxford; 26: Peking; 27: Princeton; 28: Shandong ; 29: Sichuan; 30: Stanford;
31: Aberdeen; 32: Antwerp ; 33: Birmingham; 34: Bologna; 35: British Columbia; 36:
Buenos Aires; 37: Calgary; 38: Calif Davis; 39: Calif Irvine; 40: Calif Los Angeles; 41:
Calif Santa Barbara; 42: Cambridge; 43: Geneva; 44: Helsinki; 45: Kiel; 46: Koln; 47:
Leeds; 48: Liverpool; 49: Manchester; 50: Melbourne; 51: Michigan; 52: Milan; 53: Munster; 54: New Mexico; 55: Nottingham; 56: Oslo; 57: Padua; 58: Pisa; 59: Queensland;
60: Rochester; 61: Sao Paulo; 62: Siena; 63: Southern California; 64: Stockholm; 65:
Texas Austin; 66: Tubingen; 67: Warwick; 68: Washington; 69: Wuerzburg; 70: Zurich;
71: Uppsala; 72: Vanderbilt; 73: Washington St Louis; 74: Xiamen.
To test the two extreme cases I will proceed as follows:
1. Compute the points awarded to Harvard University according to the selected procedure; let H again be the result of that operation.
2. Evaluate, according to their ARWU score, the points that should accrue to all the
universities in the sample were the selected procedure the true one. Hence, and
assuming the square root rule is in place,

PUB = 100 \sqrt{pts/H} \quad\Longrightarrow\quad pts = H \left( \frac{PUB}{100} \right)^2
3. Perform a linear regression analysis using the three variables (oc,cs,os) to predict
the points computed in step 2.
4. Test the validity of the procedure by looking up the confidence intervals for the
regression coefficients (a sketch of this workflow follows the list).
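The workflow can be sketched in a few lines of numpy; this is an illustration under stated assumptions, not the paper's SPSS analysis. The function name, the no-intercept least-squares fit (which omits the confidence intervals), and the harvard_counts argument (Harvard's own oc, cs, os counts) are mine:

```python
import numpy as np

def test_weighting(oc, cs, os_, pub_scores, harvard_counts, weights):
    """Steps 1-3: compute Harvard's points H under a candidate
    (w_oc, w_cs, w_os) scheme, back out raw points from the published
    PUB scores via the square root rule, and regress them on (oc, cs, os)."""
    H = float(np.dot(weights, harvard_counts))        # step 1
    pts = H * (np.asarray(pub_scores) / 100.0) ** 2   # step 2: invert sqrt rule
    X = np.column_stack([oc, cs, os_])                # 74 rows, 3 predictors
    coef, *_ = np.linalg.lstsq(X, pts, rcond=None)    # step 3: fitted coefficients
    return coef

# Step 4: under the true scheme the fitted coefficients should reproduce the
# candidate weights, e.g. weights=(1, 1.5, 2) should return roughly (1, 1.5, 2).
```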
Table 6 shows the data gathered from the Web of Knowledge on the three variables (oc, cs, os) corresponding to the institutions included in the regression analysis.
Column I gives the number assigned to the university in the list of institutions
provided before.
I oc cs os I oc cs os I oc cs os
1 : 2481 205 220 26 : 4018 153 137 51 : 5323 776 762
2 : 1118 72 44 27 : 2390 103 271 52 : 2692 85 48
3 : 259 39 74 28 : 2694 39 8 53 : 1411 60 87
4 : 2760 40 43 29 : 1187 48 7 54 : 1145 107 134
5 : 2669 255 312 30 : 1558 441 568 55 : 1968 241 377
6 : 3784 665 658 31 : 2535 147 152 56 : 2047 311 257
7 : 1862 20 6 32 : 1118 88 114 57 : 2731 129 137
8 : 751 112 142 33 : 1103 211 290 58 : 1669 54 37
9 : 3658 513 403 34 : 1669 144 142 59 : 2859 423 497
10 : 499 208 389 35 : 4918 498 548 60 : 1664 233 161
11 : 2239 9 5 36 : 2731 82 62 61 : 6166 445 194
12 : 872 22 8 37 : 2512 237 189 62 : 866 31 53
13 : 454 3 10 38 : 872 278 310 63 : 2476 320 399
14 : 1134 20 44 39 : 2335 200 255 64 : 1326 122 216
15 : 2769 49 9 40 : 1862 665 690 65 : 2468 292 568
16 : 1239 48 15 41 : 2476 81 210 66 : 1933 87 91
17 : 2227 242 486 42 : 2468 372 413 67 : 1103 109 244
18 : 4110 187 254 43 : 2692 125 109 68 : 4918 684 528
19 : 1597 27 13 44 : 1968 254 201 69 : 1404 60 58
20 : 2535 62 207 45 : 4240 45 67 70 : 2547 230 275
21 : 2298 171 94 46 : 1239 98 135 71 : 2512 202 172
22 : 1409 35 37 47 : 1404 184 270 72 : 2337 232 361
23 : 3651 207 273 48 : 2547 132 136 73 : 2840 292 319
24 : 2326 305 642 49 : 4018 323 459 74 : 1352 18 28
25 : 4621 364 637 50 : 1411 548 476
Table 6 Scientific Production in 2010 of the institutions included in the regression analysis
Preliminary analyses were conducted, before the multiple regression analysis, to
ensure no violation of the assumptions of normality, linearity and homoscedasticity.
In the two extreme approaches already mentioned, the Normal P-P plots show points
closely aligned with a straight diagonal line, suggesting no major violations of normality. Besides, no outliers were found, and the residuals show no systematic pattern
and conform to a rectangular distribution. We are now in a position to explore the
results of the multiple regression analysis in both cases.
Let’s begin by assigning the weight of 2 to all the papers included in the SSCI
index. In that case, the predictor coefficients for the three variables (oc, cs, os) should
be (1, 2, 2).
The three variables explain 100% of the variance of the sample, F(3,70) =
300,563, p < 0.001. The values of the predictor coefficients were (1.050, 1.556, 2.106),
and the confidence intervals were:
oc: (1.045, 1.055); cs: (1.500, 1.612); os: (2.064, 2.148).
These results are not in line with the assumptions of the procedure, which are based on
a (1, 2, 2) weighting scheme. In fact, they are very close to a (1, 1.5, 2) weighting
scheme.
Let’s now assign the special weight of 2 just to the papers listed only in the SSCI
index. In that case, the predictor coefficients for the three variables (oc, cs, os) should
be (1, 1, 2).
The three variables again explain 100% of the variance of the sample, F(3,70) =
300,562, p < 0.001. The values of the predictor coefficients were (0.956, 1.417, 1.917),
and the confidence intervals were:
oc: (0.951, 0.960); cs: (1.365, 1.468); os: (1.879, 1.955).
Again, these results are not in line with the assumptions of the procedure, which are
based on a (1, 1, 2) weighting scheme. The predictors remain very close to a (1, 1.5, 2)
weighting scheme.
We are now in a position to check the possibility of a very special weighting scheme, in
which the papers listed only in the SSCI receive a weight of 2, while the papers
listed in both the SSCI and the SCIE get a weight of just 1.5.
Using those weights to recalculate the value of H, the three variables again explain
100% of the variance of the sample, F(3,70) = 300,564, p < 0.001. The values of the
predictor coefficients were (1.003, 1.486, 2.012), and the 95.0% confidence intervals
were:
oc: (0.998, 1.007); cs: (1.433, 1.540); os: (1.972, 2.052).
This time, the results are in line with the assumptions of the procedure, which are based
on a (1, 1.5, 2) weighting scheme.
The solution to our problem is then to assign a weight of 2 to papers listed only
in the SSCI and a weight of 1.5 to papers listed in both the SCIE and the SSCI. Let
now H be the number of points of Harvard University using those weights, and pts the
number of points accrued to any other institution. Given the values of the variables
(oc, cs, os) for an institution, to compute scores, follow the already well known path:

pts = oc + 1.5\,cs + 2\,os; \qquad ESTPUB = 100 \sqrt{pts/H}    (4)
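A minimal sketch of formula (4) (the function name is mine; Harvard's point total H must be supplied):

```python
import math

def est_pub(oc: int, cs: int, os_: int, harvard_pts: float) -> float:
    """Formula (4): weighted paper count, then square-root scaled score."""
    pts = oc + 1.5 * cs + 2.0 * os_
    return 100.0 * math.sqrt(pts / harvard_pts)

# Aarhus in Tables 6 and 7: oc=2481, cs=205, os=220 gives pts=3228.5. Harvard's
# H is not listed in the paper, but it can be backed out from any row via
# H = pts*(100/EST)^2, e.g. 3228.5*(100/49.23)^2, roughly 13321.
```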
Table 7 shows the results of the estimation procedure for the sample of universities
under analysis. The results of the columns “pts” and “EST” have been computed by
means of equation 4; the column “PUB” shows the values of the indicator taken from
the ARWU web site. The results obtained are very accurate, with a mean square error
of just 0.014, although small and non-systematic errors in the computation may arise,
as Table 7 shows. Those errors can be caused by a number of reasons, ranging from
the already mentioned difficulty in checking all the possible affiliations linked to an
institution to the fact that results in the WOK do not remain constant over time but
depend on the day the search takes place.
7 Computing the PCP indicator
There are a number of issues here that make the computation very difficult. First
of all, there are two ways of computing the indicator, depending on the country:
“the weighted scores of the above five indicators divided by the number of full-time
equivalent academic staff. If the number of academic staff for institutions of a country
cannot be obtained, the weighted scores of the above five indicators is used”. A number
Institution pts EST PUB Institution pts EST PUB
Aarhus 3228.5 49.23 49.3 Davis 4834.0 60.24 60.2
Beijing N 1314.0 31.41 31.5 Irvine 3145.0 48.59 48.7
Brandeis 465.5 18.69 18.5 Los Angeles 7198.5 73.52 73.6
Caltech 2906.0 46.71 46.5 Santa Barbara 2175.5 40.41 40.6
Leuven 3675.5 52.53 52.5 Cambridge 5624.0 64.98 65.1
Columbia 6097.5 67.66 67.5 Geneva 1777.5 36.53 36.6
Dalian T 1904.0 37.81 37.7 Helsinki 3467.0 51.02 51.2
Dartmouth 1203.0 30.05 29.9 Kiel 1388.5 32.29 32.2
Duke 5233.5 62.68 62.5 Koln 1945.0 38.21 38.4
Erasmus 1589.0 34.54 34.6 Leeds 2592.0 44.11 44.3
Jilin 2262.5 41.21 41.0 Liverpool 2146.0 40.14 40.1
Kanazawa 921.0 26.30 26.4 Manchester 4258.5 56.54 56.6
King Fahd 478.5 18.95 18.9 Melbourne 5127.0 62.04 62.1
Kobe 1252.0 30.66 30.7 Michigan 8011.0 77.55 77.7
Kyushu 2860.5 46.34 46.5 Milan 2915.5 46.79 46.9
Med Vienna 1341.0 31.73 31.6 Munster 1675.0 35.46 35.6
Michigan S 3562.0 51.71 51.5 New Mexico 1573.5 34.37 34.3
MIT 4898.5 60.64 60.6 Nottingham 3083.5 48.11 48.0
Nankai 1663.5 35.34 35.3 Oslo 3027.5 47.68 47.5
Nanyang T 3042.0 47.79 47.7 Padua 3198.5 49.00 49.1
N Cheng Kung 2742.5 45.38 45.2 Pisa 1824.0 37.01 37.1
N Tsing Hua 1535.5 33.95 33.8 Queensland 4487.5 58.04 58.1
N Singapore 4507.5 58.17 58.1 Rochester 2335.5 41.87 41.7
New York 4067.5 55.26 55.4 Sao Paulo 7221.5 73.63 73.7
Oxford 6441.0 69.54 69.5 Siena 1018.5 27.65 27.6
Peking 4521.5 58.26 58.4 South Calif 3754.0 53.09 53.2
Princeton 2528.5 43.57 43.4 Stockholm 1941.0 38.17 38.1
Shandong 2464.5 43.02 43.1 Texas Austin 4042.0 55.09 55.2
Sichuan 2780.0 45.69 45.6 Tubingen 2245.5 41.06 41.0
Stanford 6567.5 70.22 70.3 Warwick 1754.5 36.29 36.2
Aberdeen 1554.5 34.16 34.0 Washington 7000.0 72.49 72.4
Antwerp 1512.0 33.69 33.5 Wuerzburg 1610.0 34.77 34.9
Birmingham 2801.5 45.86 45.8 Zurich 3442.0 50.83 50.7
Bologna 3039.0 47.77 47.7 Uppsala 3159.0 48.70 48.8
Br Columbia 5758.0 65.75 65.9 Vanderbilt 3407.0 50.58 50.5
Buenos Aires 1805.0 36.81 36.9 Washington 3916.0 54.22 54.4
Calgary 2823.5 46.04 46.2 Xiamen 1435.0 32.82 32.9
Table 7 Estimated and true PUB scores in ARWU 2011
of caveats are in order. First, the values chosen for the “weighted scores”. Second, the
number of full-time academic staff of an institution. Third, whether the square root
rule applies or not in this case.
As for the weights used to compute the scores, it is not difficult to produce them in
the case of the countries for which the authors do not make use of the full-time equivalent
academic staff data. Multiple regression analysis was carried out to assess the weights
applied to the first five indicators to produce the scores on the PCP indicator. The universities available for this regression analysis were the ones from Argentina, Brazil (all
universities but Universidade de Sao Paulo), Chile, Croatia, Germany, Egypt, Finland,
Greece, Hungary, India, Ireland, Israel, Mexico, Malaysia, Russia, Poland, Portugal,
Singapore, and Turkey. They form a sample of 90 universities. Preliminary analyses were conducted to ensure no violation of the assumptions of normality, linearity
and homoscedasticity. The Normal P-P plots show points closely aligned with a straight
diagonal line, suggesting no major violations of normality. Besides, no outliers were
found, and the residuals show no systematic pattern and conform to a rectangular
distribution.
The minimum sample size requirement for a linear regression analysis in this case is
50 + 8 × 5 = 90, since we have five predictors, so we have the appropriate sample size
to carry out the regression analysis. My hypothesis is that the square root rule does
apply, but in a special way: the weights are applied to the squares of the scores, not to
the scores themselves, and the square root of the result is then taken. Using the 90
universities in the sample, the multiple regression analysis shows that the five variables
explain 100% of the variance in the PCP indicator, F(5,84) = 535,840, p < 0.001.
The values of the predictor coefficients were (0.105, 0.213, 0.213, 0.213, 0.213), and
the 95.0% confidence intervals were:
Alumni: (0.104, 0.106); Award: (0.211, 0.215); HiCi: (0.211, 0.216);
N&S: (0.211, 0.216); PUB: (0.212, 0.214)
The results are in line with the assumptions of the procedure, although an extra
factor must be introduced to correct for the slight deviation from the nominal 0.1 and
0.2 weights. That enables us to forecast the results in the PCP indicator with complete
accuracy, in the case of universities from countries for which the authors do not make use
of the full-time equivalent academic staff data, using the following formula:

ESTPCP^2 = \frac{1}{0.94} \left[ 0.1\,Alumni^2 + 0.2 \left( Award^2 + HiCi^2 + N\&S^2 + PUB^2 \right) \right]    (5)
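Formula (5) translates directly into code; a minimal sketch (the function name is mine), assuming the five published indicator scores are at hand:

```python
import math

def est_pcp_no_staff(alumni: float, award: float, hici: float,
                     ns: float, pub: float) -> float:
    """Formula (5): PCP estimate for institutions in countries without
    FTE academic staff data, with the fitted 1/0.94 correction factor."""
    wss = 0.1 * alumni**2 + 0.2 * (award**2 + hici**2 + ns**2 + pub**2)
    return math.sqrt(wss / 0.94)
```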
In the sample, the PCP indicator was predicted correctly in 80 out of the 90 cases.
The square error in the other 10 cases was less than 0.005, and the average square error
was 0.001. Undoubtedly those errors are caused by the rounding to the first
decimal digit on the ARWU web site: the authors of the ranking use the true values
but only publish the rounded ones.
In general, there is no easy access to the number of full-time equivalent faculty
of all the institutions listed in ARWU. I have chosen Australia as a showcase, since we
have public and reliable data published by the Department of Education, Employment
and Workplace Relations of the Australian Government (2011). I have used the full-time
equivalence data from Selected Higher Education Statistics: Staff 2010.
In the case of universities from Australia, the number of full-time equivalent academic
staff used in ARWU seems to be the number of Full Time Equivalent Faculty at the
levels above Senior Lecturer plus those at Lecturer Level C.
The number of Faculty from Australian universities is shown in Table 8, where
the acronyms of the three columns represent the FTE number of Faculty above Senior
Lecturer (ASL), the FTE number of Faculty at Lecturer Level C (LLC), and the Full
Time Academic Staff for the calculations of the ARWU data (FTE), respectively.
To get the actual PCP values we first compute the weighted sum of the squares
of the first five indicators; let’s call that value WSS:

WSS = 0.1\,Alumni^2 + 0.2 \left( Award^2 + HiCi^2 + N\&S^2 + PUB^2 \right)
Let WSS_CT be the value of WSS for Caltech, the university with the highest score
on the indicator PCP, and let FTE_CT be the Full Time Equivalent Staff of Caltech. To
compute the PCP indicator for university X we then perform the following operation:

PCP = 100 \sqrt{ \frac{WSS_X / FTE_X}{WSS_{CT} / FTE_{CT}} } = 100 \sqrt{ \frac{FTE_{CT}}{WSS_{CT}} } \sqrt{ \frac{WSS_X}{FTE_X} }    (6)
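Before turning to the Australian data in Table 8, here is a sketch of formula (6) in code, using Caltech's constants (FTE_CT = 276 and WSS_CT = 3157.47, both given below); the function name is mine:

```python
import math

FTE_CT, WSS_CT = 276.0, 3157.47  # Caltech values used in Section 7

def est_pcp(alumni, award, hici, ns, pub, fte):
    """Formula (6): PCP score relative to Caltech, the top PCP performer."""
    wss = 0.1 * alumni**2 + 0.2 * (award**2 + hici**2 + ns**2 + pub**2)
    return 100.0 * math.sqrt((wss / fte) / (WSS_CT / FTE_CT))

# Australian National University, Table 9: WSS = 802.7 with FTE = 824
# yields 29.565*sqrt(802.7/824) = 29.2, matching the published PCP.
print(round(est_pcp(15.6, 12.6, 33.9, 26.3, 43.5, 824), 1))  # 29.2
```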
FTE Faculty
Institution ASL LLC FTE
Australian National University 521 303 824
Curtin University of Technology 371 272 643
Flinders University of South Australia 213 190 403
Griffith University 346 332 678
James Cook University 185 169 354
La Trobe University 225 264 489
Macquarie University 288 213 501
Monash University 676 684 1360
Swinburne University of Technology 142 140 282
University of Adelaide 373 279 652
University of Melbourne 799 497 1296
University of New South Wales 712 675 1387
University of Newcastle 221 228 449
University of Queensland 685 481 1166
University of Sydney 792 628 1420
University of Tasmania 192 227 419
University of Technology, Sydney 235 299 534
University of Western Australia 433 312 745
University of Wollongong 265 212 477
Table 8 Full Time Equivalent Staff of Australian universities in ARWU 2011. Source: Aus-
tralian Government, Department of Education, Employment and Workplace Relations.
Institution Alu Awd HiCi N&S PUB FTE WSS PCP EST
Australian 15.6 12.6 33.9 26.3 43.5 824 802.7 29.2 29.2
Curtin Tech 0 0.0 0.0 10.5 29.4 643 194.9 16.3 16.3
Flinders 17.6 0.0 10.2 5.2 25.9 403 191.4 20.4 20.4
Griffith 0 0.0 0.0 6.6 30.3 678 192.3 15.7 15.7
James Cook 0 0.0 10.2 13.4 24.8 354 179.7 21.1 21.0
La Trobe 0 0.0 7.2 4.3 26.5 489 154.5 16.6 16.6
Macquarie 0 0.0 17.7 13.6 28.8 501 265.5 21.5 21.5
Monash 0 0.0 14.4 18.6 51.7 1360 645.2 20.4 20.3
Swinburne Tech 0 0.0 10.2 10.5 18.6 282 112.1 18.7 18.6
Adelaide 16.6 0.0 10.2 12.2 38.3 652 371.5 22.3 22.3
Melbourne 19.5 14.1 25.0 21.1 62.1 1296 1063.1 26.8 26.8
New South Wales 0 0.0 20.4 13.3 53.1 1387 682.5 20.7 20.7
Newcastle 0 0.0 12.5 6.1 29.3 449 210.4 20.3 20.2
Queensland 14.4 0.0 20.4 24.4 58.1 1166 898.2 25.9 25.9
Sydney 16.6 0.0 19.1 19 60.5 1420 904.8 23.6 23.6
Tasmania 0 0.0 7.2 11.8 26.0 419 173.4 19.0 19.0
Sydney Tech 0 0.0 12.5 2.4 22.5 534 133.7 14.8 14.8
Western Australia 15.6 14.1 22.8 13.9 42.2 745 562.9 25.7 25.7
Wollongong 0 0.0 7.2 8 27.5 477 174.4 17.9 17.9
Table 9 Values of the PCP indicator (actual and predicted) of Australian Universities in
ARWU 2011.
Table 9 shows, for all the Australian universities in ARWU 2011, the values on the
five indicators (Alumni, Award, HiCi, N&S, and PUB), the values of FTE Staff, the
values of the WSS sums, the actual values of the PCP indicator taken from the ARWU web
site, and the estimates of the PCP indicator according to formula (6), in which I have
used the following value of the FTE Academic Staff for Caltech: FTE_CT = 276.
Since it follows from the 2011 ranking data that WSS_CT = 3157.47, we have

\sqrt{FTE_{CT}/WSS_{CT}} = 0.29565, \qquad ESTPCP = 29.565 \sqrt{WSS_X / FTE_X}
Results from Table 9 show that the PCP indicator was computed correctly in 17
out of 19 cases. The square error in the other 2 cases was less than 0.005, and the
average square error was 0.001. It is apparent again that those errors are caused by
the rounding to the first decimal digit on the ARWU web site. The accuracy
of the computation has been demonstrated, so by reversing formula (6) one can recover the
value of the full-time academic staff of an institution that was used by the authors of
the ranking (see the sketch below).
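As a minimal sketch of that reversal (the function name is mine, and the constant 29.565 is the one derived above):

```python
def implied_fte(wss: float, pcp: float) -> float:
    """Invert formula (6): the FTE staff figure ARWU must have used,
    given an institution's WSS and its published PCP score."""
    return wss * (29.565 / pcp) ** 2

# Melbourne, Table 9: WSS = 1063.1 and PCP = 26.8 imply about 1294 FTE,
# consistent (up to rounding) with the 1296 in the Australian Government data.
print(round(implied_fte(1063.1, 26.8)))  # 1294
```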
8 Conclusions
In this paper I have presented a complete methodology to compute scores of universities
on all the indicators that compose the ARWU ranking.
The accuracy of the computed scores attests to the reproducibility of the results of
the Shanghai academic ranking of world universities.
I hope that the findings of this paper will help university officials around the world
to monitor their results on the six indicators of the ARWU ranking, regardless of
whether their institutions are listed among the five hundred world universities of the ranking.
Methods Summary
ARWU data on academic institutions were gathered directly from the Shanghai Jiao
Tong University ARWU website, http://www.shanghairanking.com; data on the scien-
tific production of the institutions analyzed in the paper were taken from the Web of
Knowledge in October 2011. Estimates of the scores were computed using different Excel
files containing all the data from the ARWU and WOK websites. The multiple regres-
sion analyses were performed using SPSS Statistics 18.0. All Excel and SPSS files are
available upon request.
References
Australian Government, Department of Education, Employment and Workplace Relations: Staff 2010 Full Time Equivalence. Web address: http://www.deewr.gov.au/HigherEducation/Publications/HEStatistics/Publications/Pages/Staff.aspx
Docampo D (2008) International rankings and quality of the university systems. Revista de Educación, Special Issue, 149–176.
Docampo D (2011a) On using the Shanghai ranking to assess the research performance of university systems. Scientometrics, 86(1), 77–92.
Docampo D (2011b) Adjusted sum of institutional scores as an indicator of the presence of university systems in the ARWU ranking. Scientometrics, in press. DOI: 10.1007/s11192-011-0490-y.
Florian RV (2007) Irreproducibility of the results of the Shanghai academic ranking of world universities. Scientometrics, 72(1), 25–32.
Liu NC, Cheng Y (2005) Academic ranking of world universities: Methodologies and
problems. Higher Education in Europe, 30(2), 127–136.
Tabachnick BG, Fidell LS (2007) Using Multivariate Statistics (5th edition). Boston:
Pearson Education, Inc. / Allyn and Bacon.
Additional Data sources:
ARWU Scores http://www.shanghairanking.com/ARWU2011.html
Nobel laureates http://nobelprize.org/
Fields Medals http://www.mathunion.org/index.php?id=prizewinners
HiCi http://www.isihighlycited.com
N&S http://www.webofknowledge.com
PUB http://www.webofknowledge.com