BMC Medical Research Methodology
Open Access Correspondence
Assessing the impact of biomedical research in academic
institutions of disparate sizes
Vana Sypsa and Angelos Hatzakis*
Address: Department of Hygiene and Epidemiology, Athens University Medical School, Athens, Greece
Email: Vana Sypsa - vsipsa@cc.uoa.gr; Angelos Hatzakis* - ahatzak@med.uoa.gr
* Corresponding author
Abstract
Background: The evaluation of academic research performance is nowadays a priority issue. Bibliometric
indicators such as the number of publications, total citation counts and h-index are an indispensable tool
in this task but their inherent association with the size of the research output may result in rewarding high
production when evaluating institutions of disparate sizes. The aim of this study is to propose an indicator
that may facilitate the comparison of institutions of disparate sizes.
Methods: The Modified Impact Index (MII) was defined as the ratio of the observed h-index (h) of an institution over the h-index anticipated for that institution on average, given the number of publications (N) it produces, i.e. $\mathrm{MII} = h/(10^{\alpha} N^{\beta})$ (α and β denote the intercept and the slope, respectively, of the line describing the dependence of the h-index on the number of publications in log10 scale). MII values higher than 1 indicate that an institution performs better than the average, in terms of its h-index. Data on scientific papers published during 2002–2006 and within 36 medical fields for 219 Academic Medical Institutions from 16 European countries were used to estimate α and β and to calculate the MII of their total and field-specific production.
Results: From our biomedical research data, the slope β governing the dependence of the h-index on the number of publications in biomedical research was found to be similar to that estimated in other disciplines (≈0.4). The MII was positively associated with the average number of citations/publication (r = 0.653, p < 0.001), the h-index (r = 0.213, p = 0.002) and the number of publications with ≥ 100 citations (r = 0.211, p = 0.004), but not with the number of publications (r = -0.020, p = 0.765). It was the indicator most highly associated with the share of country-specific government budget appropriations or outlays for research and development as % of GDP in 2004 (r = 0.229), followed by the average number of citations/publication (r = 0.153), whereas the corresponding correlation coefficient for the h-index was close to 0 (r = 0.029). The MII was calculated for the first 10 top-ranked European universities in life sciences and biomedicine, as provided by the Times Higher Education ranking system, and their total and field-specific performance was compared.
Conclusion: The MII should complement the use of h-index when comparing the research output of
institutions of disparate sizes. It has a conceptual interpretation and, with the data provided here, can be
computed for the total research output as well as for field-specific publication sets of institutions in
biomedicine.
Published: 29 May 2009
BMC Medical Research Methodology 2009, 9:33 doi:10.1186/1471-2288-9-33
Received: 12 September 2008
Accepted: 29 May 2009
This article is available from: http://www.biomedcentral.com/1471-2288/9/33
© 2009 Sypsa and Hatzakis; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background
Bibliometric indices are an indispensable tool in evaluat-
ing the research output of individuals and institutions.
Recently, novel indicators have been proposed with the
aim to overcome deficiencies of the "traditional" biblio-
metric indices (e.g. number of publications, total citation
count, average number of citations per publication) and
to combine more efficiently information on both the
quantity and the quality of the research output [1-4]. The h-index is the best-known example of such an indicator [1] and is now routinely provided by Thomson Scientific Web of Science and other bibliometric databases. This indicator is defined as the number h of papers of an individual or an institution with a number of citations higher than or equal to h. As a result, it combines information on both the number of papers and the number of citations. However,
number of papers and the number of citations. However,
due to its inherent association with the size of the research
output it may result in rewarding institutions with high
production [2]. Thus, when comparing institutions, a
proper calibration of the h-index for the size of the output
may provide additional information.
Recently, it has been shown that when evaluating sets of publications ranging from several hundreds to $10^5$ papers,
the dependence of the h-index on the size of the set is
characterised by a "universal" growth rate [2]. This was
shown for interdisciplinary, mechanics and materials sci-
ence data [2] as well as for nonbiomedical research data
[5]. Thus, the h-index can be decomposed into the prod-
uct of a factor depending on the population size and of an
impact index. This impact index can be used to compare the research output of institutions with disparate numbers of publications. However, like most bibliometric indicators,
the impact index of an institution is not informative on its
own, unless it is compared to the corresponding indices of
other institutions. Furthermore, Molinari and Molinari
[2] have provided parameter estimates to calculate this
index only for a large number of papers and therefore, it
cannot be extended to assess the impact in e.g. specific
fields where the sets of publications range on a much
lower scale.
In the present study we aim to extend the interpretation of
the h-index by proposing a size-corrected, h-index based
indicator (Modified Impact Index – MII). The concept of
this index is to assess whether the h-index of an institution
deviates from the average h-index, as estimated for a par-
ticular number of publications. MII shares all the merits of
the impact index. Additionally, we will show that it has a
more informative numerical interpretation and, with the
data that we will provide in the following sections, it may
be used also in the case of smaller publication sets. We
will illustrate the use of this index in biomedical research
and explore its application within specific biomedical dis-
ciplines.
Methods
The Academic Medical Institutions located in 16 Euro-
pean countries (Austria, Belgium, Denmark, Finland,
France, Germany, Greece, Ireland, Italy, Netherlands,
Norway, Portugal, Spain, Sweden, Switzerland, United
Kingdom) were identified from the database of medical
schools provided by the Institute for International Medi-
cal Education [6]. Once the final list of 219 institutions
was compiled, all publications affiliated to the corre-
sponding universities (excluding meeting abstracts) and
classified into any of the 36 pre-specified medical subjects
(Table 1) were identified using Thomson Scientific Web of
Science (WoS). The number of papers published during
2002–2006 and the corresponding h-index have been
recorded for each institution. Two databases have been
constructed; one with data on all publications within the
36 medical fields and a second with data on publications
from each medical field separately. The intercept α and slope β of the line describing the dependence of the h-index on the number of publications (log10 scale) were obtained through least-squares estimation.
The impact index of each institution was calculated as $h_m = h/N^{\beta}$, where h: h-index and N: number of publications. As Molinari and Molinari have shown in their paper [2], the slope β of 0.4 estimated when accumulating data on the h-index over time is similar to the slope of the regression line obtained from cross-sectional data (e.g. in their paper: h-index per country as calculated in 2006 vs. the corresponding number of publications). Thus, we used the latter approach and estimated the impact index of papers published within 2002–2006 using the slope β obtained from our data on 219 institutions.
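For readers who wish to reproduce this estimation step, the following Python sketch is our own illustration, not code from the paper; the institution-level (N, h) pairs shown are hypothetical. It fits the log-log regression by least squares and computes each institution's MII from the fitted parameters.

```python
import numpy as np

# Hypothetical (number of publications, h-index) pairs for a few institutions;
# the paper's actual fit used 219 European Academic Medical Institutions.
N = np.array([350, 1200, 2400, 5600, 10600])
h = np.array([18, 35, 55, 96, 118])

# Least-squares fit of log10(h) = alpha + beta * log10(N), as in equation (1).
beta, alpha = np.polyfit(np.log10(N), np.log10(h), 1)

# MII = h / (10**alpha * N**beta): the observed h-index over the average
# h-index expected for an institution producing N publications.
mii = h / (10 ** alpha * N ** beta)

for n_i, h_i, m_i in zip(N, h, mii):
    print(f"N={n_i:6d}  h={h_i:3d}  MII={m_i:.3f}")
```

With the full dataset of 219 institutions, this fit yields the values reported in the Results below (α = 0.207, β = 0.445).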
To illustrate our findings, we used the rankings provided
by Times Higher Education to select top-ranked European
universities in life sciences and biomedicine [7].
Results
Modified Impact Index (MII) in biomedical research
When the h-index of each institution was plotted against the corresponding number of papers from the 36 medical fields on a log-log plot, the resulting points were fitted by a regression line (Figure 1):

$\log_{10} h_i = \alpha + \beta \log_{10} N_i + \varepsilon_i \quad (1)$

where $h_i$ and $N_i$ are the h-index and the number of publications of the i-th institution, respectively, α and β the intercept and the slope of the regression line, and $\varepsilon_i$ the i-th residual. The estimated α and β were 0.207 and 0.445, respectively. The parameter β = 0.445 governing the dependence of the h-index on the number of publications in biomedical research was found to be similar to that estimated in other disciplines (≈0.4).
Table 1: List of 36 medical subjects included in the evaluation along with the estimated αs and βs (as obtained from data on publications of 219 European Academic Medical Institutions within 2002–2006) for the calculation of the modified impact index ($\mathrm{MII} = h/(10^{\alpha} N^{\beta})$, where h: h-index, N: number of publications)

No. Subject | Intercept (α) | Slope (β)
1 Allergy -0.033 0.668
2 Anatomy & Morphology -0.058 0.623
3 Anesthesiology -0.016 0.554
4 Cardiac & Cardiovascular Systems -0.004 0.600
5 Chemistry, Medicinal 0.067 0.563
6 Clinical Neurology 0.027 0.545
7 Critical Care Medicine 0.053 0.594
8 Dermatology -0.031 0.560
9 Emergency Medicine -0.016 0.498
10 Endocrinology & Metabolism 0.098 0.560
11 Gastroenterology & Hepatology 0.028 0.592
12 Geriatrics & Gerontology 0.049 0.558
13 Health Care Sciences & Services -0.022 0.538
14 Hematology 0.046 0.614
15 Immunology 0.162 0.528
16 Infectious Diseases 0.075 0.566
17 Medicine, General & Internal -0.124 0.644
18 Medicine, Research & Experimental -0.008 0.621
19 Obstetrics & Gynecology 0.040 0.521
20 Oncology 0.205 0.500
21 Ophthalmology -0.055 0.581
22 Orthopedics -0.053 0.555
23 Otorhinolaryngology 0.004 0.488
24 Pathology -0.042 0.621
25 Pediatrics -0.027 0.528
26 Peripheral Vascular Disease 0.022 0.616
27 Physiology 0.062 0.565
28 Psychiatry -0.012 0.566
29 Public, Environmental & Occupational Health 0.020 0.535
30 Radiology, Nuclear Medicine & Medical Imaging 0.001 0.560
31 Respiratory System -0.025 0.607
32 Rheumatology -0.006 0.638
33 Surgery 0.070 0.490
34 Transplantation 0.006 0.572
35 Tropical Medicine 0.003 0.595
36 Urology & Nephrology -0.012 0.594
The number of publications ranged from $10^2$ to $10^4$ papers, with the exception of one institution with a very low number of publications. The exclusion of this institution did not alter the estimated slope. Our estimate for β in biomedical sciences was consistent among different countries (Figure 2).
The fitted regression line of equation (1) provides the average h-index for a particular number of publications. Thus, points above the regression line correspond to institutions with h-index higher than the average. Similarly, points below the regression line correspond to institutions with h-index lower than the average. The difference $\log_{10} h_i - (\alpha + \beta \log_{10} N_i)$ between the observed $\log_{10} h_i$ (denoted as circles in Figures 1 and 2) and the corresponding fitted value $\alpha + \beta \log_{10} N_i$ (superimposed regression line) expresses the deviation $\varepsilon_i$ of the observed h-index of the i-th institution from the average estimate for the number of publications it produces. In the original scale, this difference is transformed into the ratio $h_i / (10^{\alpha} N_i^{\beta})$. This ratio expresses how many times the observed h-index is higher than that estimated by the regression model based on the number of publications. Thus, a value higher than 1 indicates that the particular institution performs better in terms of h-index than would be expected for the number of publications it produces. Similarly, a value lower than 1 indicates that the particular institution performs worse in terms of h-index than would be expected for the number of publications it produces. The ratio $h / (10^{\alpha} N^{\beta})$ was found to be equivalent to the impact index $h_m = h/N^{\beta}$ proposed by Molinari and Molinari [2] multiplied by the constant $1/10^{\alpha}$ and was therefore named the Modified Impact Index (MII).
Figure 1: Log-log plot of h-index versus the total number of results found in 219 Medical schools from 16 European countries. The solid line indicates the fitted regression line; its slope is β = 0.445. (Axes: number of publications 2002–2006 and h-index, both on log10 scale.)
The variance of the MII can be computed as follows. In log10 scale, $\log_{10}(\mathrm{MII}) = \varepsilon_i$, with $\varepsilon_i \sim N(0, \sigma^2)$. In the original scale, $\mathrm{MII} = 10^{\varepsilon_i}$ and thus it follows the lognormal distribution. From standard theory, $\mathrm{Var}(\mathrm{MII}) = e^{(\ln 10)^2 \sigma^2}\left(e^{(\ln 10)^2 \sigma^2} - 1\right)$. Based on the data collected from the 219 European Medical Institutions, Var(MII) was estimated to be equal to 0.013475.
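A minimal sketch of this variance calculation follows (our illustration; the residual standard deviation σ ≈ 0.05 used below is back-calculated from the reported Var(MII) and is an assumption, not a value stated in the paper):

```python
import math

def var_mii(sigma):
    """Var(MII) for MII = 10**eps with eps ~ N(0, sigma**2), log10 residuals.

    ln(MII) = eps * ln(10) ~ N(0, (sigma * ln 10)**2), so MII is lognormal and
    Var(MII) = exp(s2) * (exp(s2) - 1), where s2 = (sigma * ln 10)**2.
    """
    s2 = (sigma * math.log(10)) ** 2
    return math.exp(s2) * (math.exp(s2) - 1)

# A log10-scale residual SD of roughly 0.05 approximately reproduces the
# reported Var(MII) = 0.013475.
print(var_mii(0.05))  # ~0.0135
```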
We explored the validity of the MII by examining its association with other indices. The MII was positively associated with the average number of citations/publication (Spearman's r = 0.653, p < 0.001), the h-index (r = 0.213, p = 0.002) and the number of publications with ≥ 100 citations (r = 0.211, p = 0.004), but not with the number of publications (r = -0.020, p = 0.765).
Figure 2: Log-log plots of h-index versus the total number of results by country (including countries with more than 10 Academic Medical Institutions). Country-specific slopes of the fitted regression lines: France β = 0.477, Germany β = 0.439, Italy β = 0.431, Spain β = 0.434, UK β = 0.433. (Axes: number of publications 2002–2006 and h-index, both on log10 scale.)
We further examined whether the country-specific modified impact indices (calculated as the median of the MIIs of the institutions for each country) correlated with the share of government budget appropriations or outlays for research and development (GBAORD) as % of GDP in 2004 (GBAORD are a way of measuring government support to R&D activities) [8]. The MII was the most highly associated indicator (r = 0.229), followed by the average number of citations/publication (r = 0.153), whereas the corresponding correlation coefficient for the h-index was close to 0 (r = 0.029).
MII in specific medical subfields
When evaluating the MII or the impact index of an institution within a specific medical field, the value of the slope β may not necessarily equal 0.445, as the evaluated sets of publications lie on a much lower scale. The evaluated sets of publications per field for the years 2002–2006 ranged from fewer than 10 papers to several hundreds (on average up to 500 papers), as opposed to the range of $10^2$–$10^4$ papers for all 36 subjects combined. Some fields were characterized by a small range in the number of publications (e.g. Anatomy, with a range over the 219 institutions of up to 76 papers), while others reached thousands (e.g. Medicine, General & Internal, with a range of up to 1063 papers).
We used the database with the number of publications and corresponding h-indices per subfield. Plots similar to Figure 1 were constructed for each one of the 36 medical fields and the parameters α and β were estimated (Table 1). These parameters can be used to estimate the field-specific MII of an institution or a department. The field-specific slopes had a mean (SD) of 0.571 (0.045) and ranged from 0.488 (subfield: Otorhinolaryngology) to 0.668 (subfield: Allergy). There was a slight negative association between the slopes and the number of publications per field (i.e. higher slopes in subfields with few publications), which was not statistically significant (r = -0.126, p = 0.465).
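As an illustration of how the Table 1 parameters are applied in practice, the short sketch below uses a small excerpt of Table 1 entries; the Uppsala figures (N = 223, h = 40 in "Medicine, General & Internal") are those reported for that university later in the paper (Table 3).

```python
# Field-specific (alpha, beta) pairs, a small excerpt copied from Table 1.
FIELD_PARAMS = {
    "Medicine, General & Internal": (-0.124, 0.644),
    "Cardiac & Cardiovascular Systems": (-0.004, 0.600),
    "Infectious Diseases": (0.075, 0.566),
}

def field_mii(field, n_pubs, h_index):
    """Field-specific MII = h / (10**alpha * N**beta)."""
    alpha, beta = FIELD_PARAMS[field]
    return h_index / (10 ** alpha * n_pubs ** beta)

# Uppsala in "Medicine, General & Internal": 223 papers, h-index 40.
print(round(field_mii("Medicine, General & Internal", 223, 40), 3))  # ~1.636
```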
MII for selected top-ranked universities
To illustrate our findings, we compared the first 10 top-ranked European universities in life sciences and biomedicine, as provided by Times Higher Education [7]. In Table 2, the number of publications, h-index, impact index proposed by Molinari and Molinari [2] and MII are presented for all 36 medical fields (publication years: 2002–2006). All universities had an MII higher than 1 (range: 1.027–1.403), i.e. their performance based on the h-index was higher than or around that expected based on the number of papers they produced. In terms of h-index, the two most productive institutions (Imperial College and UCL) occupied the first two places.
Table 2: Number of publications (N), h-index, impact index ($h_m = h/N^{\beta}$) and modified impact index ($\mathrm{MII} = h/(10^{\alpha} N^{\beta})$) for the top 10 European universities in life sciences and biomedicine according to Times Higher Education¹ (based on publications occurring during 2002–2006 from 36 medical fields).

University | Country | Rank according to THE¹ | N | h-index | Impact index | Modified impact index
Oxford UK 2 5578 105 2.259 1.403
Edinburgh UK 5 3318 77 2.088 1.296
Cambridge UK 1 5605 96 2.061 1.280
Bristol UK 9 3309 72 1.955 1.214
Imperial UK 3 10624 118 1.906 1.184
Uppsala Sweden 7 4073 77 1.906 1.183
Heidelberg Germany 8 5785 86 1.821 1.131
Louis Pasteur Strasbourg I France 10 1315 43 1.760 1.093
UCL UK 4 12662 115 1.718 1.067
King's College UK 6 8980 95 1.654 1.027

MII values > 1 indicate that an institution performs better in terms of its h-index than would be expected, given the number of publications it produces.
¹ Times Higher Education – QS World University Rankings 2007 – Life Sciences and Biomedicine (rank among European universities)
According to MII, Oxford ranked first (1.403), followed by Edinburgh (1.296) and Cambridge (1.280).

Greater heterogeneity was observed in the estimated MIIs for selected subfields, e.g. in "Cardiac and Cardiovascular Systems", where the MII ranged from 0.842 to 1.720 (Table 3). Uppsala, Cambridge and Edinburgh ranked first according to MII in the subfields "Medicine, General and Internal", "Cardiac and Cardiovascular Systems" and "Infectious Diseases", respectively.
Discussion
The h-index is a valuable bibliometric indicator that com-
bines information on both the quantity and the quality of
the research output. Moreover, the findings of a recent
paper indicate that it is better in predicting researchers'
future scientific achievement than other indicators (total
citation count, average number of citations per paper,
total paper count) [9]. However, the h-index has various
shortcomings, in particular when comparing individual
scientists, discussed in detail by others [10-13]; it cannot
differentiate between active and inactive scientists, it
depends on the scientific age, it is affected by different dis-
cipline-dependent citation patterns etc. Numerous vari-
ants have been proposed that aim to overcome some of
these disadvantages. For example, the m quotient allows comparison across different lengths of scientific career [1], the g
and h(2) indices give more weight to highly cited papers
[14,15], the impact index hm provides an evaluation of the
impact of the production [2] and the contemporary h-
index [13] gives more weight to newer articles.
The proposed index deals with the fact that the inherent
association of the h-index with the size of the research
output may result in rewarding high production when
evaluating institutions of disparate sizes. By definition,
the h-index cannot exceed the number of publications.
Thus, as noted by Glänzel [12], it "puts small but highly-cited paper sets at a disadvantage ('small is not beautiful')". An institution with a moderate-size production will not reach the h-index of a very large institution even if its publications are of similar or even better quality, simply because its total production may be even less than h.
An application of the proposed modified impact index was presented using biomedical data. In biomedical research, the parameter β that characterises the dependence of the h-index on the number of publications was approximately 0.4 and similar to that estimated in other disciplines (interdisciplinary, mechanics and materials science data [2], nonbiomedical research data [5] and chemical research data [16]). These estimates were based on publications ranging from a few hundreds to several thousands. When the number of publications ranges from a few papers up to approximately 500, as e.g. when evaluating the research output within specific subfields, the parameter β was higher than the overall estimate of 0.445. This was also noted by Molinari and Molinari [2], who have shown that the slope of the line describing the dependence of the h-index on the number of publications is higher when the number of evaluated papers is small. In our biomedical data, the field-specific slopes ranged from 0.488 to 0.668. For example, in the field "Medicine, General & Internal", Uppsala had 223 papers with an h-index of 40, so using the appropriate field-specific values for the intercept (α = -0.124) and slope (β = 0.644) the corresponding MII was calculated to be $40/(10^{-0.124} \times 223^{0.644}) = 1.636$.
The proposed index correlated with the share of government budget appropriations or outlays for research and development as % of GDP in 2004 (r = 0.229), whereas the corresponding correlation coefficient for the h-index was close to 0. Additionally, it was positively associated with the average number of citations/publication, the h-index and the number of highly cited papers. Furthermore, for a given β the MII provides the same ranking as the impact index proposed by Molinari and Molinari [2]. Actually, the estimates of β provided here can be used to calculate the impact index of institutions in biomedical research and within specific biomedical disciplines. Both indices have the advantage that they can be well estimated by using a representative subset of the publications rather than the total set of publications produced by an institution [2]. The advantage of the MII over the impact index is its conceptual interpretation.
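A quick numerical check of this equivalence, using the Oxford row of Table 2 and the overall estimates α = 0.207 and β = 0.445 (a sketch for verification only):

```python
alpha, beta = 0.207, 0.445      # overall estimates from the 219 institutions
n_pubs, h = 5578, 105           # Oxford, 36 medical fields, 2002-2006 (Table 2)

h_m = h / n_pubs ** beta                    # impact index of Molinari & Molinari
mii = h / (10 ** alpha * n_pubs ** beta)    # modified impact index

print(round(h_m, 3), round(mii, 3))         # ~2.259 and ~1.403, as in Table 2
assert abs(mii - h_m / 10 ** alpha) < 1e-12 # the two differ by the constant 1/10**alpha
```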
The estimates of the αs and βs were based on data from European Medical Institutions. In order to assess whether these estimates can be used to calculate the MII for non-European institutions too, we performed a preliminary analysis to check whether the slope based on data from top-ranked US universities is similar to that obtained from the top-ranked European ones. We observed that these slopes were similar unless universities with a number of publications outside the evaluated range were included (e.g. Harvard and Johns Hopkins). Thus, we advocate that the estimates provided here can be used to calculate the MII for non-European institutions, as long as their number of publications falls within the evaluated range ($10^2$–$10^4$ papers for the 36 fields).
Table 3: Field-specific impact index and modified impact index for the top 10 European universities in life sciences and biomedicine according to Times Higher Education [7]

University | Country | N | h-index | Impact index | MII

Medicine, General & Internal
Uppsala Sweden 223 40 1.230 1.636
Cambridge UK 428 50 1.010 1.344
Oxford UK 601 60 0.974 1.296
Bristol UK 434 47 0.941 1.252
Imperial UK 1041 74 0.843 1.122
Edinburgh UK 391 38 0.814 1.082
Heidelberg Germany 204 24 0.781 1.039
UCL UK 1385 76 0.721 0.959
Louis Pasteur Strasbourg I France 98 12 0.626 0.833
King's College UK 1063 54 0.607 0.808

Cardiac and Cardiovascular Systems
Cambridge UK 133 32 1.704 1.720
Edinburgh UK 101 25 1.570 1.585
Louis Pasteur Strasbourg I France 51 14 1.325 1.337
Oxford UK 198 29 1.216 1.228
UCL UK 452 46 1.176 1.187
Uppsala Sweden 214 29 1.161 1.172
Bristol UK 159 23 1.100 1.111
Heidelberg Germany 323 31 0.970 0.979
King's College UK 452 35 0.895 0.903
Imperial UK 860 48 0.834 0.842

Infectious Diseases
Edinburgh UK 149 24 1.413 1.189
Oxford UK 238 31 1.400 1.178
Bristol UK 141 21 1.276 1.073
King's College UK 212 26 1.254 1.055
Uppsala Sweden 100 16 1.181 0.993
Cambridge UK 145 19 1.136 0.956
Heidelberg Germany 85 14 1.133 0.953
UCL UK 709 42 1.023 0.861
Louis Pasteur Strasbourg I France 49 9 0.994 0.837
Imperial UK 739 40 0.951 0.801
Bibliometric methods have been criticised due to technical and methodological problems generally encountered when they are employed to assess the research output of a university [17,18]. Furthermore, the bibliometric indices currently used appear to be related to the size of the research output and thus they probably tend to favour large institutions. The proposed index presents some clear advantages compared to existing bibliometric indices: it is not associated with the size of the publication output and thus can be used to compare institutions of disparate size, it has a conceptual interpretation (performance below or above the average) and it can be computed by using a representative subset of the publications rather than the total set of publications produced by an institution. However, its computation requires estimates for the αs and βs and thus is not as straightforward as in the case of the usual bibliometric indices. As mentioned before, the parameter β has a "universal" estimate of 0.4, independent of the discipline but dependent on the size of the publication set. As a result, the estimates for the αs and βs, e.g. those provided here for biomedicine, can be applied to compute the MII of an institution as long as the number of its publications falls within the evaluated range (e.g. $10^2$–$10^4$ papers in our case). Thus, it would not be safe to use them for outliers, i.e. for institutions with productivity outside the evaluated range.
Conclusion
In conclusion, there is a growing demand for transparent
and valid evaluation of universities but any ranking is
bound to give rise to controversy. The assessment of med-
ical research performance, in particular, is a challenging
task. Peer review, currently regarded as the gold standard of research evaluation, is usually not feasible for large-scale evaluations. For such purposes, we advocate the use of a combination of bibliometric indices that
will include an indicator not associated with the size of
the research output. The proposed modified impact index
is such an indicator that has a conceptual interpretation
and with the data provided here can be computed for
large as well as for small field-specific publication sets in
biomedicine.
Abbreviations
MII: Modified Impact Index
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
VS oversaw the data collection and advised on the search
strategy, analysed the data and co-wrote the first and sub-
sequent drafts. AH conceived of the study, advised on the
search strategy, oversaw data analysis and co-wrote the
first and subsequent drafts. All authors read and approved
the final manuscript.
Acknowledgements
We thank Maria Petrodaskalaki, MSc (Athens University Medical School,
Greece) and Alexandros Hatzakis, BSc (Athens University Medical School,
Greece) for their help in data collection.
References
1. Hirsch JE: An index to quantify an individual's scientific
research output. Proc Natl Acad Sci USA 2005, 102:16569-72.
2. Molinari JF, Molinari A: A new methodology for ranking scien-
tific institutions. Scientometrics 2008, 75:163-174.
3. Van Raan AFJ: The use of bibliometric analysis in research per-
formance assessment and monitoring of interdisciplinary
scientific developments. Technikfolgenabschätzung, Theorie und
Praxis 2003, 12:20-29.
4. Moed HF: Bibliometric Rankings of World Universities. CWTS
Report 2006-01 [http://www.cwts.nl/hm/bibl_rnk_wrld_univ_full.pdf].
5. Kinney AL: National scientific facilities and their science
impact on nonbiomedical research. Proc Natl Acad Sci USA 2007,
104:17943-7.
6. Institute for International Medical Education: [http://www.iime.org/iime.htm]. Retrieved on 19 April 2007.
7. Times Higher Education: World University Rankings 2007. The top 50 in Life Sciences and Biomedicine. 2007 [http://www.timeshighereducation.co.uk/hybrid.asp?typeCode=147].
8. Eurostat: [http://epp.eurostat.ec.europa.eu].
9. Hirsch JE: Does the H index have predictive power? Proc Natl
Acad Sci USA 2007, 104:19193-8.
10. Bornmann L, Daniel HD: What do we know about the h index?
Journal of the American Society for Information Science and Technology
2007, 58:1381-1385.
11. Moed HF: Hirsch Index is a creative and appealing construct but be cautious when using it to evaluate individual scholars. [http://www.cwts.nl/hm/Comments_on_Hirsch_Index_2005_12_16.pdf].
12. Glänzel W: On the opportunities and limitations of the H-
index. Science Focus 2006, 1:10-11.
13. Sidiropoulos A, Katsaros C, Manolopoulos Y: Generalized h-index
for disclosing latent facts in citation networks. Scientometrics
2007, 72:253-280.
14. Egghe L: An improvement of the H-index: the G-index. ISSI
Newsletter 2006, 2:8-9.
15. Kosmulski M: A new Hirsch-type index saves time and works
equally well as the original h-index. ISSI Newsletter 2006, 2:4-6.
16. van Raan AFJ: Comparison of the Hirsch-index with standard
bibliometric indicators and with peer judgement for 147
chemistry research groups. Scientometrics 2006, 67:491-502.
17. Van Raan AFJ: Fatal Attraction: Conceptual and methodologi-
cal problems in the ranking of universities by bibliometric
methods. Scientometrics 2005, 62:133-143.
18. van Raan AFJ: Challenges in ranking of universities. In Proceed-
ings of the First International Conference on World Class Universities Edited
by: Liu N. Shanghai: Shanghai Jiao Tong University Press; 2005.
Pre-publication history
The pre-publication history for this paper can be accessed
here:
http://www.biomedcentral.com/1471-2288/9/33/prepub
Jorge Hirsch (2005a, 2005b) recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the sci- entific community. The claim that the h index in a single number provides a good representation of the scien- tific lifetime achievement of a scientist as well as the (supposed) simple calculation of the h index using com- mon literature databases lead to the danger of improper use of the index. We describe the advantages and disad- vantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.