Archives of Environmental & Occupational Health (2015) 70, 67–69
Taylor & Francis Group, LLC
ISSN: 1933-8244 print / 2154-4700 online
Emerging Topics in EOH Research
“Platinum H”: Refining the H-Index to More Realistically Assess Career Trajectory and Scientific Publications
The ongoing drive for accountability in research has led authorities to increasingly assess research performance, most often by using a single index to allow comparisons and rankings.1 These measures have gained increasing importance in budgetary decisions, as indicator-supported scores are more easily compared than peer opinion and are usually faster to produce.2 Although on the surface it may appear to be a simple concept, defining a quality metric to assess research performance is neither an easy nor a straightforward task.3 In the field of Environmental and Occupational Health (EOH), as elsewhere, there are numerous options from which one can choose. A recent review published in the journal Scientometrics, for example, reported that there are now over 100 bibliometric indicators for assessing research performance at the author level.4 Many of these are based on a measure that celebrates its 10th anniversary this year.
Most readers will be aware of the H-Index, which was first proposed by Jorge Hirsch in 2005 as a method to quantify an individual's scientific research output by considering both their citations and publications.5 Hirsch's index was proposed as a favorable alternative to many existing bibliometric measures of individual performance, such as raw publication output, article citation counts, and total citations,6 many of which were relatively simple calculations and were often the norm up to that time.7 Several inherent advantages of the H-Index were recognized early on, chief among them being that it combines research productivity with impact, is relatively insensitive to extreme values, and is difficult to artificially inflate.8 Another reason for its success is that the publication count and maximum citation rate of top scientists are usually of the same order of magnitude.9
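Hirsch's definition is easy to state operationally: an author has index h if h of their papers have attracted at least h citations each. A minimal sketch in Python (the function name is mine; the input is simply a list of per-article citation counts):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # rank papers by citation count
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still lies on or above the h-line
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the result depends only on the rank-ordered counts, which is precisely why the measure is insensitive to extreme values: raising the top paper from 10 to 1,000 citations would leave h unchanged.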
The H-Index is not without controversy, however, with at least 50 variants having been proposed to correct, or at least consider, some of its alleged disadvantages.6 Hirsch's original article has now been cited over 800 times10 and it attracts almost 100 new citations each year,11 suggesting that this measure and its associated concepts are well known across the scientific community. On the other hand, and somewhat paradoxically, this plethora of H-Index variants suggests that there is little agreement on the best possible way to refine or improve Hirsch's original concept. Furthermore, there is also a dearth of information regarding the most appropriate measure for citation-based assessment in EOH.12
Accessing bibliometric databases and undertaking at least basic analyses of citation data have become more common among researchers, administrators, and evaluation bodies in recent years. Expansion of the main bibliometric databases, advances in computing power, and improvements in user interfaces have all facilitated increasing access to this type of data.13 Most scientific and academic organizations now have access to these resources, making it relatively straightforward to undertake various citation-based assessments of journals, groups, and individuals. Such analyses usually include a few key facets, such as the number of articles published by a particular researcher, the number of citations made to those articles, and the raw citation ratio of those articles and that researcher. Bibliometric databases also make it easy to calculate H-Index scores for individuals, a practice that is being increasingly adopted worldwide. In 2011, for example, the Italian National Agency for the Evaluation of Universities and Research Institutes identified 3 core aspects when considering the performance of individual academics: (1) number of publications, (2) number of citations, and (3) their H-Index.14
Utilizing simple H-Index scores to rank individuals is not without inherent limitations, however. One major issue is the fact that the H-Index does not (and cannot) capture complete information on the citation distribution of an author's entire publication list.15 Consider, for example, 2 scientists with identical H-Index scores of, say, 10. Each person must have published at least 10 articles that attracted at least 10 citations. However, one hypothetical author could have published an additional 90 articles that attracted 9 citations each, and this would not affect his or her score. Similarly, one author could have published 10 articles with 10 citations each, whereas the other published exactly 10 articles that attracted 100 citations each. Their H-Index scores would still be identical at 10, but would the performance of these researchers be considered equivalent?15 At a structural level, this occurs because an individual's H-Index score is derived solely by considering the intersection between an author's citations and their rank-ordered publications, otherwise known as the h2 or h-core component.16 Although the H-Index is no doubt a simple and elegant solution, an individual's entire citation distribution actually comprises 3 separate areas, commonly known as the h-core (denoted by a shaded box), the excess, and the h-tail citations,17 as indicated in Figure 1.
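The limitation described above is easy to demonstrate numerically. A short sketch (illustrative data only) compares the two hypothetical citation records from the example, which share h = 10 yet differ greatly in total citations:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max([0] + [rank for rank, c in enumerate(ranked, 1) if c >= rank])

# Author A: 10 articles with 10 citations each, plus 90 articles with 9 each.
author_a = [10] * 10 + [9] * 90
# Author B: exactly 10 articles with 100 citations each.
author_b = [100] * 10

print(h_index(author_a), sum(author_a))  # → 10 910
print(h_index(author_b), sum(author_b))  # → 10 1000
```

Author A's 90 additional articles sit in the h-tail and Author B's 900 additional citations sit in the excess region; neither is visible in the score of 10.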
As Figure 1 suggests, there will always be a certain amount of "wasted effort" that a researcher contributes by publishing articles that ultimately remain "invisible" in their H-Index assessment. In the modern era of increasingly scarce resources and the drive towards maximizing cost-effective research, one of the most important considerations is to promote an optimal balance between effort and reward.

Fig. 1. Components of the H-Index curve. (Adapted from refer…)

By carefully considering
the angle of the curve that intersects an individual's publication output versus their ranked citations, the H-Index provides a novel way for research managers to assess how close individual researchers are to achieving an optimal return for effort in their publication activities. As indicated by the dotted line in Figure 1, an optimal model (where an equal number of articles are being published that are each attracting many citations) would result in a 45° angle for an individual's H-Index curve. This would not only indicate an optimal reward-for-effort ratio among the "H-Index-assessed" unit, but also offer ideal improvement targets for those whose current citation profiles incorporate less-than-ideal curves. In the same way, it may also help to identify researchers who are on an upward trajectory, particularly in the early career stages, where their citation profiles and raw H-Index scores may not yet be large.
The concept itself draws on the work of various mathematical evaluations and proposed solutions for considering excess citations within the H-Index system.15–20 Theoretical and real-world examples of actual H-Index curves have been described elsewhere,15,18 with Zhang,20 for example, proposing 3 main angles as follows: 60° indicating a perfectionist (an author attracting many citations but not publishing many articles), 30° indicating a mass producer (an author publishing many articles but not attracting many citations), and 45° indicating an author who is somewhere in between.20 An example of these different curves using real-world data can be found elsewhere.15 My revised metric also follows on from the suggestion by others1 that combining the information of single H-Index scores with other bibliometric measures can significantly improve the validity of results.
In proposing any kind of revised metric for EOH and elsewhere, a key consideration is that it be appropriate and (at least scientifically) acceptable for those among whom it is being applied. It should, ideally, help distinguish a "true" measure of performance from an H-Index that inadvertently misses the excess e2 and h2 citations, as previously described.19 The new method must be seen to be transparent and easily understandable (ideally, reporting its output as a single, simple number), it should incorporate some kind of internationally accepted referent (beyond that of simple bibliometrics), and its implementation must not be too expensive or time-consuming for the organization that intends to use it.
To help devise a more meaningful individual score that can be compared and ranked against appropriate peers, I propose the following simple calculation for use in EOH, which I have tentatively named Platinum H. In this calculation (Figure 2), H represents the individual's current H-Index score, CL is their career length (a value calculated by subtracting the year of their first publication from the current year), Ct represents the total citations they have received for all of their publications combined, and At is the total number of articles they have published, all within the time frame CL.
The first aspect of the calculation divides an individual's H-Index by their career length in years (or a proxy thereof) to help adjust for the fact that citations tend to accrue over time, and an individual's H-Index often continues to rise even if they stop publishing. Hirsch recognized this limitation in his original article and proposed a value he termed m, which would be an individual's H-Index divided by the time elapsed since the publication of their first article (which Hirsch termed n). As such, it was considered appropriate to utilize a similar concept in the current model to help adjust for relative career length, albeit that I chose to term the "Career Length" value simply CL, rather than n.
Fig. 2. The “Platinum H” calculation.
The second aspect of the calculation focuses on an individual's raw citation ratio, that being the ratio between the total number of citations received (Ct) and the total number of articles published (At). This mean citation rate per article can also be described as the author's citation density.9 The concept of citation ratios as a bibliometric tool is by no means groundbreaking, albeit that many of its early uses focused on journal, rather than individual, assessment.21 Examining the relationship between the number of citations received versus the number of articles published in a particular journal has long been a cornerstone
of bibliometric calculations. One of the more well-known
examples was published by Raisig in 1960 with what he termed
the “index of Research Potential Realized” (RPR Index).22
The calculation of average citation ratios for individual
researchers naturally followed on from this. An early study of
Nobel Prize–winning physicists,23 for example, reported their
average number of citations was around 10 times higher than
that of non–Nobel Prize–winning scientists.
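The two aspects of the Platinum H calculation described above can be sketched in code. Because Figure 2 itself is not reproduced in this text, the way the two components are combined below (a simple product of the career-adjusted H-Index and the citation density) is my illustrative assumption rather than the published formula; the variable names follow the definitions given earlier:

```python
def platinum_h(h, first_pub_year, current_year, total_citations, total_articles):
    """Sketch of the 'Platinum H' components.

    h               -- current H-Index score (H)
    first_pub_year  -- year of the author's first publication
    current_year    -- year of assessment
    total_citations -- citations to all publications combined (Ct)
    total_articles  -- total number of articles published (At)

    Combining the two components as a product is an assumption made
    for illustration; the published combination appears in Figure 2.
    """
    cl = current_year - first_pub_year  # career length, CL
    career_adjusted_h = h / cl          # Hirsch's m-quotient, H / CL
    citation_density = total_citations / total_articles  # Ct / At
    return career_adjusted_h * citation_density

# A hypothetical mid-career author: h = 15 after a 10-year career,
# with 900 citations spread across 60 articles.
print(platinum_h(15, 2005, 2015, 900, 60))  # → 22.5
```

Whatever the exact combination, both components pull in the intended directions: a long career with a static H-Index lowers the first term, while a large tail of rarely cited articles lowers the second.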
In recent years the assessment of published articles and their associated citation ratios has comprised a key facet of the Excellence in Research for Australia (ERA)24 and the United Kingdom's Research Assessment Exercise (RAE).25 A recent study of articles published by Canadian academics has also utilized citation ratios.26 In fact, there are now hardly any research evaluation measures that do not count publications and citations,2 suggesting that these 2 components clearly form the cornerstone of both "traditional" and contemporary bibliometric analysis.13 Similarly, the relationship between an individual's citation density and their H-Index is not new, either.27
As we reflect on the development of yet another variant/iteration/improvement of the H-Index, it is somewhat ironic that Hirsch, a physicist who had never published an article in the field of bibliometrics, would develop an indicator that ultimately sparked a whole new research front in citation-based assessment.11 The involvement of physicists may not be entirely surprising, however, as one of the founding fathers of scientometrics, Derek de Solla Price, was initially trained as a physicist and later changed his specialty to the history of science.28 One of the first formal studies of reward systems in science examined university physicists,23 whereas one of the likely forerunners to the journal impact factor was a citation-based study of the published literature in physics.29 Physicists may therefore deserve a greater share of credit for the development of metrics-based research assessment than previously recognized.
Derek R. Smith
Archives of Environmental & Occupational Health
References
1. Panaretos J, Malesios C. Assessing scientific research performance and impact with single indices. Scientometrics. 2009;81:635–670.
2. Bornmann L, Leydesdorff L. Scientometrics in a changing research landscape. EMBO Rep. 2014;15:1228–1232.
3. Smith DR. Assessing productivity among university academics and scientific researchers. Arch Environ Occup Health. 2015;70:1–3.
4. Wildgaard L, Schneider J, Larsen B. A review of the characteristics of 108 author-level bibliometric indicators. Scientometrics. 2014;101:125–158.
5. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–16572.
6. Bornmann L, Mutz R, Hug SE, Daniel H-D. A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. J Informetrics. 2011;5:346–359.
7. Smith DR. Impact factors, scientometrics and the history of citation-based research. Scientometrics. 2012;92:419–427.
8. Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68:179–189.
9. Schubert A. Rescaling the h-index. Scientometrics.
10. Franco G. Research evaluation and competition for academic positions in occupational medicine. Arch Environ Occup Health.
11. Bornmann L. H-Index research in scientometrics: a summary. J Informetrics. 2014;8:749–750.
12. Smith DR. Historical development of the journal impact factor and its relevance for occupational health. Ind Health. 2007;45:
13. Smith DR. Highly cited articles in environmental and occupational health, 1919–1960. Arch Environ Occup Health.
14. Franco G. Scientific research of senior Italian academics of occupational medicine: a citation analysis of products published during the decade 2001–2010. Arch Environ Occup Health. 2015;70:110–
15. Bornmann L, Mutz R, Daniel H-D. The h index research output measurement: two approaches to enhance its accuracy. J Informetrics.
16. Ye F, Rousseau R. Probing the h-core: an investigation of the tail–core ratio for rank distributions. Scientometrics.
17. Zhang CT. A novel triangle mapping technique to study the h-index based citation distribution. Sci Rep. 2013;3:1023.
18. Gągolewski M, Grzegorzewski P. A geometric approach to the construction of scientific impact indices. Scientometrics. 2009;81:617–634.
19. Zhang CT. The e-Index, complementing the h-Index for excess citations. PLoS ONE. 2009;4:e5429.
20. Zhang CT. The h'-index, effectively improving the h-index based on the citation distribution. PLoS ONE. 2013;8:e59912.
21. Smith DR. Citation analysis and impact factor trends of 5 core journals in occupational medicine, 1975–1984. Arch Environ Occup Health.
22. Raisig LM. Mathematical evaluation of the scientific serial: improved bibliographic method offers new objectivity in selecting and abstracting the research journal. Science. 1960;131:1417–
23. Cole S, Cole JR. Scientific output and recognition: a study in the operation of the reward system in science. Am Sociol Rev.
24. Excellence in Research for Australia (ERA) Web page. Available at: http://www.arc.gov.au/era/. Accessed 30 January 2015.
25. UK Research Assessment Exercise (RAE) Web page. Available at: http://www.rae.ac.uk/. Accessed 30 January 2015.
26. Larivière V, Gingras Y. Averages of ratios vs. ratios of averages: an empirical analysis of four levels of aggregation. J Informetrics. 2011;5:392–399.
27. Glänzel W. On the h-index—a mathematical approach to a new measure of publication activity and citation impact. Scientometrics. 2006;67:315–321.
28. Crawford S. Derek John de Solla Price (1922–1983): the man and the contribution. Bull Med Libr Assoc. 1984;72:238–
29. Pinski G, Narin F. Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Inform Process Manage. 1976;12:297–312.