Archives of Environmental & Occupational Health (2015) 70, 67–69
Copyright © Taylor & Francis Group, LLC
ISSN: 1933-8244 print / 2154-4700 online
DOI: 10.1080/19338244.2015.1016833
Emerging Topics in EOH Research
“Platinum H”: Refining the H-Index to More Realistically
Assess Career Trajectory and Scientific Publications
The ongoing drive for accountability in research has led authorities to increasingly assess research performance, most often by using a single index to allow comparisons and rankings.1 These measures have gained increasing importance in budgetary decisions, as indicator-supported scores are more easily compared than peer opinion and are usually faster to produce.2 Although on the surface it may appear to be a simple concept, defining a quality metric to assess research performance is neither an easy nor a straightforward task.3 In the field of Environmental and Occupational Health (EOH), as elsewhere, there are numerous options from which one can choose. A recent review published in the journal Scientometrics, for example, reported that there are now over 100 bibliometric indicators for assessing research performance at the author level.4 Many of these are based on a measure that celebrates its 10th anniversary this year.
Most readers would be aware of the H-Index, which was first proposed by Jorge Hirsch in 2005 as a method to quantify an individual's scientific research output by considering both their citations and publications.5 Hirsch's index was proposed as a favorable alternative to many existing bibliometric measures of individual performance such as raw publication output, article citation counts, and total citations,6 many of which were relatively simple calculations and were often the norm up to that time.7 Several inherent advantages of the H-Index were recognized early on, chief among them being that it helps combine research productivity with impact, is relatively insensitive to extreme values, and is difficult to artificially inflate.8 Another reason for its success is that the publication count and maximum citation rate of top scientists are usually in the same order of magnitude.9
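Hirsch's definition is simple to state and to compute: an author has index h if h of their articles have attracted at least h citations each. A minimal sketch (the function name and example data are illustrative, not drawn from any particular bibliometric package):

```python
def h_index(citations):
    """Return the largest h such that at least h articles
    have at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # rank articles by citation count
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this article still clears the h threshold
        else:
            break
    return h

# Five articles with these citation counts give h = 3:
# three articles have at least 3 citations each.
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note that the loop stops at the first article whose citation count falls below its rank, which is exactly the intersection point on the rank-ordered citation curve discussed below.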
The H-Index is not without controversy, however, with at least 50 variants having been proposed to correct or at least consider some of its alleged disadvantages.6 Hirsch's original article has now been cited over 800 times10 and it attracts almost 100 new citations each year,11 suggesting that this measure and its associated concepts are well known across the scientific community. On the other hand, and somewhat paradoxically, this plethora of H-Index variants suggests that there is little agreement on the best possible way to refine or improve Hirsch's original concept. Furthermore, there is also a dearth of information regarding the most appropriate measure for citation-based assessment in EOH.12
Accessing bibliometric databases and undertaking at least basic analyses of citation data has become more common among researchers, administrators, and evaluation bodies in recent years. Expansion of the main bibliometric databases, advances in computing power, and improvements in the user interface have all facilitated increasing access to this type of data.13 Most scientific and academic organizations now have access to these resources, making it relatively straightforward to undertake various citation-based assessments of journals, groups, and individuals. Such analyses usually include a few key facets such as the number of articles published by a particular researcher, the number of citations made to those articles, and the raw citation ratio of those articles and that researcher. Bibliometric databases also make it easy to calculate H-Index scores for individuals, a practice that is being increasingly adopted worldwide. In 2011, for example, the Italian National Agency for the Evaluation of Universities and Research Institutes identified 3 core aspects when considering the performance of individual academics: (1) number of publications, (2) number of citations, and (3) their H-Index.14
Utilizing simple H-Index scores to rank individuals is not without inherent limitations, however. One major issue is the fact that the H-Index does not (and cannot) capture complete information on the citation distribution of an author's entire publication list.15 A theoretical example could occur among 2 scientists with identical H-Index scores of, let's say, 10. Each person must have published at least 10 articles that attracted at least 10 citations. However, one hypothetical author could have published an additional 90 articles that attracted 9 citations each and this would not affect his/her score. Similarly, one author could have published 10 articles with 10 citations each, whereas the other published exactly 10 articles that attracted 100 citations each. Their H-Index scores would still be identical at 10, but would the performance of these researchers be considered equivalent?15 At a structural level, this occurs because an individual's H-Index score is derived solely by considering the intersection between an author's citations and their rank-ordered publications, otherwise known as the h² or h-core component.16 Although the H-Index is no doubt a simple and elegant solution, an individual's entire citation distribution actually comprises 3 separate areas commonly known as the h-core (denoted by a shaded box), the excess, and the h-tail citations,17 as indicated in Figure 1.
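The two hypothetical authors, and the three regions of the citation distribution (h-core, excess, and h-tail), can be made concrete with a short sketch (a toy illustration of the decomposition, not an implementation from the cited papers):

```python
def h_decompose(citations):
    """Split a citation record into the h-index plus the three regions of
    the H-Index curve: the h-core (the h * h shaded box), the excess
    (citations above the box within the top h articles), and the tail
    (citations to articles ranked below h)."""
    ranked = sorted(citations, reverse=True)
    # For a descending list, cites >= rank holds exactly for the top h ranks.
    h = sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)
    core = h * h                              # the shaded h-squared box
    excess = sum(c - h for c in ranked[:h])   # citations above the box
    tail = sum(ranked[h:])                    # citations beyond rank h
    return h, core, excess, tail

# Two authors with identical H-Index scores of 10 but very different records:
author_a = [10] * 10 + [9] * 90   # 10 articles at 10 citations, plus 90 at 9
author_b = [100] * 10             # exactly 10 articles at 100 citations each

print(h_decompose(author_a))  # (10, 100, 0, 810)
print(h_decompose(author_b))  # (10, 100, 900, 0)
```

Both authors share an identical h-core of 100 citations, yet one has 810 citations hidden in the tail and the other 900 citations of invisible excess, which is precisely the information a raw H-Index score discards.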
As Figure 1 suggests, there will always be a certain amount of "wasted effort" that a researcher contributes by publishing articles that ultimately remain "invisible" in their H-Index assessment. In the modern era of increasingly scarce resources and the drive towards maximizing cost-effective research, one of the most important considerations is to promote an optimal
Fig. 1. Components of the H-Index curve. (Adapted from references 15–20.)
balance between effort and reward. By carefully considering the angle of the curve that intersects an individual's publication output versus their ranked citations, the H-Index provides a novel way for research managers to assess how close individual researchers are to achieving an optimal return for effort in their publication activities. As indicated by the dotted line in Figure 1, an optimal model (where an equal number of articles are being published that are each attracting many citations) would result in a 45° angle for an individual's H-Index curve. This would not only indicate the optimal reward-for-effort ratio among the "H-Index-assessed" unit, but also offer ideal improvement targets for those whose current citation profiles incorporate less-than-ideal curves. In the same way, it may also help to identify researchers who are on an upward trajectory, particularly in the early career stages where their citation profiles and raw H-Index scores may not yet be large or impressive.
The concept itself draws on the work of various mathematical evaluations and proposed solutions for considering excess citations within the H-Index system.15–20 Theoretical and real-world examples of actual H-Index curves have been described elsewhere,15,18 with Zhang,20 for example, proposing 3 main angles as follows: 60° indicating a perfectionist (an author attracting many citations but not publishing many articles), 30° indicating a mass producer (an author publishing many articles but not attracting many citations), or 45° indicating an author who is somewhere in between.20 An example of these different curves using real-world data can be found elsewhere.15 My revised metric also follows on from the suggestion by others1 that combining the information of single H-Index scores with other bibliometric measures can significantly improve the validity of results.
In proposing any kind of revised metric for EOH and elsewhere, a key consideration is that it be appropriate and (at least scientifically) acceptable for those among whom it is being applied. It should, ideally, help distinguish between a "true" measure of performance versus an H-Index that inadvertently misses the excess e² and h² citations, as previously described.19 The new method must be seen to be transparent and easily understandable (ideally, reporting its output as a single, simple number), it should incorporate some kind of internationally accepted referent (beyond that of simple bibliometrics), and its implementation must not be too expensive or time-consuming for the organization that intends to use it.
To help devise a more meaningful individual score that can be compared and ranked against appropriate peers, I propose the following simple calculation for use in EOH that I have tentatively named Platinum H. In this calculation (Figure 2), H represents the individual's current H-Index score, CL is their career length (a value calculated by subtracting the year of their first publication from the current year), Ct represents the total citations they have received for all of their publications combined, and At is the total number of articles they have published, all within the time frame CL.
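The formula itself appears in Figure 2; from the component definitions above, and assuming the two aspects described in the following paragraphs (the H-Index divided by career length, and the citation ratio) are combined by multiplication, a plausible rendering of the calculation is:

```latex
% Plausible rendering of the Figure 2 calculation, assuming the
% career-length-adjusted H-Index is multiplied by the citation density:
\[
\text{Platinum H} = \frac{H}{CL} \times \frac{C_t}{A_t}
\]
```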
The first aspect of the calculation divides an individual's H-Index by their career length in years (or a proxy thereof) to help adjust for the fact that citations tend to accrue over time and an individual's H-Index often continues to rise, even if they stop publishing. Hirsch recognized this limitation in his original article and proposed a value he termed m, which would be an individual's H-Index divided by the time elapsed since the publication of their first article (which Hirsch termed n). As such, it was considered appropriate to utilize a similar concept in the current model to help adjust for relative career length—albeit that I chose to term the "Career Length" value simply as CL, rather than n.
Fig. 2. The “Platinum H” calculation.
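Taken together with Figure 2, the score is straightforward to compute. A minimal sketch follows; the function name and worked numbers are illustrative, and the multiplicative combination of the career-length-adjusted H-Index with the citation density is assumed from the surrounding description:

```python
def platinum_h(h, first_pub_year, current_year, total_citations, total_articles):
    """Sketch of the 'Platinum H' score: the H-Index adjusted for career
    length (CL), multiplied by the author's citation density (Ct / At)."""
    cl = current_year - first_pub_year   # career length in years
    if cl <= 0 or total_articles <= 0:
        raise ValueError("career length and article count must be positive")
    return (h / cl) * (total_citations / total_articles)

# Illustrative example: an author with H = 10 over a 10-year career and
# 500 citations across 50 articles (a citation density of 10):
print(platinum_h(10, 2005, 2015, 500, 50))  # 10.0
```

On this reading, two authors with identical H-Index scores would be separated both by how quickly they reached that score and by how heavily their articles are cited on average.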
The second aspect of the calculation focuses on an individual's raw citation ratio, that being the ratio between the total number of articles published (At) and the total number of citations received by those same articles (Ct). This mean citation rate per article can also be described as the author's citation density.9 The concept of citation ratios as a bibliometric tool is by no means groundbreaking, albeit that many of its early uses focused on journal, rather than individual, assessment.21 Examining the relationship between the number of citations received versus the number of articles published in a particular journal has long been a cornerstone
of bibliometric calculations. One of the more well-known examples was published by Raisig in 1960 with what he termed the "index of Research Potential Realized" (RPR Index).22 The calculation of average citation ratios for individual researchers naturally followed on from this. An early study of Nobel Prize–winning physicists,23 for example, reported their average number of citations was around 10 times higher than that of non–Nobel Prize–winning scientists.
In recent years the assessment of published articles and their associated citation ratios has comprised a key facet of the Excellence in Research for Australia (ERA)24 and the United Kingdom's Research Assessment Exercise (RAE).25 A recent study of articles published by Canadian academics has also utilized citation ratios.26 In fact, there are now hardly any research evaluation measures that do not count publications and citations,2 suggesting that these 2 components clearly form the cornerstone of both "traditional" and contemporary bibliometric analysis.13 Similarly, the relationship between an individual's citation density and their H-Index is not new, either.27
As we reflect on the development of yet another variant/iteration/improvement of the H-Index, it is somewhat ironic that Hirsch, a physicist who had never published an article in the field of bibliometrics, would develop an indicator that ultimately sparked a whole new research front in citation-based assessment.11 The involvement of physicists may not be entirely surprising, however, as one of the founding fathers of scientometrics, Derek de Solla Price, was initially trained as a physicist and later changed his specialty to the history of science.28 One of the first formal studies of reward systems in science examined university physicists,23 whereas one of the likely forerunners to the journal impact factor was a citation-based study of the published literature in physics.29 Physicists may therefore deserve a greater share of credit for the development of metrics-based research assessment than previously recognized.
Derek R. Smith
Deputy Editor-in-Chief
Archives of Environmental & Occupational Health
1. Panaretos J, Malesios C. Assessing scientific research performance
and impact with single indices. Scientometrics. 2009;81:635–670.
2. Bornmann L, Leydesdorff L. Scientometrics in a changing re-
search landscape. EMBO Rep. 2014;15:1228–1232.
3. Smith DR. Assessing productivity among university academics and
scientific researchers. Arch Environ Occup Health. 2015;70:1–3.
4. Wildgaard L, Schneider J, Larsen B. A review of the character-
istics of 108 author-level bibliometric indicators. Scientometrics.
5. Hirsch JE. An index to quantify an individual’s scientific research
output. Proc Natl Acad Sci U S A. 2005;102:16569–16572.
6. Bornmann L, Mutz R, Hug SE, Daniel H-D. A multilevel meta-
analysis of studies reporting correlations between the h index and 37
different h index variants. J Informetrics. 2011;5:346–359.
7. Smith DR. Impact factors, scientometrics and the history of citation-
based research. Scientometrics. 2012;92:419–427.
8. Batista PD, Campiteli MG, Kinouchi O. Is it possible to com-
pare researchers with different scientific interests? Scientometrics.
9. Schubert A. Rescaling the h-index. Scientometrics.
10. Franco G. Research evaluation and competition for academic
positions in occupational medicine. Arch Environ Occup Health.
11. Bornmann L. H-Index research in scientometrics: a summary. J
Informetrics. 2014;8:749–750.
12. Smith DR. Historical development of the journal impact factor
and its relevance for occupational health. Ind Health. 2007;45:
13. Smith DR. Highly cited articles in environmental and oc-
cupational health, 1919–1960. Arch Environ Occup Health.
14. Franco G. Scientific research of senior Italian academics of occu-
pational medicine: a citation analysis of products published during
the decade 2001–2010. Arch Environ Occup Health. 2015;70:110–
15. Bornmann L, Mutz R, Daniel H-D. The h index research output
measurement: two approaches to enhance its accuracy. J Informet-
rics. 2010;4:407–414.
16. Ye F, Rousseau R. Probing the h-core: an investigation
of the tail–core ratio for rank distributions. Scientometrics.
17. Zhang CT. A novel triangle mapping technique to study the h-index
based citation distribution. Sci Rep. 2013;3:1023.
18. Gągolewski M, Grzegorzewski P. A geometric approach to the construction of scientific impact indices. Scientometrics.
19. Zhang CT. The e-Index, complementing the h-Index for excess cita-
tions. PLoS ONE. 2009;4:e5429.
20. Zhang CT. The h’-index, effectively improving the h-index based on
the citation distribution. PLoS ONE. 2013;8:e59912.
21. Smith DR. Citation analysis and impact factor trends of 5 core
journals in occupational medicine, 1975–1984. Arch Environ Occup
Health. 2010;65:176–179.
22. Raisig LM. Mathematical evaluation of the scientific serial: im-
proved bibliographic method offers new objectivity in select-
ing and abstracting the research journal. Science. 1960;131:1417–
23. Cole S, Cole JR. Scientific output and recognition: a study in
the operation of the reward system in science. Am Sociol Rev.
24. Excellence in Research for Australia (ERA) Web page. Available at: Accessed 30 January 2015.
25. UK Research Assessment Exercise (RAE) Web page. Available at: Accessed 30 January 2015.
26. Larivière V, Gingras Y. Averages of ratios vs. ratios of averages: an empirical analysis of four levels of aggregation. J Informetrics.
27. Glänzel W. On the h-index—a mathematical approach to a new measure of publication activity and citation impact. Scientometrics.
28. Crawford S. Derek John de Solla Price (1922–1983): the
man and the contribution. Bull Med Libr Assoc. 1984;72:238–
29. Pinski G, Narin F. Citation influence for journal aggregates of scien-
tific publications: theory, with application to the literature of physics.
Inform Process Manage. 1976;12:297–312.
Few contemporary inventions have influenced academic publishing as much as journal impact factors. On the other hand, debates and discussion on the potential limitations of, and appropriate uses for, journal performance indicators are almost as long as the history of the measures themselves. Given that scientometrics is often undertaken using bibliometric techniques, the history of the former is inextricably linked to the latter. As with any controversy it is difficult to separate an invention from its history, and for these reasons, the current article provides an overview of some key historical events of relevance to the impact factor. When he first proposed the concept over half a century ago, Garfield did not realise that impact factors would one day become the subject of such widespread controversy. As the current Special Issue of Scientometrics suggests, this debate continues today.