The Altmetrics Collection
Jason Priem1, Paul Groth2*, Dario Taraborelli3
1School of Information & Library Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America, 2Department of Computer Science and The Network Institute, VU University Amsterdam, Amsterdam, The Netherlands, 3The Wikimedia Foundation, San Francisco, California, United States of America
What paper should I read next? Who
should I talk to at a conference? Which
research group should get this grant?
Researchers and funders alike must make
daily judgments on how to best spend their
limited time and money–judgments that
are becoming increasingly difficult as the
volume of scholarly communication increases. Not only does the number of scholarly papers continue to grow; it is joined by new forms of communication, from data publications to microblog posts.
To deal with incoming information,
scholars have always relied upon filters.
At first these filters were manually com-
piled compendia and corpora of the
literature. But by the mid-20th century,
filters built on manual indexing began to
break under the weight of booming
postwar science production. Garfield [1] and others pioneered a solution: automated filters that leveraged scientists' own impact judgments, aggregating citations as "pellets of peer recognition" [2].
These citation-based filters have dramatically grown in importance, becoming central to how research impact is measured. But, like manual indexing 60 years ago, they may today be failing to keep up with the literature's growing volume, velocity, and diversity [3].
Citations are heavily gamed [4–6], are painfully slow to accumulate [7], and overlook increasingly important societal and clinical impacts [8]. Most importantly,
they overlook new scholarly forms like
datasets, software, and research blogs that
fall outside of the scope of citable research
objects. In sum, citations only reflect formal
acknowledgment and thus they provide only a
partial picture of the science system .
Scholars may discuss, annotate, recom-
mend, refute, comment, read, and teach a
new finding before it ever appears in the
formal citation registry. We need new
mechanisms to create a subtler, higher-
resolution picture of the science system.
The Quest for Better Filters
The scientometrics community has not
been blind to the limitations of citation
measures, and has collectively proposed
methods to gather evidence of broader
impacts and provide more detail about the
science system: tracking acknowledge-
ments , patents , mentorships
, news articles , usage in syllabuses
, and many others, separately and in
various combinations . The emer-
gence of the Web, a ‘‘nutrient-rich space
for scholars’’ , has held particular
promise for new filters and lenses on
scholarly output. Webometrics researchers
have uncovered evidence of informal
impact by examining networks of hyper-
links and mentions on the broader Web
[16–18]. An important strand of webo-
metrics has also examined the properties
of article download data [7,19,20].
The last several years, however, have
presented a promising new approach to
gathering fine-grained impact data: track-
ing large-scale activity around scholarly
products in online tools and environments.
These tools and environments include:

- social media like Twitter and Facebook
- online reference managers like CiteULike, Zotero, and Mendeley
- collaborative encyclopedias like Wikipedia
- blogs, both scholarly and general-audience
- scholarly social networks, like ResearchGate or Academia.edu
- conference organization sites like Lanyrd
Growing numbers of scholars are using
these and similar tools to mediate their
interaction with the literature. In doing so,
they are leaving valuable tracks behind
them–tracks with potential to show infor-
mal paths of influence with unprecedented
speed and resolution. Many of these tools
offer open APIs, supporting large-scale,
automated mining of online activities and
conversations around research objects.
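As a minimal sketch of what such API-based mining looks like in practice, the snippet below queries the public Crossref REST API for activity around a single article. The choice of Crossref, its "is-referenced-by-count" field, and the example DOI are purely illustrative assumptions; the same request-and-parse pattern applies to the social media and reference-manager APIs discussed above, with only the endpoint and the activity field changing.

```python
import json
import urllib.parse
import urllib.request

# Base endpoint of the public Crossref REST API (one example of the
# open APIs mentioned above; other services follow the same pattern).
CROSSREF_API = "https://api.crossref.org/works/"

def query_url(doi: str) -> str:
    """Build the API URL for a single DOI, leaving '/' unescaped."""
    return CROSSREF_API + urllib.parse.quote(doi, safe="/")

def fetch_citation_count(doi: str) -> int:
    """Fetch a work's metadata and return its incoming-citation count.

    Requires network access; 'is-referenced-by-count' is the field
    Crossref uses for citations received.
    """
    with urllib.request.urlopen(query_url(doi)) as resp:
        record = json.load(resp)
    return record["message"].get("is-referenced-by-count", 0)

# Example (requires network):
#   fetch_citation_count("10.1371/journal.pone.0048753")
```

Aggregating such per-article counts across many sources is what turns raw online activity into candidate altmetrics.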
Altmetrics [22,23] is the study and use
of scholarly impact measures based on
activity in online tools and environments.
The term has also been used to describe the metrics themselves: one could speak, in the plural, of a "set of new altmetrics."
Altmetrics is in most cases a subset of
both scientometrics and webometrics; it is
a subset of the latter in that it focuses more
narrowly on scholarly influence as mea-
sured in online tools and environments, rather
than on the Web more generally.
Altmetrics may support finer-grained
maps of science, broader and more
equitable evaluations, and improvements
to the peer-review system [21,24]. On the
other hand, the use and development of
altmetrics should be pursued with appro-
priate scientific caution. Altmetrics may
face attempts at manipulation similar to
what Google must deal with in web search
ranking. Addressing such manipulation
may, in turn, impact the transparency of
altmetrics. New and complex measures
may distort our picture of the science
system if not rigorously assessed and
correctly understood. Finally, altmetrics may reinforce an evaluation system for scholarship that many argue has already become overly focused on metrics.
Scope of this Collection
The goal of this collection is to gather an
emerging body of research for the further
study and use of altmetrics. We believe it is
Citation: Priem J, Groth P, Taraborelli D (2012) The Altmetrics Collection. PLoS ONE 7(11): e48753. doi:10.1371/journal.pone.0048753
Editor: Christos A. Ouzounis, The Centre for Research and Technology, Hellas, Greece
Received October 1, 2012; Accepted October 4, 2012; Published November 1, 2012
Copyright: © 2012 Priem et al. This is an open-access article distributed under the terms of the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original author and source are credited.
Funding: The authors have no support or funding to report.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: email@example.com
PLOS ONE | www.plosone.org | November 2012 | Volume 7 | Issue 11 | e48753
greatly needed, as important questions
regarding altmetrics’ prevalence, validity,
distribution, and reliability remain incom-
pletely answered. Importantly, the present
collection, which has the virtue of being
online and open access, allows altmetrics
researchers to experiment on themselves.
The collection's scope includes:

- Statistical analysis of altmetrics data sources, and comparisons to established measures
- Metric validation, and identification of biases in measurements
- Validation of models of scientific discovery/recommendation based on altmetrics
- Qualitative research describing the scholarly use of online tools and environments
- Empirically supported theory guiding altmetrics' use
- Other research relating to scholarly impact in online tools and environments
The current collection includes articles
that address many of these areas. It will
publish new research on an ongoing basis,
and we hope to see additional contribu-
tions appear in the coming months. We
look forward to building a foundation of
early research to support this new field.
Author Contributions: Wrote the paper: PG JP DT.
References

1. Garfield E (1955) Citation indexes to science: a new dimension in documentation through association of ideas. Science 123: 108–111.
2. Merton RK (1988) The Matthew Effect in Science, II. ISIS 79: 606–623.
3. Tenopir C, King D (2008) Electronic journals and changes in scholarly article seeking and reading patterns. D-Lib Magazine 14. Available: http://
4. Falagas M, Alexiou V (2008) The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis 56:
5. Wilhite AW, Fong EA (2012) Coercive Citation in Academic Publishing. Science 335: 542–543.
6. The PLoS Medicine Editors (2006) The Impact Factor Game. PLoS Med 3: e291. doi:10.1371/
7. Brody T, Harnad S, Carr L (2006) Earlier Web usage statistics as predictors of later citation impact. Journal of the American Society for Information Science and Technology 57: 1060–
8. Lewison G (2002) From biomedical research to health improvement. Scientometrics 54: 179–192.
9. de Solla Price DJ, Beaver D (1966) Collaboration in an invisible college. American Psychologist 21:
10. Cronin B, Overfelt K (1994) The scholar's courtesy: A survey of acknowledgement behaviour. Journal of Documentation 50: 165–196.
11. Pavitt K (1985) Patent statistics as indicators of innovative activities: Possibilities and problems. Scientometrics 7: 77–99. doi:10.1007/
12. Marchionini G, Solomon P, Davis C, Russell T (2006) Information and library science MPACT: A preliminary analysis. Library and Information Science Research 28: 480–500.
13. Kousha K, Thelwall M (2008) Assessing the impact of disciplinary research on teaching: An automatic analysis of online syllabuses. Journal of the American Society for Information Science and Technology 59: 2060–2069. doi:10.1002/
14. Martin BR, Irvine J (1983) Assessing basic research: Some partial indicators of scientific progress in radio astronomy. Research Policy 12:
15. Cronin B, Snyder HW, Rosenbaum H, Martinson A, Callahan E (1998) Invoked on the Web. Journal of the American Society for Information Science 49: 1319–1328. doi:10.1002/(SICI)1097-
16. Almind TC, Ingwersen P (1997) Informetric Analyses on the World Wide Web: Methodological Approaches to "WEBOMETRICS." Journal of Documentation 53: 404–426.
17. Thelwall M, Vaughan L, Björneborn L (2005) Webometrics. Annual Review of Information Science and Technology 39.
18. Vaughan L, Shaw D (2005) Web citation data for impact assessment: a comparison of four science disciplines. Journal of the American Society for Information Science 56: 1075–1087.
19. Bollen J, Van de Sompel H, Hagberg A, Chute R (2009) A principal component analysis of 39 scientific impact measures. PLoS ONE 4.
20. Kurtz MJ, Eichhorn G, Accomazzi A, Grant CS, Demleitner M, et al. (2005) The bibliometric properties of article readership information. Journal of the American Society for Information Science 56: 111–128.
21. Priem J, Hemminger BH (2010) Scientometrics 2.0: Toward new metrics of scholarly impact on the social Web. First Monday 15.
22. jasonpriem (2010) I like the term #articlelevelmetrics, but it fails to imply *diversity* of measures. Lately, I'm liking #altmetrics. Available: https://twitter.com/#!/jasonpriem/status/25844968813. Accessed 2012 Oct
23. Priem J, Taraborelli D, Groth P, Neylon C (2010) alt-metrics: a manifesto. Available: http://altmetrics.org/manifesto/. Accessed 2011 August 15.
24. Taraborelli D (2008) Soft peer review: Social software and distributed scientific evaluation. Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP '08). Carry-Le-Rouet. Available: http://discovery.ucl.ac.uk/