SCIENTIFIC STANDARDS | POLICY

Promoting an open research culture

Author guidelines for journals could help to promote transparency, openness, and reproducibility

By B. A. Nosek,* G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, A. Dafoe, E. Eich, J. Freese, R. Glennerster, D. Goroff, D. P. Green, B. Hesse, M. Humphreys, J. Ishiyama, D. Karlan, A. Kraut, A. Lupia, P. Mabry, T. A. Madon, N. Malhotra, E. Mayo-Wilson, M. McNutt, E. Miguel, E. Levy Paluck, U. Simonsohn, C. Soderberg, B. A. Spellman, J. Turitto, G. VandenBos, S. Vazire, E. J. Wagenmakers, R. Wilson, T. Yarkoni

*Corresponding author. E-mail: nosek@virginia.edu. Affiliations for the authors, all of whom are members of the TOP Guidelines Committee, are given in the supplementary materials.

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).
A likely culprit for this disconnect is an academic reward system that does not sufficiently incentivize open practices (7). In the present reward system, emphasis on innovation may undermine practices that support verification. Too often, publication requirements (whether actual or perceived) fail to encourage transparent, open, and reproducible science (2, 4, 8, 9). For example, in a transparent science, both null results and statistically significant results are made available and help others more accurately assess the evidence base for a phenomenon. In the present culture, however, null results are published less frequently than statistically significant results (10) and are, therefore, more likely inaccessible and lost in the "file drawer" (11).
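The cost of the file drawer is easy to see numerically. The sketch below is our illustration, not part of the original article: it assumes many small two-group studies of a modest true effect and a filter that "publishes" only results with p < 0.05, then compares the mean effect size across all studies with the mean among the published subset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d = 0.2        # modest true effect (Cohen's d); an assumption of this sketch
n_per_group = 30    # small studies, typical of underpowered literatures
n_studies = 10_000

all_effects, published = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    # Observed standardized effect size for this study
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    all_effects.append(d)
    if p < 0.05:    # the "file drawer" filter: only significant results escape
        published.append(d)

print(f"true effect:              d = {true_d:.2f}")
print(f"mean across all studies:  d = {np.mean(all_effects):.2f}")
print(f"mean across published:    d = {np.mean(published):.2f}")
print(f"fraction published:       {len(published) / n_studies:.0%}")
```

In runs like this, only a small fraction of studies clears the significance filter, and the published subset substantially overstates the true effect; making null results available is what lets readers see the undistorted evidence base.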
The situation is a classic collective action problem. Many individual researchers lack strong incentives to be more transparent, even though the credibility of science would benefit if everyone were more transparent. Unfortunately, there is no centralized means of aligning individual and communal incentives via universal scientific policies and procedures. Universities, granting agencies, and publishers each create different incentives for researchers. With all of this complexity, nudging scientific practices toward greater openness requires complementary and coordinated efforts from all stakeholders.
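The collective action structure can be made explicit with a stylized public-goods calculation (our illustration, with invented numbers, not figures from the article): each open practice costs its author some effort but spreads its credibility benefit across the whole field.

```python
# Stylized public-goods view of the transparency dilemma (illustrative numbers).
N = 1_000          # researchers in a field
benefit = 5.0      # credibility value one researcher's openness adds to the field
cost = 1.0         # individual effort required to work transparently

# Payoff change for one researcher who adopts open practices alone:
individual_gain = benefit / N - cost    # negative: not worth it in isolation
# Average payoff change per researcher if everyone adopts them:
collective_gain = benefit - cost        # positive: much better for all

print(f"gain from adopting alone:      {individual_gain:+.3f}")
print(f"per-person gain if all adopt:  {collective_gain:+.3f}")
```

Under any parameters with cost > benefit/N, the lone adopter loses while universal adoption pays, which is why coordinated policy changes, rather than individual virtue, are the natural lever.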
THE TRANSPARENCY AND OPENNESS PROMOTION GUIDELINES. The Transparency and Openness Promotion (TOP) Committee met at the Center for Open Science in Charlottesville, Virginia, in November 2014 to address one important element of the incentive systems: journals' procedures and policies for publication. The committee consisted of disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts, largely from the social and behavioral sciences. By developing shared standards for open practices across journals, we hope to translate scientific norms and values into concrete actions and change the current incentive structures to drive researchers' behavior toward more openness. Although there are some idiosyncratic issues by discipline, we sought to produce guidelines that focus on the commonalities across disciplines.
Standards. There are eight standards in the TOP guidelines; each moves scientific communication toward greater openness. These standards are modular, facilitating adoption in whole or in part. However, they also complement each other, in that commitment to one standard may facilitate adoption of others. Moreover, the guidelines are sensitive to barriers to openness by articulating, for example, a process for exceptions to sharing because of ethical issues, intellectual property concerns, or availability of necessary resources. The complete guidelines are available in the TOP information commons at http://cos.io/top, along with a list of signatories that numbered 86 journals and 26 organizations as of 15 June 2015. The table provides a summary of the guidelines.
First, two standards reward researchers for the time and effort they have spent engaging in open practices. (i) Citation standards extend current article citation norms to data, code, and research materials. Regular and rigorous citation of these materials credits them as original intellectual contributions. (ii) Replication standards recognize the value of replication for independent verification of research results and identify the conditions under which replication studies will be published in the journal. To progress, science needs both innovation and self-correction; replication offers opportunities for self-correction to more efficiently identify promising research directions.
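As a concrete illustration (ours, and entirely hypothetical), a reference list entry under such citation standards would treat a data set much like an article, for example: Smith, J. (2015). Replication data for: Study title [Data set]. Dataverse. doi:10.7910/DVN/XXXXX. The persistent identifier is what makes the contribution citable, discoverable, and creditable.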
Second, four standards describe what openness means across the scientific process so that research can be reproduced and evaluated. Reproducibility increases confidence in results and also allows scholars to learn more about what results do and do not mean. (i) Design standards increase transparency about the research process and reduce vague or incomplete reporting of the methodology. (ii) Research materials standards encourage the provision of all elements of that methodology. (iii) Data sharing standards incentivize authors to make data available in trusted repositories such as Dataverse, Dryad, the Interuniversity Consortium for Political and Social Research (ICPSR), the Open Science Framework, or the Qualitative Data Repository. (iv) Analytic methods standards do the same for the code comprising the statistical models or simulations conducted for the research. Many discipline-specific standards for disclosure exist, particularly for clinical trials and health research more generally (e.g., www.equator-network.org). Many more are emerging for other disciplines, such as those developed by Psychological Science (12).
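To make the data and analytic methods standards concrete, here is a minimal sketch (ours, not from the article) of the kind of reproduction script an author might deposit alongside a paper; the repository URL and column names are placeholders, not a real record.

```python
# reproduce.py: regenerate the key statistic reported in the paper.
# Hypothetical example; the URL and column names are placeholders.
import pandas as pd
from scipy import stats

DATA_URL = "https://example-repository.org/record/12345/files/study1.csv"

df = pd.read_csv(DATA_URL)  # pandas reads directly from a URL
treatment = df.loc[df["condition"] == "treatment", "score"]
control = df.loc[df["condition"] == "control", "score"]

t, p = stats.ttest_ind(treatment, control)
print(f"Study 1: t = {t:.2f}, p = {p:.4f}")  # should match the published value
```

A short script like this, deposited with the data it reads, lets any reader rerun the reported analysis end to end, which is precisely the behavior the standards are designed to reward.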
Finally, two standards address the values resulting from preregistration. (i) Standards for preregistration of studies facilitate the discovery of research, even unpublished research, by ensuring that the existence of the study is recorded in a public registry. (ii) Preregistration of analysis plans certifies the distinction between confirmatory and exploratory research, also called hypothesis-testing versus hypothesis-generating research. Making transparent the distinction between confirmatory and exploratory methods can enhance reproducibility (3, 13, 14).
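Why this distinction matters for reproducibility can also be shown numerically. The following sketch is ours, in the spirit of the analytic-flexibility demonstrations of ref. 9, with assumed parameters: it simulates data containing no true effect and compares one preregistered test against a "flexible" analysis that tries three outcome measures, each with and without an outlier-exclusion rule, and reports whichever result looks best.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims = 40, 5_000
flexible_hits = confirmatory_hits = 0

for _ in range(n_sims):
    group = rng.integers(0, 2, n)          # two conditions; no true effect anywhere
    outcomes = rng.normal(size=(n, 3))     # three candidate outcome measures
    pvals = []
    for k in range(3):
        y = outcomes[:, k]
        pvals.append(stats.ttest_ind(y[group == 0], y[group == 1]).pvalue)
        keep = np.abs(y - y.mean()) < 2 * y.std()   # optional outlier rule
        yk, gk = y[keep], group[keep]
        pvals.append(stats.ttest_ind(yk[gk == 0], yk[gk == 1]).pvalue)
    flexible_hits += min(pvals) < 0.05      # exploratory: report the best of six tests
    confirmatory_hits += pvals[0] < 0.05    # preregistered: the one prespecified test

print(f"false-positive rate, flexible analysis:      {flexible_hits / n_sims:.1%}")
print(f"false-positive rate, preregistered analysis: {confirmatory_hits / n_sims:.1%}")
```

The preregistered test holds the nominal 5% error rate, while the flexible strategy's rate climbs well above it. Preregistration does not forbid exploration; it makes the flexibility visible so readers can weigh the evidence accordingly.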
Levels. The TOP Committee recognized that not all of the standards are applicable to all journals or all disciplines. Therefore, rather than advocating for a single set of guidelines, the TOP Committee defined three levels for each standard. Level 1 is designed to have little to no barrier to adoption while also offering an incentive for openness. For example, under the analytic methods (code) sharing standard, authors must state in the text whether and where code is available. Level 2 has stronger expectations for authors but usually avoids adding resource costs to editors or publishers that adopt the standard. In Level 2, journals would require code to be deposited in a trusted repository and check that the link appears in the article and resolves to the correct location. Level 3 is the strongest standard but also may present some barriers to implementation for some journals. For example, the journals Political Analysis and Quarterly Journal of Political Science require authors to provide their code for review, and editors reproduce the reported analyses before publication. In the table, we provide "Level 0" for comparison of common journal policies that do not meet the transparency standards.

Summary of the eight standards and three levels of the TOP guidelines. Levels 1 to 3 are increasingly stringent for each standard; Level 0 offers a comparison that does not meet the standard.

Citation standards
Level 0: Journal encourages citation of data, code, and materials, or says nothing.
Level 1: Journal describes citation of data in guidelines to authors, with clear rules and examples.
Level 2: Article provides appropriate citation for data and materials used, consistent with the journal's author guidelines.
Level 3: Article is not published until appropriate citation for data and materials is provided, following the journal's author guidelines.

Data transparency
Level 0: Journal encourages data sharing, or says nothing.
Level 1: Article states whether data are available and, if so, where to access them.
Level 2: Data must be posted to a trusted repository; exceptions must be identified at article submission.
Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Analytic methods (code) transparency
Level 0: Journal encourages code sharing, or says nothing.
Level 1: Article states whether code is available and, if so, where to access it.
Level 2: Code must be posted to a trusted repository; exceptions must be identified at article submission.
Level 3: Code must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Research materials transparency
Level 0: Journal encourages materials sharing, or says nothing.
Level 1: Article states whether materials are available and, if so, where to access them.
Level 2: Materials must be posted to a trusted repository; exceptions must be identified at article submission.
Level 3: Materials must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Design and analysis transparency
Level 0: Journal encourages design and analysis transparency, or says nothing.
Level 1: Journal articulates design transparency standards.
Level 2: Journal requires adherence to design transparency standards for review and publication.
Level 3: Journal requires and enforces adherence to design transparency standards for review and publication.

Preregistration of studies
Level 0: Journal says nothing.
Level 1: Journal encourages preregistration of studies and provides a link in the article to the preregistration if it exists.
Level 2: Journal encourages preregistration of studies and provides a link in the article and certification of meeting preregistration badge requirements.
Level 3: Journal requires preregistration of studies and provides a link and badge in the article to meeting requirements.

Preregistration of analysis plans
Level 0: Journal says nothing.
Level 1: Journal encourages preanalysis plans and provides a link in the article to the registered analysis plan if it exists.
Level 2: Journal encourages preanalysis plans and provides a link in the article and certification of meeting registered analysis plan badge requirements.
Level 3: Journal requires preregistration of studies with analysis plans and provides a link and badge in the article to meeting requirements.

Replication
Level 0: Journal discourages submission of replication studies, or says nothing.
Level 1: Journal encourages submission of replication studies.
Level 2: Journal encourages submission of replication studies and conducts blind review of results.
Level 3: Journal uses Registered Reports as a submission option for replication studies, with peer review before observing the study outcomes.
Adoption. Defining multiple levels and distinct standards facilitates informed decision-making by journals. It also acknowledges the variation in evolving norms about research transparency. Depending on the discipline or publishing format, some of the standards may not be relevant for a journal. Journal and publisher decisions can be based on many factors, including their readiness to adopt modest to stronger transparency standards for authors, internal journal operations, and disciplinary norms and expectations. For example, in economics, many highly visible journals such as American Economic Review have already adopted strong policies requiring data sharing, whereas few psychology journals have comparable requirements.

In this way, the levels are designed to facilitate the gradual adoption of best practices. Journals may begin with a standard that rewards adherence, perhaps as a step toward requiring the practice. For example, Psychological Science awards badges for "open data," "open materials," and "preregistration" (12), and approximately 25% of accepted articles earned at least one badge in the first year of operation.
The Level 1 guidelines are designed to have minimal effect on journal efficiency and workflow while also having a measurable impact on transparency. Moreover, although higher levels may require greater implementation effort up front, such efforts may benefit publishers and editors and the quality of publications by, for example, reducing time spent on communication with authors and reviewers, improving standards of reporting, increasing detectability of errors before publication, and ensuring that publication-related data are accessible for a long time.
Evaluation and revision. An information commons and support team at the Center for Open Science is available (top@cos.io) to assist journals in the selection and adoption of standards and will track adoption across journals. Moreover, adopting journals may suggest revisions that improve the guidelines or make them more flexible or adaptable for the needs of particular subdisciplines.

The present version of the guidelines is not the last word on standards for openness in science. As with any research enterprise, the available empirical evidence will expand with application and use of these guidelines. To reflect this evolutionary process, the guidelines are accompanied by a version number and will be improved as experience with them accumulates.
Conclusion. The journal article is central to the research communication process. Guidelines for authors define what aspects of the research process should be made available to the community to evaluate, critique, reuse, and extend. Scientists recognize the value of transparency, openness, and reproducibility. Improvement of journal policies can help those values become more evident in daily practice and ultimately improve the public trust in science, and science itself. ■
REFERENCES AND NOTES
1. M. McNutt, Science 343, 229 (2014).
2. E. Miguel et al., Science 343, 30 (2014).
3. M. S. Anderson, B. C. Martinson, R. De Vries, J. Empir. Res. Hum. Res. Ethics 2, 3 (2007).
4. J. P. A. Ioannidis, M. R. Munafò, P. Fusar-Poli, B. A. Nosek, S. P. David, Trends Cogn. Sci. 18, 235 (2014).
5. L. K. John, G. Loewenstein, D. Prelec, Psychol. Sci. 23, 524 (2012).
6. E. H. O'Boyle Jr., G. C. Banks, E. Gonzalez-Mule, J. Manage. 10.1177/0149206314527133 (2014).
7. B. A. Nosek, J. R. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. J. B. Asendorpf et al., Eur. J. Pers. 27, 108 (2013).
9. J. P. Simmons, L. D. Nelson, U. Simonsohn, Psychol. Sci. 22, 1359 (2011).
10. A. Franco, N. Malhotra, G. Simonovits, Science 345, 1502 (2014).
11. R. Rosenthal, Psychol. Bull. 86, 638 (1979).
12. E. Eich, Psychol. Sci. 25, 3 (2014).
13. E.-J. Wagenmakers, R. Wetzels, D. Borsboom, H. L. van der Maas, R. A. Kievit, Perspect. Psychol. Sci. 7, 632 (2012).
14. C. D. Chambers, Cortex 49, 609 (2013).
ACKNOWLEDGMENTS
This work was supported by the Laura and John Arnold Foundation.
SUPPLEMENTARY MATERIALS
www.sciencemag.org/content/348/6242/1422/suppl/DC1
10.1126/science.aab2374