SCIENTIFIC STANDARDS | POLICY

Promoting an open research culture

Author guidelines for journals could help to promote transparency, openness, and reproducibility

By B. A. Nosek,* G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, A. Dafoe, E. Eich, J. Freese, R. Glennerster, D. Goroff, D. P. Green, B. Hesse, M. Humphreys, J. Ishiyama, D. Karlan, A. Kraut, A. Lupia, P. Mabry, T. Madon, N. Malhotra, E. Mayo-Wilson, M. McNutt, E. Miguel, E. Levy Paluck, U. Simonsohn, C. Soderberg, B. A. Spellman, J. Turitto, G. VandenBos, S. Vazire, E. J. Wagenmakers, R. Wilson, T. Yarkoni

*Corresponding author. E-mail: nosek@virginia.edu. Affiliations for the authors, all of whom are members of the TOP Guidelines Committee, are given in the supplementary materials.

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4-6).

A likely culprit for this disconnect is an academic reward system that does not sufficiently incentivize open practices (7). In the present reward system, emphasis on innovation may undermine practices that support verification. Too often, publication requirements (whether actual or perceived) fail to encourage transparent, open, and reproducible science (2, 4, 8, 9). For example, in a transparent science, both null results and statistically significant results are made available and help others more accurately assess the evidence base for a phenomenon. In the present culture, however, null results are published less frequently than statistically significant results (10) and are, therefore, more likely inaccessible and lost in the "file drawer" (11).

The situation is a classic collective action problem. Many individual researchers lack strong incentives to be more transparent, even though the credibility of science would benefit if everyone were more transparent. Unfortunately, there is no centralized means of aligning individual and communal incentives via universal scientific policies and procedures. Universities, granting agencies, and publishers each create different incentives for researchers. With all of this complexity, nudging scientific practices toward greater openness requires complementary and coordinated efforts from all stakeholders.
THE TRANSPARENCY AND OPENNESS PROMOTION GUIDELINES. The Transparency and Openness Promotion (TOP) Committee met at the Center for Open Science in Charlottesville, Virginia, in November 2014 to address one important element of the incentive systems: journals' procedures and policies for publication. The committee consisted of disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts largely from the social and behavioral sciences. By developing shared standards for open practices across journals, we hope to translate scientific norms and values into concrete actions and change the current incentive structures to drive researchers' behavior toward more openness. Although there are some idiosyncratic issues by discipline, we sought to produce guidelines that focus on the commonalities across disciplines.
Standards. There are eight standards in the TOP guidelines; each moves scientific communication toward greater openness. These standards are modular, facilitating adoption in whole or in part. However, they also complement each other, in that commitment to one standard may facilitate adoption of others. Moreover, the guidelines are sensitive to barriers to openness by articulating, for example, a process for exceptions to sharing because of ethical issues, intellectual property concerns, or availability of necessary resources. The complete guidelines are available in the TOP information commons at http://cos.io/top, along with a list of signatories that numbered 86 journals and 26 organizations as of 15 June 2015. The table provides a summary of the guidelines.
First, two standards reward researchers for the time and effort they have spent engaging in open practices. (i) Citation standards extend current article citation norms to data, code, and research materials. Regular and rigorous citation of these materials credits them as original intellectual contributions. (ii) Replication standards recognize the value of replication for independent verification of research results and identify the conditions under which replication studies will be published in the journal. To progress, science needs both innovation and self-correction; replication offers opportunities for self-correction to more efficiently identify promising research directions.
Second, four standards describe what openness means across the scientific process so that research can be reproduced and evaluated. Reproducibility increases confidence in results and also allows scholars to learn more about what results do and do not mean. (i) Design standards increase transparency about the research process and reduce vague or incomplete reporting of the methodology. (ii) Research materials standards encourage the provision of all elements of that methodology. (iii) Data sharing standards incentivize authors to make data available in trusted repositories such as Dataverse, Dryad, the Interuniversity Consortium for Political and Social Research (ICPSR), the Open Science Framework, or the Qualitative Data Repository. (iv) Analytic methods standards do the same for the code comprising the statistical models or simulations conducted for the research. Many discipline-specific standards for disclosure exist, particularly for clinical trials and health research more generally (e.g., www.equator-network.org). Many more are emerging for other disciplines, such as those developed by Psychological Science (12).
Finally, two standards address the values resulting from preregistration. (i) Standards for preregistration of studies facilitate the discovery of research, even unpublished research, by ensuring that the existence of the study is recorded in a public registry. (ii) Preregistration of analysis plans certifies the distinction between confirmatory and exploratory research, or what is also called hypothesis-testing versus hypothesis-generating research. Making transparent the distinction between confirmatory and exploratory methods can enhance reproducibility (3, 13, 14).
Levels. The TOP Committee recognized that not all of the standards are applicable to all journals or all disciplines. Therefore, rather than advocating for a single set of guidelines, the TOP Committee defined three levels for each standard.
Summary of the eight standards and three levels of the TOP guidelines. Levels 1 to 3 are increasingly stringent for each standard; Level 0 offers a comparison that does not meet the standard.

Citation standards
Level 0: Journal encourages citation of data, code, and materials—or says nothing.
Level 1: Journal describes citation of data in guidelines to authors with clear rules and examples.
Level 2: Article provides appropriate citation for data and materials used, consistent with journal's author guidelines.
Level 3: Article is not published until appropriate citation for data and materials is provided that follows journal's author guidelines.

Data transparency
Level 0: Journal encourages data sharing—or says nothing.
Level 1: Article states whether data are available and, if so, where to access them.
Level 2: Data must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Analytic methods (code) transparency
Level 0: Journal encourages code sharing—or says nothing.
Level 1: Article states whether code is available and, if so, where to access it.
Level 2: Code must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Code must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Research materials transparency
Level 0: Journal encourages materials sharing—or says nothing.
Level 1: Article states whether materials are available and, if so, where to access them.
Level 2: Materials must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Materials must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Design and analysis transparency
Level 0: Journal encourages design and analysis transparency—or says nothing.
Level 1: Journal articulates design transparency standards.
Level 2: Journal requires adherence to design transparency standards for review and publication.
Level 3: Journal requires and enforces adherence to design transparency standards for review and publication.

Preregistration of studies
Level 0: Journal says nothing.
Level 1: Journal encourages preregistration of studies and provides link in article to preregistration if it exists.
Level 2: Journal encourages preregistration of studies and provides link in article and certification of meeting preregistration badge requirements.
Level 3: Journal requires preregistration of studies and provides link and badge in article to meeting requirements.

Preregistration of analysis plans
Level 0: Journal says nothing.
Level 1: Journal encourages preanalysis plans and provides link in article to registered analysis plan if it exists.
Level 2: Journal encourages preanalysis plans and provides link in article and certification of meeting registered analysis plan badge requirements.
Level 3: Journal requires preregistration of studies with analysis plans and provides link and badge in article to meeting requirements.

Replication
Level 0: Journal discourages submission of replication studies—or says nothing.
Level 1: Journal encourages submission of replication studies.
Level 2: Journal encourages submission of replication studies and conducts blind review of results.
Level 3: Journal uses Registered Reports as a submission option for replication studies with peer review before observing the study outcomes.
Level 1 is designed to have little to no barrier to adoption while also offering an incentive for openness. For example, under the analytic methods (code) sharing standard, authors must state in the text whether and where code is available. Level 2 has stronger expectations for authors but usually avoids adding resource costs to editors or publishers that adopt the standard. In Level 2, journals would require code to be deposited in a trusted repository and check that the link appears in the article and resolves to the correct location. Level 3 is the strongest standard but also may present some barriers to implementation for some journals. For example, the journals Political Analysis and Quarterly Journal of Political Science require authors to provide their code for review, and editors reproduce the reported analyses before publication. In the table, we provide "Level 0" for comparison of common journal policies that do not meet the transparency standards.
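To make the modular, per-standard structure concrete, the sketch below records a hypothetical journal's adopted level (0 to 3) for each of the eight standards and derives rough author-facing expectations from those levels. It is an illustration only, not part of the TOP guidelines: the standard names and level semantics follow the summary table above, while the journal profile and helper functions are hypothetical additions of ours.

# Illustrative sketch only: a minimal way to record a journal's per-standard
# TOP adoption levels (0-3) and summarize what each level asks of authors.
# Standard names and level meanings follow the summary table; the example
# policy and helper functions are hypothetical.

TOP_STANDARDS = [
    "Citation standards",
    "Data transparency",
    "Analytic methods (code) transparency",
    "Research materials transparency",
    "Design and analysis transparency",
    "Preregistration of studies",
    "Preregistration of analysis plans",
    "Replication",
]

def validate_policy(policy: dict) -> None:
    """Check that a journal policy assigns one level (0-3) to every standard."""
    missing = [s for s in TOP_STANDARDS if s not in policy]
    if missing:
        raise ValueError(f"No level set for: {missing}")
    for standard, level in policy.items():
        if standard not in TOP_STANDARDS:
            raise ValueError(f"Unknown standard: {standard}")
        if level not in (0, 1, 2, 3):
            raise ValueError(f"{standard}: level must be 0-3, got {level}")

def author_requirements(policy: dict) -> list[str]:
    """Translate adopted levels into rough author-facing expectations."""
    notes = []
    for standard in TOP_STANDARDS:
        level = policy[standard]
        if level == 0:
            continue  # standard not adopted; nothing is asked of authors
        elif level == 1:
            notes.append(f"{standard}: disclose whether and where relevant items are available")
        elif level == 2:
            notes.append(f"{standard}: deposit or comply before publication; exceptions declared at submission")
        else:  # level 3
            notes.append(f"{standard}: compliance verified or reproduced by the journal before publication")
    return notes

# Example: a journal adopting Level 2 for data and code, Level 1 elsewhere.
example_policy = {s: 1 for s in TOP_STANDARDS}
example_policy["Data transparency"] = 2
example_policy["Analytic methods (code) transparency"] = 2

validate_policy(example_policy)
for line in author_requirements(example_policy):
    print(line)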
Adoption. Defining multiple levels and distinct standards facilitates informed decision-making by journals. It also acknowledges the variation in evolving norms about research transparency. Depending on the discipline or publishing format, some of the standards may not be relevant for a journal. Journal and publisher decisions can be based on many factors—including their readiness to adopt modest to stronger transparency standards for authors, internal journal operations, and disciplinary norms and expectations. For example, in economics, many highly visible journals such as American Economic Review have already adopted strong policies requiring data sharing, whereas few psychology journals have comparable requirements.

In this way, the levels are designed to facilitate the gradual adoption of best practices. Journals may begin with a standard that rewards adherence, perhaps as a step toward requiring the practice. For example, Psychological Science awards badges for "open data," "open materials," and "preregistration" (12), and approximately 25% of accepted articles earned at least one badge in the first year of operation.
The Level 1 guidelines are designed to have minimal effect on journal efficiency and workflow while also having a measurable impact on transparency. Moreover, although higher levels may require greater implementation effort up front, such efforts may benefit publishers and editors and the quality of publications by, for example, reducing time spent on communication with authors and reviewers, improving standards of reporting, increasing detectability of errors before publication, and ensuring that publication-related data are accessible for a long time.
Evaluation and revision. An information commons and support team at the Center for Open Science is available (top@cos.io) to assist journals in selection and adoption of standards and will track adoption across journals. Moreover, adopting journals may suggest revisions that improve the guidelines or make them more flexible or adaptable for the needs of particular subdisciplines.
The present version of the guidelines is not the last word on standards for openness in science. As with any research enterprise, the available empirical evidence will expand with application and use of these guidelines. To reflect this evolutionary process, the guidelines are accompanied by a version number and will be improved as experience with them accumulates.
Conclusion. The journal article is central to the research communication process. Guidelines for authors define what aspects of the research process should be made available to the community to evaluate, critique, reuse, and extend. Scientists recognize the value of transparency, openness, and reproducibility. Improvement of journal policies can help those values become more evident in daily practice and ultimately improve the public trust in science, and science itself.
REFERENCES AND NOTES
1. M. McNutt, Science 343, 229 (2014).
2. E. Miguel et al., Science 343, 30 (2014).
3. M. S. Anderson, B. C. Martinson, R. De Vries, J. Empir. Res. Hum. Res. Ethics 2, 3 (2007).
4. J. P. A. Ioannidis, M. R. Munafò, P. Fusar-Poli, B. A. Nosek, S. P. David, Trends Cogn. Sci. 18, 235 (2014).
5. L. K. John, G. Loewenstein, D. Prelec, Psychol. Sci. 23, 524 (2012).
6. E. H. O'Boyle Jr., G. C. Banks, E. Gonzalez-Mulé, J. Manage. 10.1177/0149206314527133 (2014).
7. B. A. Nosek, J. R. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. J. B. Asendorpf et al., Eur. J. Pers. 27, 108 (2013).
9. J. P. Simmons, L. D. Nelson, U. Simonsohn, Psychol. Sci. 22, 1359 (2011).
10. A. Franco, N. Malhotra, G. Simonovits, Science 345, 1502 (2014).
11. R. Rosenthal, Psychol. Bull. 86, 638 (1979).
12. E. Eich, Psychol. Sci. 25, 3 (2014).
13. E.-J. Wagenmakers, R. Wetzels, D. Borsboom, H. L. van der Maas, R. A. Kievit, Perspect. Psychol. Sci. 7, 632 (2012).
14. C. D. Chambers, Cortex 49, 609 (2013).
ACKNOWLEDGMENTS
This work was supported by the Laura and John Arnold Foundation.
SUPPLEMENTARY MATERIALS
www.sciencemag.org/content/348/6242/1422/suppl/DC1
10.1126/science.aab2374