SCIENTIFIC INTEGRITY

Self-correction in science at work

By Bruce Alberts, Ralph J. Cicerone, Stephen E. Fienberg, Alexander Kamb, Marcia McNutt, Robert M. Nerem, Randy Schekman, Richard Shiffrin, Victoria Stodden, Subra Suresh, Maria T. Zuber, Barbara Kline Pope, Kathleen Hall Jamieson

Improve incentives to support research integrity

Week after week, news outlets carry word of new scientific discoveries, but the media sometimes give suspect science equal play with substantive discoveries. Careful qualifications about what is known are lost in categorical headlines. Rare instances of misconduct or instances of irreproducibility are translated into concerns that science is broken. The October 2013 Economist headline proclaimed "Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not" (1). Yet that article is also rich with instances of science both policing itself, which is how the problems came to The Economist's attention in the first place, and addressing discovered lapses and irreproducibility concerns. In light of such issues and efforts, the U.S. National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands convened our group to examine ways to remove some of the current disincentives to high standards of integrity in science.

Like all human endeavors, science is imperfect. However, as Robert Merton noted more than half a century ago, "the activities of scientists are subject to rigorous policing, to a degree perhaps unparalleled in any other field of activity" (2). As a result, as Popper argued, "science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected" (3). Instances in which scientists detect and address flaws in work constitute evidence of success, not failure, because they demonstrate the underlying protective mechanisms of science at work.

Still, as in any human venture, science writ large does not always live up to its ideals. Although attempts to replicate the 1998 Wakefield study alleging an association between autism and the MMR (measles, mumps, and rubella) vaccine …
… and neutral resource that supports and complements efforts of the research enterprise and its key stakeholders.

Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated.
All researchers need to realize that the best scientific practice is produced when, like Darwin, they persistently search for flaws in their arguments. Because inherent variability in biological systems makes it possible for researchers to explore different sets of conditions until the expected (and rewarded) result is obtained, the need for vigilant self-critique may be especially great in research with direct application to human disease. We encourage each branch of science to invest in case studies identifying what went wrong in a selected subset of nonreproducible publications—enlisting social scientists and experts in the respective fields to interview those who were involved (and perhaps examining lab notebooks or redoing statistical analyses), with the hope of deriving general principles for improving science in each field.
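To make this risk concrete, the following simulation is our own illustration and not part of the original article; the number of conditions, sample sizes, and significance threshold are arbitrary assumptions. It shows that a researcher who explores several conditions under a true null and reports only the one that reaches significance will obtain a "finding" far more often than the nominal 5% error rate suggests.

```python
# Illustrative simulation (not from the article): testing many conditions
# under a true null and reporting only the "successful" one inflates the
# false-positive rate well beyond the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_simulations, n_conditions, n_per_group = 2000, 8, 20

runs_with_a_finding = 0
for _ in range(n_simulations):
    p_values = []
    for _ in range(n_conditions):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(0.0, 1.0, n_per_group)  # no true effect exists
        p_values.append(ttest_ind(control, treated).pvalue)
    if min(p_values) < 0.05:  # report only the condition that "worked"
        runs_with_a_finding += 1

print("Nominal alpha: 0.05")
print(f"Chance of reporting a 'finding': {runs_with_a_finding / n_simulations:.2f}")
# Expected: roughly 1 - 0.95**8, about 0.34, even though no effect exists.
```

The preregistered analysis plans discussed in the accompanying piece counter exactly this inflation by fixing the tested conditions before the data are seen.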
Industry should publish its failed efforts to reproduce scientific findings and join scientists in the academy in making the case for the importance of scientific work. Scientific associations should continue to communicate science as a way of knowing, and educate their members in ways to more effectively communicate key scientific findings to broader publics. Journals should continue to ask for higher standards of transparency and reproducibility.
We recognize that incentives can backfire. Still, because those such as enhanced social image and forms of public recognition (10, 11) can increase productive social behavior (12), we believe that replacing the stigma of retraction with language that lauds reporting of unintended errors in a publication will increase that behavior. Because sustaining a good reputation can incentivize cooperative behavior (13), we anticipate that our proposed changes in the review process will not only increase the quality of the final product but also expose efforts to sabotage independent review. To ensure that such incentives not only advance our objectives but above all do no harm, we urge that each be scrutinized and evaluated before being broadly implemented.
Will past be prologue? If science is to enhance its capacities to improve our understanding of ourselves and our world, protect the hard-earned trust and esteem in which society holds it, and preserve its role as a driver of our economy, scientists must safeguard its rigor and reliability in the face of challenges posed by a research ecosystem that is evolving in dramatic and sometimes unsettling ways. To do this, the scientific research community needs to be involved in an ongoing dialogue. We hope that this essay and the report The Integrity of Science (14), forthcoming in 2015, will serve as catalysts for such a dialogue.
Asked at the close of the U.S. Constitutional Convention of 1787 whether the deliberations had produced a republic or a monarchy, Benjamin Franklin said, "A Republic, if you can keep it." Just as preserving a system of government requires ongoing dedication and vigilance, so too does protecting the integrity of science.
REFERENCES AND NOTES
1. Trouble at the lab, The Economist, 19 October 2013; www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble.
2. R. Merton, The Sociology of Science: Theoretical and Empirical Investigations (University of Chicago Press, Chicago, 1973), p. 276.
3. K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (Routledge, London, 1963), p. 293.
4. Editorial Board, Nature 511, 5 (2014); www.nature.com/news/stap-retracted-1.15488.
5. B. A. Nosek et al., Science 348, 1422 (2015).
6. Institute of Medicine, Discussion Framework for Clinical Trial Data Sharing: Guiding Principles, Elements, and Activities (National Academies Press, Washington, DC, 2014).
7. B. Nosek, J. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. C. Franzoni, G. Scellato, P. Stephan, Science 333, 702 (2011).
9. National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Responsible Science, Volume I: Ensuring the Integrity of the Research Process (National Academies Press, Washington, DC, 1992).
10. N. Lacetera, M. Macis, J. Econ. Behav. Organ. 76, 225 (2010).
11. D. Karlan, M. McConnell, J. Econ. Behav. Organ. 106, 402 (2014).
12. R. Thaler, C. Sunstein, Nudge: Improving Decisions About Health, Wealth and Happiness (Yale Univ. Press, New Haven, CT, 2009).
13. T. Pfeiffer, L. Tran, C. Krumme, D. Rand, J. R. Soc. Interface 2012, rsif20120332 (2012).
14. Committee on Science, Engineering, and Public Policy of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, The Integrity of Science (National Academies Press, forthcoming); http://www8.nationalacademies.org/cp/projectview.aspx?key=49387.

10.1126/science.aab3847
SCIENTIFIC STANDARDS

Promoting an open research culture

Author guidelines for journals could help to promote transparency, openness, and reproducibility

By B. A. Nosek,* G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, A. Dafoe, E. Eich, J. Freese, R. Glennerster, D. Goroff, D. P. Green, B. Hesse, M. Humphreys, J. Ishiyama, D. Karlan, A. Kraut, A. Lupia, P. Mabry, T. A. Madon, N. Malhotra, E. Mayo-Wilson, M. McNutt, E. Miguel, E. Levy Paluck, U. Simonsohn, C. Soderberg, B. A. Spellman, J. Turitto, G. VandenBos, S. Vazire, E. J. Wagenmakers, R. Wilson, T. Yarkoni

*Corresponding author. E-mail: nosek@virginia.edu. Affiliations for the authors, all of whom are members of the TOP Guidelines Committee, are given in the supplementary materials.

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).

A likely culprit for this disconnect is an academic reward system that does not sufficiently incentivize open practices (7). In the present reward system, emphasis on innovation may undermine practices that support verification. Too often, publication requirements (whether actual or perceived) fail to encourage transparent, open, and reproducible science (2, 4, 8, 9). For example, in a transparent science, both null results and statistically significant results are made available and help others more accurately assess the evidence base for a phenomenon. In the present culture, however, null results are published less frequently than statistically significant results (10) and are, therefore, more likely inaccessible and lost in the "file drawer" (11).

The situation is a classic collective action problem. Many individual researchers lack
strong incentives to be more transparent, even though the credibility of science would benefit if everyone were more transparent. Unfortunately, there is no centralized means of aligning individual and communal incentives via universal scientific policies and procedures. Universities, granting agencies, and publishers each create different incentives for researchers. With all of this complexity, nudging scientific practices toward greater openness requires complementary and coordinated efforts from all stakeholders.
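The cost of the file drawer described above can also be made concrete. The following sketch is our illustration, not material from the article; the true effect size, sample sizes, and the significance-only publication filter are assumptions chosen for demonstration. When only statistically significant studies reach print, the published record systematically overstates the true effect.

```python
# Illustrative simulation (not from the article): if null results stay in
# the "file drawer," the published literature overstates a small effect.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
true_effect, n_per_group, n_studies = 0.2, 30, 5000

all_estimates, published_estimates = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    # Publication filter: only statistically significant results appear.
    if ttest_ind(treated, control).pvalue < 0.05:
        published_estimates.append(estimate)

print(f"True effect:              {true_effect:.2f}")
print(f"Mean of all studies:      {np.mean(all_estimates):.2f}")
print(f"Mean of published subset: {np.mean(published_estimates):.2f}")
# The published subset is biased upward: at this sample size, significance
# requires an estimate well above the true effect.
```

This distortion is one reason the data transparency and preregistration standards described below aim to make null results discoverable rather than lost.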
THE TRANSPARENCY AND OPENNESS PROMOTION GUIDELINES. The Transparency and Openness Promotion (TOP) Committee met at the Center for Open Science in Charlottesville, Virginia, in November 2014 to address one important element of the incentive systems: journals' procedures and policies for publication. The committee consisted of disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts largely from the social and behavioral sciences. By developing shared standards for open practices across journals, we hope to translate scientific norms and values into concrete actions and change the current incentive structures to drive researchers' behavior toward more openness. Although there are some idiosyncratic issues by discipline, we sought to produce guidelines that focus on the commonalities across disciplines.
Standards. There are eight standards in the TOP guidelines; each moves scientific communication toward greater openness. These standards are modular, facilitating adoption in whole or in part. However, they also complement each other, in that commitment to one standard may facilitate adoption of others. Moreover, the guidelines are sensitive to barriers to openness by articulating, for example, a process for exceptions to sharing because of ethical issues, intellectual property concerns, or availability of necessary resources. The complete guidelines are available in the TOP information commons at http://cos.io/top, along with a list of signatories that numbered 86 journals and 26 organizations as of 15 June 2015. The table provides a summary of the guidelines.
First, two standards reward researchers for the time and effort they have spent engaging in open practices. (i) Citation standards extend current article citation norms to data, code, and research materials. Regular and rigorous citation of these materials credits them as original intellectual contributions. (ii) Replication standards recognize the value of replication for independent verification of research results and identify the conditions under which replication studies will be published in the journal. To progress, science needs both innovation and self-correction; replication offers opportunities for self-correction to more efficiently identify promising research directions.
Second, four standards describe what openness means across the scientific process so that research can be reproduced and evaluated. Reproducibility increases confidence in results and also allows scholars to learn more about what results do and do not mean. (i) Design standards increase transparency about the research process and reduce vague or incomplete reporting of the methodology. (ii) Research materials standards encourage the provision of all elements of that methodology. (iii) Data sharing standards incentivize authors to make data available in trusted repositories such as Dataverse, Dryad, the Interuniversity Consortium for Political and Social Research (ICPSR), the Open Science Framework, or the Qualitative Data Repository. (iv) Analytic methods standards do the same for the code comprising the statistical models or simulations conducted for the research. Many discipline-specific standards for disclosure exist, particularly for clinical trials and health research more generally (e.g., www.equator-network.org). Many more are emerging for other disciplines, such as those developed by Psychological Science (12).
Finally, two standards address the values resulting from preregistration. (i) Standards for preregistration of studies facilitate the discovery of research, even unpublished research, by ensuring that the existence of the study is recorded in a public registry. (ii) Preregistration of analysis plans certifies the distinction between confirmatory and exploratory research, or what is also called hypothesis-testing versus hypothesis-generating research. Making transparent the distinction between confirmatory and exploratory methods can enhance reproducibility (3, 13, 14).
Levels. The TOP Committee recognized that not all of the standards are applicable to all journals or all disciplines. Therefore, rather than advocating for a single set of guidelines, the TOP Committee defined three levels for each standard.
Summary of the eight standards and three levels of the TOP guidelines. Levels 1 to 3 are increasingly stringent for each standard. Level 0 offers a comparison that does not meet the standard.

Citation standards
  Level 0: Journal encourages citation of data, code, and materials—or says nothing.
  Level 1: Journal describes citation of data in guidelines to authors with clear rules and examples.
  Level 2: Article provides appropriate citation for data and materials used, consistent with journal's author guidelines.
  Level 3: Article is not published until appropriate citation for data and materials is provided that follows journal's author guidelines.

Data transparency
  Level 0: Journal encourages data sharing—or says nothing.
  Level 1: Article states whether data are available and, if so, where to access them.
  Level 2: Data must be posted to a trusted repository. Exceptions must be identified at article submission.
  Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Analytic methods (code) transparency
  Level 0: Journal encourages code sharing—or says nothing.
  Level 1: Article states whether code is available and, if so, where to access it.
  Level 2: Code must be posted to a trusted repository. Exceptions must be identified at article submission.
  Level 3: Code must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Research materials transparency
  Level 0: Journal encourages materials sharing—or says nothing.
  Level 1: Article states whether materials are available and, if so, where to access them.
  Level 2: Materials must be posted to a trusted repository. Exceptions must be identified at article submission.
  Level 3: Materials must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Design and analysis transparency
  Level 0: Journal encourages design and analysis transparency—or says nothing.
  Level 1: Journal articulates design transparency standards.
  Level 2: Journal requires adherence to design transparency standards for review and publication.
  Level 3: Journal requires and enforces adherence to design transparency standards for review and publication.

Preregistration of studies
  Level 0: Journal says nothing.
  Level 1: Journal encourages preregistration of studies and provides link in article to preregistration if it exists.
  Level 2: Journal encourages preregistration of studies and provides link in article and certification of meeting preregistration badge requirements.
  Level 3: Journal requires preregistration of studies and provides link and badge in article to meeting requirements.

Preregistration of analysis plans
  Level 0: Journal says nothing.
  Level 1: Journal encourages preanalysis plans and provides link in article to registered analysis plan if it exists.
  Level 2: Journal encourages preanalysis plans and provides link in article and certification of meeting registered analysis plan badge requirements.
  Level 3: Journal requires preregistration of studies with analysis plans and provides link and badge in article to meeting requirements.

Replication
  Level 0: Journal discourages submission of replication studies—or says nothing.
  Level 1: Journal encourages submission of replication studies.
  Level 2: Journal encourages submission of replication studies and conducts blind review of results.
  Level 3: Journal uses Registered Reports as a submission option for replication studies with peer review before observing the study outcomes.
Level 1 is designed to have little to no barrier to adoption while also offering an incentive for openness. For example, under the analytic methods (code) sharing standard, authors must state in the text whether and where code is available. Level 2 has stronger expectations for authors but usually avoids adding resource costs to editors or publishers that adopt the standard. In Level 2, journals would require code to be deposited in a trusted repository and check that the link appears in the article and resolves to the correct location. Level 3 is the strongest standard but also may present some barriers to implementation for some journals. For example, the journals Political Analysis and Quarterly Journal of Political Science require authors to provide their code for review, and editors reproduce the reported analyses before publication. In the table, we provide "Level 0" for comparison of common journal policies that do not meet the transparency standards.
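Because the guidelines are modular, one natural way to picture them is as a small data structure mapping each standard to an adoption level. The sketch below is a hypothetical encoding of our own, not an official TOP artifact; the level labels are shorthand for the general pattern, and the precise requirements differ by standard, as the table shows.

```python
# Hypothetical encoding (our illustration, not an official TOP artifact) of
# the modular standard/level structure: a journal adopts each standard
# independently, at any of the levels 0 through 3.
from enum import IntEnum

class Level(IntEnum):
    NOT_IMPLEMENTED = 0  # does not meet the standard
    DISCLOSE = 1         # authors state whether/where materials exist
    REQUIRE = 2          # deposit in a trusted repository required
    VERIFY = 3           # adherence checked or reproduced before publication

TOP_STANDARDS = [
    "citation", "data_transparency", "code_transparency",
    "materials_transparency", "design_transparency",
    "study_preregistration", "analysis_preregistration", "replication",
]

def describe_policy(policy: dict[str, Level]) -> None:
    """Print a journal's adoption level for each TOP standard."""
    for standard in TOP_STANDARDS:
        level = policy.get(standard, Level.NOT_IMPLEMENTED)
        print(f"{standard:25s} Level {int(level)} ({level.name})")

# Example: a journal adopting the guidelines gradually, standard by standard.
journal_policy = {"data_transparency": Level.REQUIRE, "citation": Level.DISCLOSE}
describe_policy(journal_policy)
```

An explicit, machine-readable policy of this kind would also make adoption straightforward to track across journals, which echoes the adoption tracking described below.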
Adoption. Defining multiple levels and distinct standards facilitates informed decision-making by journals. It also acknowledges the variation in evolving norms about research transparency. Depending on the discipline or publishing format, some of the standards may not be relevant for a journal. Journal and publisher decisions can be based on many factors—including their readiness to adopt modest to stronger transparency standards for authors, internal journal operations, and disciplinary norms and expectations. For example, in economics, many highly visible journals such as American Economic Review have already adopted strong policies requiring data sharing, whereas few psychology journals have comparable requirements.

In this way, the levels are designed to facilitate the gradual adoption of best practices. Journals may begin with a standard that rewards adherence, perhaps as a step toward requiring the practice. For example, Psychological Science awards badges for "open data," "open materials," and "preregistration" (12), and approximately 25% of accepted articles earned at least one badge in the first year of operation.
The Level 1 guidelines are designed to have minimal effect on journal efficiency and workflow while also having a measurable impact on transparency. Moreover, although higher levels may require greater implementation effort up front, such efforts may benefit publishers and editors and the quality of publications by, for example,
reducing time spent on communication with authors and reviewers, improving standards of reporting, increasing detectability of errors before publication, and ensuring that publication-related data are accessible for a long time.
Evaluation and revision. An information commons and support team at the Center for Open Science is available (top@cos.io) to assist journals in selection and adoption of standards and will track adoption across journals. Moreover, adopting journals may suggest revisions that improve the guidelines or make them more flexible or adaptable for the needs of particular subdisciplines.
The present version of the guidelines is not the last word on standards for openness in science. As with any research enterprise, the available empirical evidence will expand with application and use of these guidelines. To reflect this evolutionary process, the guidelines are accompanied by a version number and will be improved as experience with them accumulates.
Conclusion. The journal article is central to the research communication process. Guidelines for authors define what aspects of the research process should be made available to the community to evaluate, critique, reuse, and extend. Scientists recognize the value of transparency, openness, and reproducibility. Improvement of journal policies can help those values become more evident in daily practice and ultimately improve the public trust in science, and science itself.
REFERENCES AND NOTES
1. M. McNutt, Science 343, 229 (2014).
2. E. Miguel et al., Science 343, 30 (2014).
3. M. S. Anderson, B. C. Martinson, R. De Vries, J. Empir. Res. Hum. Res. Ethics 2, 3 (2007).
4. J. P. A. Ioannidis, M. R. Munafò, P. Fusar-Poli, B. A. Nosek, S. P. David, Trends Cogn. Sci. 18, 235 (2014).
5. L. K. John, G. Loewenstein, D. Prelec, Psychol. Sci. 23, 524 (2012).
6. E. H. O'Boyle Jr., G. C. Banks, E. Gonzalez-Mule, J. Manage. 10.1177/0149206314527133 (2014).
7. B. A. Nosek, J. R. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. J. B. Asendorpf et al., Eur. J. Pers. 27, 108 (2013).
9. J. P. Simmons, L. D. Nelson, U. Simonsohn, Psychol. Sci. 22, 1359 (2011).
10. A. Franco, N. Malhotra, G. Simonovits, Science 345, 1502 (2014).
11. R. Rosenthal, Psychol. Bull. 86, 638 (1979).
12. E. Eich, Psychol. Sci. 25, 3 (2014).
13. E.-J. Wagenmakers, R. Wetzels, D. Borsboom, H. L. van der Maas, R. A. Kievit, Perspect. Psychol. Sci. 7, 632 (2012).
14. C. D. Chambers, Cortex 49, 609 (2013).
ACKNOWLEDGMENTS
This work was supported by the Laura and John Arnold Foundation.

SUPPLEMENTARY MATERIALS
www.sciencemag.org/content/348/6242/1422/suppl/DC1

10.1126/science.aab2374
Using longitudinal data on the entire population of blood donors in an Italian town, we examine how donors respond to a nonlinear award scheme that rewards them with symbolic prizes (medals) when they reach certain donation quotas. Our results indicate that donors significantly increase the frequency of their donations immediately before reaching the thresholds for which the rewards are given, but only if the prizes are publicly announced in the local newspaper and awarded in a public ceremony. The results are robust to several specifications, sample definitions, and controls for observable and unobservable heterogeneity. Our findings indicate that social image concerns are a primary motivator of prosocial behavior and that symbolic prizes are most effective as motivators when they are awarded publicly. We discuss the implications of our findings for policies aimed at incentivizing prosocial behavior.