SCIENTIFIC INTEGRITY

Self-correction in science at work

By Bruce Alberts, Ralph J. Cicerone, Stephen E. Fienberg, Alexander Kamb, Marcia McNutt, Robert M. Nerem, Randy Schekman, Richard Shiffrin, Victoria Stodden, Subra Suresh, Maria T. Zuber, Barbara Kline Pope, Kathleen Hall Jamieson

Week after week, news outlets carry word of new scientific discoveries, but the media sometimes give suspect science equal play with substantive discoveries. Careful qualifications about what is known are lost in categorical headlines. Rare instances of misconduct or instances of irreproducibility are translated into concerns that science is broken. The October 2013 Economist headline proclaimed “Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not” (1). Yet, that article is also rich with instances of science both policing itself, which is how the problems came to The Economist’s attention in the first place, and addressing discovered lapses and irreproducibility concerns. In light of such issues and efforts, the U.S. National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands convened our group to examine ways to remove some of the current disincentives to high standards of integrity in science.

Like all human endeavors, science is imperfect. However, as Robert Merton noted more than half a century ago, “the activities of scientists are subject to rigorous policing, to a degree perhaps unparalleled in any other field of activity” (2). As a result, as Popper argued, “science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected” (3). Instances in which scientists detect and address flaws in work constitute evidence of success, not failure, because they demonstrate the underlying protective mechanisms of science at work.

Still, as in any human venture, science writ large does not always live up to its ideals. Although attempts to replicate the 1998 Wakefield study alleging an association between autism and the MMR (measles, …

[Illustration caption: Improve incentives to support research integrity]
… and neutral resource that supports and complements efforts of the research enterprise and its key stakeholders.

Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated.
All researchers need to realize that the best scientific practice is produced when, like Darwin, they persistently search for flaws in their arguments. Because inherent variability in biological systems makes it possible for researchers to explore different sets of conditions until the expected (and rewarded) result is obtained, the need for vigilant self-critique may be especially great in research with direct application to human disease. We encourage each branch of science to invest in case studies identifying what went wrong in a selected subset of nonreproducible publications—enlisting social scientists and experts in the respective fields to interview those who were involved (and perhaps examining lab notebooks or redoing statistical analyses), with the hope of deriving general principles for improving science in each field.
Industry should publish its failed efforts to reproduce scientific findings and join scientists in the academy in making the case for the importance of scientific work. Scientific associations should continue to communicate science as a way of knowing, and educate their members in ways to more effectively communicate key scientific findings to broader publics. Journals should continue to ask for higher standards of transparency and reproducibility.
We recognize that incentives can backfire. Still, because incentives such as enhanced social image and forms of public recognition (10, 11) can increase productive social behavior (12), we believe that replacing the stigma of retraction with language that lauds reporting of unintended errors in a publication will increase that behavior. Because sustaining a good reputation can incentivize cooperative behavior (13), we anticipate that our proposed changes in the review process will not only increase the quality of the final product but also expose efforts to sabotage independent review. To ensure that such incentives not only advance our objectives but above all do no harm, we urge that each be scrutinized and evaluated before being broadly implemented.
Will past be prologue? If science is to enhance its capacities to improve our understanding of ourselves and our world, protect the hard-earned trust and esteem in which society holds it, and preserve its role as a driver of our economy, scientists must safeguard its rigor and reliability in the face of challenges posed by a research ecosystem that is evolving in dramatic and sometimes unsettling ways. To do this, the scientific research community needs to be involved in an ongoing dialogue. We hope that this essay and the report The Integrity of Science (14), forthcoming in 2015, will serve as catalysts for such a dialogue.
Asked at the close of the U.S. Constitutional Convention of 1787 whether the deliberations had produced a republic or a monarchy, Benjamin Franklin said “A Republic, if you can keep it.” Just as preserving a system of government requires ongoing dedication and vigilance, so too does protecting the integrity of science.
REFERENCES AND NOTES
1. Trouble at the lab, The Economist, 19 October 2013; www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble.
2. R. Merton, The Sociology of Science: Theoretical and Empirical Investigations (University of Chicago Press, Chicago, 1973), p. 276.
3. K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (Routledge, London, 1963), p. 293.
4. Editorial Board, Nature 511, 5 (2014); www.nature.com/news/stap-retracted-1.15488.
5. B. A. Nosek et al., Science 348, 1422 (2015).
6. Institute of Medicine, Discussion Framework for Clinical Trial Data Sharing: Guiding Principles, Elements, and Activities (National Academies Press, Washington, DC, 2014).
7. B. Nosek, J. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. C. Franzoni, G. Scellato, P. Stephan, Science 333, 702 (2011).
9. National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Responsible Science, Volume I: Ensuring the Integrity of the Research Process (National Academies Press, Washington, DC, 1992).
10. N. Lacetera, M. Macis, J. Econ. Behav. Organ. 76, 225 (2010).
11. D. Karlan, M. McConnell, J. Econ. Behav. Organ. 106, 402 (2014).
12. R. Thaler, C. Sunstein, Nudge: Improving Decisions About Health, Wealth and Happiness (Yale Univ. Press, New Haven, CT, 2009).
13. T. Pfeiffer, L. Tran, C. Krumme, D. Rand, J. R. Soc. Interface, rsif20120332 (2012).
14. Committee on Science, Engineering, and Public Policy of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, The Integrity of Science (National Academies Press, forthcoming); http://www8.nationalacademies.org/cp/projectview.aspx?key=49387.
10.1126/science.aab3847
SCIENTIFIC STANDARDS

Promoting an open research culture

Author guidelines for journals could help to promote transparency, openness, and reproducibility

By B. A. Nosek,* G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, A. Dafoe, E. Eich, J. Freese, R. Glennerster, D. Goroff, D. P. Green, B. Hesse, M. Humphreys, J. Ishiyama, D. Karlan, A. Kraut, A. Lupia, P. Mabry, T. A. Madon, N. Malhotra, E. Mayo-Wilson, M. McNutt, E. Miguel, E. Levy Paluck, U. Simonsohn, C. Soderberg, B. A. Spellman, J. Turitto, G. VandenBos, S. Vazire, E. J. Wagenmakers, R. Wilson, T. Yarkoni

*Corresponding author. E-mail: nosek@virginia.edu. Affiliations for the authors, all of whom are members of the TOP Guidelines Committee, are given in the supplementary materials.

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).

A likely culprit for this disconnect is an academic reward system that does not sufficiently incentivize open practices (7). In the present reward system, emphasis on innovation may undermine practices that support verification. Too often, publication requirements (whether actual or perceived) fail to encourage transparent, open, and reproducible science (2, 4, 8, 9). For example, in a transparent science, both null results and statistically significant results are made available and help others more accurately assess the evidence base for a phenomenon. In the present culture, however, null results are published less frequently than statistically significant results (10) and are, therefore, more likely inaccessible and lost in the “file drawer” (11).
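To make the file-drawer dynamic concrete, consider a small simulation (a minimal sketch, not part of the original article; the effect size, sample sizes, and p < .05 filter are illustrative assumptions): when only statistically significant results reach print, the published record both hides most studies and overstates the true effect.

```python
# Simulate many small two-group studies of a modest true effect, then
# "publish" only those with a significant positive result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2   # true standardized mean difference (arbitrary)
n_per_group = 30    # sample size per arm in each simulated study
n_studies = 5000    # number of simulated studies

all_estimates = []  # every effect estimate, significant or not
published = []      # estimates that clear the p < .05 filter

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    estimate = treatment.mean() - control.mean()
    all_estimates.append(estimate)
    if p_value < 0.05 and estimate > 0:
        published.append(estimate)

print(f"True effect:                     {true_effect:.2f}")
print(f"Mean estimate, all studies:      {np.mean(all_estimates):.2f}")
print(f"Mean estimate, 'published' only: {np.mean(published):.2f}")
print(f"Share of studies 'published':    {len(published) / n_studies:.1%}")
```

With these arbitrary settings, only a small fraction of studies clears the significance filter, and the published-only mean runs well above the true effect, which is exactly the distortion that transparency about null results is meant to correct.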
The situation is a classic collective action problem. Many individual researchers lack strong incentives to be more transparent, even though the credibility of science would benefit if everyone were more transparent. Unfortunately, there is no centralized means of aligning individual and communal incentives via universal scientific policies and procedures. Universities, granting agencies, and publishers each create different incentives for researchers. With all of this complexity, nudging scientific practices toward greater openness requires complementary and coordinated efforts from all stakeholders.
THE TRANSPARENCY AND OPENNESS PROMOTION GUIDELINES. The Transparency and Openness Promotion (TOP) Committee met at the Center for Open Science in Charlottesville, Virginia, in November 2014 to address one important element of the incentive systems: journals’ procedures and policies for publication. The committee consisted of disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts largely from the social and behavioral sciences. By developing shared standards for open practices across journals, we hope to translate scientific norms and values into concrete actions and change the current incentive structures to drive researchers’ behavior toward more openness. Although there are some idiosyncratic issues by discipline, we sought to produce guidelines that focus on the commonalities across disciplines.
Standards. There are eight standards in the TOP guidelines; each moves scientific communication toward greater openness. These standards are modular, facilitating adoption in whole or in part. However, they also complement each other, in that commitment to one standard may facilitate adoption of others. Moreover, the guidelines are sensitive to barriers to openness by articulating, for example, a process for exceptions to sharing because of ethical issues, intellectual property concerns, or availability of necessary resources. The complete guidelines are available in the TOP information commons at http://cos.io/top, along with a list of signatories that numbered 86 journals and 26 organizations as of 15 June 2015. The table provides a summary of the guidelines.
First, two standards reward researchers for the time and effort they have spent engaging in open practices. (i) Citation standards extend current article citation norms to data, code, and research materials. Regular and rigorous citation of these materials credits them as original intellectual contributions. (ii) Replication standards recognize the value of replication for independent verification of research results and identify the conditions under which replication studies will be published in the journal. To progress, science needs both innovation and self-correction; replication offers opportunities for self-correction to more efficiently identify promising research directions.
Second, four standards describe what openness means across the scientific process so that research can be reproduced and evaluated. Reproducibility increases confidence in results and also allows scholars to learn more about what results do and do not mean. (i) Design standards increase transparency about the research process and reduce vague or incomplete reporting of the methodology. (ii) Research materials standards encourage the provision of all elements of that methodology. (iii) Data sharing standards incentivize authors to make data available in trusted repositories such as Dataverse, Dryad, the Interuniversity Consortium for Political and Social Research (ICPSR), the Open Science Framework, or the Qualitative Data Repository. (iv) Analytic methods standards do the same for the code comprising the statistical models or simulations conducted for the research. Many discipline-specific standards for disclosure exist, particularly for clinical trials and health research more generally (e.g., www.equator-network.org). Many more are emerging for other disciplines, such as those developed by Psychological Science (12).
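One practical payoff of trusted repositories is that deposits can be retrieved and integrity-checked by anyone. The sketch below is illustrative only: the URL and digest are placeholders for whatever a real deposit would publish, not references to an actual repository or its API.

```python
# Minimal sketch: fetch an openly deposited file and verify a published
# checksum before analysis. URL and digest are placeholders, not a real deposit.
import hashlib
import urllib.request

DATA_URL = "https://repository.example.org/deposits/12345/data.csv"  # placeholder
EXPECTED_SHA256 = "replace-with-the-digest-published-alongside-the-deposit"

def fetch_verified(url: str, expected_sha256: str) -> bytes:
    """Download a file and refuse to return it if the checksum does not match."""
    with urllib.request.urlopen(url) as response:
        payload = response.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch: expected {expected_sha256}, got {digest}")
    return payload

# data = fetch_verified(DATA_URL, EXPECTED_SHA256)  # then proceed to analysis
```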
Finally, two standards address the values resulting from preregistration. (i) Standards for preregistration of studies facilitate the discovery of research, even unpublished research, by ensuring that the existence of the study is recorded in a public registry. (ii) Preregistration of analysis plans certifies the distinction between confirmatory and exploratory research, or what is also called hypothesis-testing versus hypothesis-generating research. Making transparent the distinction between confirmatory and exploratory methods can enhance reproducibility (3, 13, 14).
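A preregistered analysis plan is, at bottom, a time-stamped record fixed before the data are seen, so later readers can tell confirmatory from exploratory analysis. The sketch below illustrates the idea with a hash-stamped plan; the field names and registry behavior are hypothetical assumptions, not a real registry’s API.

```python
# Minimal sketch of what a registry certifies: an analysis plan recorded,
# time-stamped, and digested before data collection. All fields are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

plan = {
    "study": "Hypothetical badge-uptake study",
    "hypothesis": "Offering an open-data badge increases data sharing.",
    "primary_outcome": "proportion of accepted articles with posted data",
    "confirmatory_analysis": "two-sided two-proportion z-test, alpha = 0.05",
    "registered_at": datetime.now(timezone.utc).isoformat(),
}

# A registry would store the plan plus its digest; any post hoc edit to the
# plan changes the digest, so deviations are detectable rather than invisible.
digest = hashlib.sha256(json.dumps(plan, sort_keys=True).encode()).hexdigest()
print("registered at:", plan["registered_at"])
print("plan digest:  ", digest)

# Analyses matching the registered plan are confirmatory (hypothesis-testing);
# anything beyond it is exploratory (hypothesis-generating) and reported as such.
```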
Levels. The TOP Committee recognized that not all of the standards are applicable to all journals or all disciplines. Therefore, rather than advocating for a single set of guidelines, the TOP Committee defined three levels for each standard.
Summary of the eight standards and three levels of the TOP guidelines. Levels 1 to 3 are increasingly stringent for each standard; Level 0 offers a comparison that does not meet the standard.

Citation standards
Level 0: Journal encourages citation of data, code, and materials—or says nothing.
Level 1: Journal describes citation of data in guidelines to authors with clear rules and examples.
Level 2: Article provides appropriate citation for data and materials used, consistent with the journal’s author guidelines.
Level 3: Article is not published until appropriate citation for data and materials is provided that follows the journal’s author guidelines.

Data transparency
Level 0: Journal encourages data sharing—or says nothing.
Level 1: Article states whether data are available and, if so, where to access them.
Level 2: Data must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Analytic methods (code) transparency
Level 0: Journal encourages code sharing—or says nothing.
Level 1: Article states whether code is available and, if so, where to access it.
Level 2: Code must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Code must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Research materials transparency
Level 0: Journal encourages materials sharing—or says nothing.
Level 1: Article states whether materials are available and, if so, where to access them.
Level 2: Materials must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Materials must be posted to a trusted repository, and reported analyses will be reproduced independently before publication.

Design and analysis transparency
Level 0: Journal encourages design and analysis transparency—or says nothing.
Level 1: Journal articulates design transparency standards.
Level 2: Journal requires adherence to design transparency standards for review and publication.
Level 3: Journal requires and enforces adherence to design transparency standards for review and publication.

Preregistration of studies
Level 0: Journal says nothing.
Level 1: Journal encourages preregistration of studies and provides a link in the article to the preregistration if it exists.
Level 2: Journal encourages preregistration of studies and provides a link in the article and certification of meeting preregistration badge requirements.
Level 3: Journal requires preregistration of studies and provides a link and badge in the article to meeting requirements.

Preregistration of analysis plans
Level 0: Journal says nothing.
Level 1: Journal encourages preanalysis plans and provides a link in the article to the registered analysis plan if it exists.
Level 2: Journal encourages preanalysis plans and provides a link in the article and certification of meeting registered analysis plan badge requirements.
Level 3: Journal requires preregistration of studies with analysis plans and provides a link and badge in the article to meeting requirements.

Replication
Level 0: Journal discourages submission of replication studies—or says nothing.
Level 1: Journal encourages submission of replication studies.
Level 2: Journal encourages submission of replication studies and conducts blind review of results.
Level 3: Journal uses Registered Reports as a submission option for replication studies, with peer review before observing the study outcomes.
Level 1 is designed to have little to no barrier to adoption while also offering an incentive for openness. For example, under the analytic methods (code) sharing standard, authors must state in the text whether and where code is available. Level 2 has stronger expectations for authors but usually avoids adding resource costs to editors or publishers that adopt the standard. In Level 2, journals would require code to be deposited in a trusted repository and check that the link appears in the article and resolves to the correct location. Level 3 is the strongest standard but also may present some barriers to implementation for some journals. For example, the journals Political Analysis and Quarterly Journal of Political Science require authors to provide their code for review, and editors reproduce the reported analyses before publication. In the table, we provide “Level 0” for comparison of common journal policies that do not meet the transparency standards.
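To see how the modular levels could translate into journal workflow, the sketch below encodes a hypothetical journal’s chosen level for each standard and screens a submission against three of them. The standard names follow the table above; the policy values, Submission fields, and screening rules are illustrative assumptions, not part of the TOP guidelines themselves.

```python
# Hedged sketch: encode a journal's adopted TOP levels (0-3 per standard) and
# screen a submission against a few of them. Everything here is illustrative.
from dataclasses import dataclass
from typing import Optional

JOURNAL_POLICY = {  # hypothetical journal; values are adopted levels 0-3
    "citation": 1,
    "data_transparency": 2,
    "code_transparency": 2,
    "materials_transparency": 1,
    "design_transparency": 2,
    "study_preregistration": 1,
    "analysis_preregistration": 1,
    "replication": 1,
}

@dataclass
class Submission:
    cites_data_and_code: bool           # citation standard
    data_repository_url: Optional[str]  # data transparency: Level 2 needs a deposit
    data_exception_granted: bool        # ethics/IP exception flagged at submission
    code_repository_url: Optional[str]  # code transparency: Level 2 needs a deposit

def screen(sub: Submission, policy: dict) -> list:
    """Return a list of problems; an empty list means the checked standards pass."""
    problems = []
    if policy["citation"] >= 1 and not sub.cites_data_and_code:
        problems.append("cite data, code, and materials per author guidelines")
    if policy["data_transparency"] >= 2 and not (
        sub.data_repository_url or sub.data_exception_granted
    ):
        problems.append("post data to a trusted repository or request an exception")
    if policy["code_transparency"] >= 2 and not sub.code_repository_url:
        problems.append("post analysis code to a trusted repository")
    return problems

sub = Submission(True, None, False, "https://osf.io/example")  # hypothetical
print(screen(sub, JOURNAL_POLICY))  # -> flags the missing data deposit
```

Level 3 would add a step this sketch omits: rerunning the deposited code and comparing its output against the reported results before publication.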
Adoption. Defining multiple levels and distinct standards facilitates informed decision-making by journals. It also acknowledges the variation in evolving norms about research transparency. Depending on the discipline or publishing format, some of the standards may not be relevant for a journal. Journal and publisher decisions can be based on many factors—including their readiness to adopt modest to stronger transparency standards for authors, internal journal operations, and disciplinary norms and expectations. For example, in economics, many highly visible journals such as American Economic Review have already adopted strong policies requiring data sharing, whereas few psychology journals have comparable requirements.

In this way, the levels are designed to facilitate the gradual adoption of best practices. Journals may begin with a standard that rewards adherence, perhaps as a step toward requiring the practice. For example, Psychological Science awards badges for “open data,” “open materials,” and “preregistration” (12), and approximately 25% of accepted articles earned at least one badge in the first year of operation.
The Level 1 guidelines are designed to have minimal effect on journal efficiency and workflow while also having a measurable impact on transparency. Moreover, although higher levels may require greater implementation effort up front, such efforts may benefit publishers and editors and the quality of publications by, for example, reducing time spent on communication with authors and reviewers, improving standards of reporting, increasing detectability of errors before publication, and ensuring that publication-related data are accessible for a long time.
Evaluation and revision. An information commons and support team at the Center for Open Science is available (top@cos.io) to assist journals in selection and adoption of standards and will track adoption across journals. Moreover, adopting journals may suggest revisions that improve the guidelines or make them more flexible or adaptable for the needs of particular subdisciplines.
The present version of the guidelines is not the last word on standards for openness in science. As with any research enterprise, the available empirical evidence will expand with application and use of these guidelines. To reflect this evolutionary process, the guidelines are accompanied by a version number and will be improved as experience with them accumulates.
Conclusion. The journal article is central to the research communication process. Guidelines for authors define what aspects of the research process should be made available to the community to evaluate, critique, reuse, and extend. Scientists recognize the value of transparency, openness, and reproducibility. Improvement of journal policies can help those values become more evident in daily practice and ultimately improve the public trust in science, and science itself.
REFERENCES AND NOTES
1. M. McNutt, Science 343, 229 (2014).
2. E. Miguel et al., Science 343, 30 (2014).
3. M. S. Anderson, B. C. Martinson, R. De Vries, J. Empir. Res. Hum. Res. Ethics 2, 3 (2007).
4. J. P. A. Ioannidis, M. R. Munafò, P. Fusar-Poli, B. A. Nosek, S. P. David, Trends Cogn. Sci. 18, 235 (2014).
5. L. K. John, G. Loewenstein, D. Prelec, Psychol. Sci. 23, 524 (2012).
6. E. H. O’Boyle Jr., G. C. Banks, E. Gonzalez-Mule, J. Manage. 10.1177/0149206314527133 (2014).
7. B. A. Nosek, J. R. Spies, M. Motyl, Perspect. Psychol. Sci. 7, 615 (2012).
8. J. B. Asendorpf et al., Eur. J. Pers. 27, 108 (2013).
9. J. P. Simmons, L. D. Nelson, U. Simonsohn, Psychol. Sci. 22, 1359 (2011).
10. A. Franco, N. Malhotra, G. Simonovits, Science 345, 1502 (2014).
11. R. Rosenthal, Psychol. Bull. 86, 638 (1979).
12. E. Eich, Psychol. Sci. 25, 3 (2014).
13. E.-J. Wagenmakers, R. Wetzels, D. Borsboom, H. L. van der Maas, R. A. Kievit, Perspect. Psychol. Sci. 7, 632 (2012).
14. C. D. Chambers, Cortex 49, 609 (2013).
ACKNOWLEDGMENTS
This work was supported by the Laura and John Arnold Foundation.
SUPPLEMENTARY MATERIALS
www.sciencemag.org/content/348/6242/1422/suppl/DC1
10.1126/science.aab2374