Preprint

Taking stock of the credibility revolution: Scientific reform 2011-2019


References
Article
Full-text available
For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Assessment of researchers still rarely includes considerations related to trustworthiness, rigor, and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded for behaviors that strengthen research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle, we provide a rationale for its inclusion and examples of where it is already being adopted.
Article
Full-text available
Reproducibility is essential to science, yet a distressingly large number of research findings do not seem to replicate. Here I discuss one underappreciated reason for this state of affairs. I make my case by noting that, due to artifacts, several of the replication failures of the vastly advertised Open Science Collaboration’s Reproducibility Project: Psychology turned out to be invalid. Although these artifacts would have been obvious on perusal of the data, such perusal was deemed undesirable because of its post hoc nature and was left out. However, while data do not lie, unforeseen confounds can render them unable to speak to the question of interest. I look further into one unusual case in which a major artifact could be removed statistically—the nonreplication of the effect of fertility on partnered women’s preference for single over attached men. I show that the “failed replication” datasets contain a gross bias in stimulus allocation which is absent in the original dataset; controlling for it replicates the original study’s main finding. I conclude that, before being used to make a scientific point, all data should undergo a minimal quality control—a provision, it appears, not always required of those collected for purpose of replication. Because unexpected confounds and biases can be laid bare only after the fact, we must get over our understandable reluctance to engage in anything post hoc. The reproach attached to p-hacking cannot exempt us from the obligation to (openly) take a good look at our data.
Article
Full-text available
Most scientific research is conducted by small teams of investigators who together formulate hypotheses, collect data, conduct analyses, and report novel findings. These teams operate independently as vertically integrated silos. Here we argue that scientific research that is horizontally distributed can provide substantial complementary value, aiming to maximize available resources, promote inclusiveness and transparency, and increase rigor and reliability. This alternative approach enables researchers to tackle ambitious projects that would not be possible under the standard model. Crowdsourced scientific initiatives vary in the degree of communication between project members, from largely independent work curated by a coordination team to crowd collaboration on shared activities. The potential benefits and challenges of large-scale collaboration span the entire research process: ideation, study design, data collection, data analysis, reporting, and peer review. Complementing traditional small science with crowdsourced approaches can accelerate the progress of science and improve the quality of scientific research.
Article
Full-text available
A wide range of disciplines are building preprint services: web-based systems that enable publishing non-peer-reviewed scholarly manuscripts before publication in a peer-reviewed journal. We have quantitatively surveyed nine of the largest English-language preprint services offered by the Center for Open Science (COS) and available through an Application Programming Interface. All of the services we investigate also permit the submission of postprints, non-typeset versions of peer-reviewed manuscripts. The data indicate that all services are growing, but with submission rates below those of more mature services (e.g., bioRxiv). The trend of the preprint-to-postprint ratio for each service indicates that recent growth is a result of more preprint submissions. The nine COS services we investigate host papers that appear in a range of peer-reviewed journals, and many of these publication venues are not listed in the Directory of Open Access Journals. As a result, COS services function as open repositories for peer-reviewed papers that would otherwise be behind a paywall. We further analyze the coauthorship network for each COS service, which indicates that the services have many small connected components, and the largest connected component encompasses only a small percentage of total authors on each service. When comparing the papers submitted to each service, we observe topic overlap measured by keywords self-assigned to each manuscript, indicating that search functionalities would benefit from cutting across the boundaries of a single service. Finally, though annotation capabilities are integrated into all COS services, they are rarely used by readers. Our analysis of these services can be a benchmark for future studies of preprint service growth.
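The coauthorship-network summary described above can be illustrated with a minimal sketch; this is not the authors' actual pipeline. It assumes only that each submission's author list is available, for example harvested from a service's API, and the `papers` variable below is purely illustrative. The sketch builds an undirected coauthorship graph and reports how much of the author population the largest connected component covers.

```python
# Minimal sketch, assuming each paper's author list has already been collected
# (e.g., from a preprint service's API); the `papers` data is illustrative only.
from itertools import combinations

import networkx as nx

papers = [
    ["author_a", "author_b"],
    ["author_b", "author_c"],
    ["author_d"],
    ["author_e", "author_f"],
]

G = nx.Graph()
for authors in papers:
    G.add_nodes_from(authors)                    # keep single-author papers in the graph
    G.add_edges_from(combinations(authors, 2))   # connect every pair of coauthors

components = list(nx.connected_components(G))
largest = max(components, key=len)

print(f"{len(components)} connected components; largest covers "
      f"{len(largest) / G.number_of_nodes():.1%} of authors")
```

On the toy data above this reports three components, with the largest covering half of the authors; run against real submission metadata, the same summary yields the kind of fragmentation statistic the abstract describes.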
Article
Full-text available
In this article, we assess the 31 articles published in Basic and Applied Social Psychology (BASP) in 2016, which is one full year after the BASP editors banned the use of inferential statistics. We discuss how the authors collected their data, how they reported and summarized their data, and how they used their data to reach conclusions. We found multiple instances of authors overstating conclusions beyond what the data would support if statistical significance had been considered. Readers would be largely unable to recognize this because the necessary information to do so was not readily available.
Preprint
Full-text available
To appear in Scholarship of Teaching and Learning in Psychology
Article
During the methods crisis in psychology and other sciences, much discussion developed online in forums such as blogs and other social media. Hence, this increasingly popular channel of scientific discussion itself needs to be explored to inform current controversies, record the historical moment, improve methods communication, and address equity issues. Who posts what about whom, and with what effect? Does a particular generation or gender contribute more than another? Do blogs focus narrowly on methods, or do they cover a range of issues? How do they discuss individual researchers, and how do readers respond? What are some impacts? Web-scraping and text-analysis techniques provide a snapshot characterizing 41 current research-methods blogs in psychology. Bloggers mostly represented psychology's traditional leadership's demographic categories: primarily male, mid- to late career, associated with American institutions, White, and with established citation counts. As methods blogs, their posts mainly concern statistics, replication (particularly statistical power), and research findings. The few posts that mentioned individual researchers substantially focused on replication issues; they received more views, social-media impact, comments, and citations. Male individual researchers were mentioned much more often than female researchers. Further data can inform perspectives about these new channels of scientific communication, with the shared aim of improving scientific practices.