Preprint

Conceptual replication study of fifteen JDM effects: Insights from the Polish sample

Abstract

We conducted pre-registered replications of 15 effects in the field of judgment and decision making (JDM). We aimed to test whether classical and modern JDM effects, including, among others, less-is-better, anchoring, and framing, generalize to a different language, culture, and current situation (the COVID-19 pandemic). The replicated studies were selected and conducted by undergraduate psychology students enrolled in a decision-making course. Two hundred and two adult volunteers completed an online battery of the replicated studies. Under a classical significance criterion (p < .05), seven effects replicated successfully (47%), five replicated partially (33%), and three did not replicate (20%). Even though the research materials differed from the originals in several ways, the replication rate in our project is slightly above rates reported in similar replication projects.


Preprint
Full-text available
A considerable proportion of psychological research has not been replicable; estimates of the share of nonreplicable results range from 9% to 77%. The extent to which vast proportions of studies in the field are replicable is still unknown, as researchers lack incentives for publishing individual replication studies. When preregistering replication studies via the Open Science Framework website (OSF, osf.io), researchers can publicly register their results without having to publish them and thus circumvent file-drawer effects. We analyzed data from 139 replication studies whose results were publicly registered on the OSF and found that, out of 62 reports that included the authors’ assessments, 23 were categorized as “informative failures to replicate” by the original authors. Twenty-four studies allowed for comparisons between the original and replication effect sizes: whereas 75% of the original effects were statistically significant, only 30% of the replication effects were. The replication effects were also significantly smaller than the original effects (approximately 38% of their size). Replication closeness did not moderate the difference between the original and the replication effects. Our results provide a glimpse into the replicability of studies from a wide range of psychological fields chosen for replication by independent groups of researchers. We invite researchers to browse the Replication Database (ReD) ShinyApp, which we created so that they can check whether seminal studies from their respective fields have been replicated. Our data and code are available online: https://osf.io/9r62x/