Citation bias favoring statistically significant studies was present in medical research

CRC & Division of Clinical Epidemiology, Department of Health and Community Medicine, University of Geneva and University Hospitals of Geneva, Rue Gabrielle Perret-Gentil 6, 1211 Genève 14, Switzerland.
Journal of Clinical Epidemiology (Impact Factor: 3.42). 03/2013; 66(3):296-301. DOI: 10.1016/j.jclinepi.2012.09.015
Source: PubMed


Statistically significant studies may be cited more often than nonsignificant studies on the same topic. We aimed to assess whether such a citation bias is present across the medical literature.
We conducted a cohort study of the association between statistical significance and citations. We selected all therapeutic intervention studies included in meta-analyses published between January and March 2010 in the Cochrane database and retrieved citation counts for each individual study from ISI Web of Knowledge. The association between the statistical significance of each study and the number of citations it received between 2008 and 2010 was assessed using mixed Poisson models.
We identified 89 research questions addressed in 458 eligible articles. Significant studies were cited twice as often as nonsignificant studies (multiplicative effect of significance: 2.14; 95% confidence interval [CI]: 1.38-3.33). This association was partly explained by the higher impact factor of the journals in which significant studies are published (adjusted multiplicative effect of significance: 1.14; 95% CI: 0.87-1.51).
A citation bias favoring significant results occurs in medical research. As a consequence, treatments may appear more effective to readers of the medical literature than they really are.
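To make the modeling approach concrete, here is a minimal sketch of this kind of analysis in Python with statsmodels: citation counts are regressed on statistical significance with a Poisson model that accounts for clustering of studies within research questions. The paper uses mixed Poisson models; the sketch below substitutes a Poisson GEE with an exchangeable correlation structure as a simpler stand-in, and the toy data and column names (citations, significant, log_impact_factor, question_id) are illustrative assumptions, not the study's actual dataset.

```python
# Sketch of a Poisson model of citation counts with statistical significance as
# the exposure, accounting for clustering of studies within research questions.
# A Poisson GEE with exchangeable correlation stands in for the paper's mixed
# Poisson models; all data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: one row per study.
rng = np.random.default_rng(0)
n = 458
df = pd.DataFrame({
    "question_id": rng.integers(0, 89, n),          # research question (cluster)
    "significant": rng.integers(0, 2, n),           # 1 if the study was significant
    "log_impact_factor": rng.normal(1.0, 0.5, n),   # journal impact factor (log scale)
})
df["citations"] = rng.poisson(np.exp(0.5 + 0.7 * df["significant"]))

# Unadjusted model: multiplicative effect of significance on citation counts.
unadj = smf.gee("citations ~ significant", groups="question_id", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()

# Adjusted model: does journal impact factor account for the association?
adj = smf.gee("citations ~ significant + log_impact_factor", groups="question_id",
              data=df, family=sm.families.Poisson(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

# Exponentiated coefficients are citation rate ratios.
print(np.exp(unadj.params["significant"]), np.exp(adj.params["significant"]))
```

Exponentiating the coefficient on significance yields a rate ratio, which is how the "multiplicative effect of significance" reported above (2.14 unadjusted, 1.14 after adjustment for journal impact factor) should be read.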

    • "If gray literature or difficult-tolocate studies with large or statistically significant results are more likely to be cited than other gray literature studies with small or nonsignificant effects, this could potentially bias the findings of the larger meta-analysis. Several studies have documented that studies with significant or positive results are more likely to be cited by other studies (Etter & Stapleton, 2009; Jannot et al., 2013; Nieminen, Rucker, Miettunen, Carpenter, & Schumacher, 2007). However, in an analysis of articles submitted to the Society for Academic Emergency Medicine meeting, Callaham et al. (2002) found no evidence that positive study outcomes were correlated with subsequent citations. "
    ABSTRACT: This study examined dissemination and reporting biases in the brief alcohol intervention literature. We used retrospective data from 179 controlled trials included in a meta-analysis on brief alcohol interventions for adolescents and young adults. We examined whether the magnitude and direction of effect sizes were associated with publication type, identification source, language, funding, time lag between intervention and publication, number of reports, journal impact factor, and subsequent citations. Results indicated that effect sizes were larger for studies that had been funded (b = 0.14, 95% confidence interval [CI] [0.04, 0.23]), had a shorter time lag between intervention and publication (b = -0.03, 95% CI [-0.05, -.001]), and were cited more frequently (b = 0.01, 95% CI [+0.00, 0.01]). Studies that were cited more frequently by other authors also had greater odds of reporting positive effects (odds ratio = 1.10, 95% CI [1.02, 1.18]). Results indicated that time lag bias has increased recently: Larger and positive effect sizes were published more quickly in recent years. We found no evidence, however, that the magnitude or direction of effects was associated with location source, language, or journal impact factor. We conclude that dissemination biases may indeed occur in the social and behavioral science literature, as has been consistently documented in the medical literature. As such, primary researchers, journal reviewers, editors, systematic reviewers, and meta-analysts must be cognizant of the causes and consequences of these biases, and commit to engage in ethical research practices that attempt to minimize them. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Psychology of Addictive Behaviors 08/2014; 29(1). DOI:10.1037/adb0000014 · 2.09 Impact Factor
    ABSTRACT: Scholarly publishing is the cornerstone of advancing our understanding of the world around us. A staggering amount of research is conducted each day, and this research would effectively serve no purpose without a system for its dissemination. However, academic publishing is slanted toward positive results, with negative results becoming scarcer in the literature (Fanelli, 2012). Countless experiments without splashy results end up in dusty lab notebooks, forever closed off from the rest of the scientific community.
    04/2013; 1(4):51-52. DOI:10.14304/SURYA.JPR.V1N4.8
    ABSTRACT: Publication bias undermines the integrity of the evidence base by inflating apparent drug efficacy and minimizing drug harms, thus skewing the risk-benefit ratio. This paper reviews the topic of publication bias with a focus on drugs prescribed for psychiatric conditions, especially depression, schizophrenia, bipolar disorder, and autism. Publication bias is pervasive; although psychiatry/psychology may be the most seriously afflicted field, it occurs throughout medicine and science. Responsibility lies with various parties (authors as well as journals, academia as well as industry), so the motives appear to extend beyond the financial interests of drug companies. The desire for success, in combination with cognitive biases, can also influence academic authors and journals. Amid the flood of new medical information coming out each day, the attention of the news media and academic community is more likely to be captured by studies whose results are positive or newsworthy. In the peer review system, a fundamental flaw arises from the fact that authors usually write manuscripts after they know the results. This allows hindsight and other biases to come into play, so data can be "tortured until they confess" (a detailed example is given). If a "publishable" result cannot be achieved, non-publication remains an option. To address publication bias, various measures have been undertaken, including registries of clinical trials. Drug regulatory agencies can provide valuable unpublished data. It is suggested that journals borrow from the FDA review model. Because the significance of study results biases reviewers, results should be excluded from review until after a preliminary judgment of study scientific quality has been rendered, based on the original study protocol. Protocol publication can further enhance the credibility of the published literature.
    CNS Drugs 05/2013; 27(6). DOI:10.1007/s40263-013-0067-9 · 5.11 Impact Factor