Citation bias favoring statistically significant studies was present in medical research
Anne-Sophie Jannot a,*, Thomas Agoritsas a,b, Angèle Gayet-Ageron a, Thomas V. Perneger a
a CRC & Division of Clinical Epidemiology, Department of Health and Community Medicine, University of Geneva and University Hospitals of Geneva, Rue Gabrielle Perret-Gentil 6, 1211 Genève 14, Switzerland
b Division of General Internal Medicine, University Hospitals of Geneva, Geneva, Switzerland
Accepted 28 September 2012
Objective: Statistically significant studies may be cited more often than negative studies on the same topic. We aimed to assess whether such citation bias is present across the medical literature.
Study Design and Setting: We conducted a cohort study of the association between statistical significance and citations. We selected all therapeutic intervention studies included in meta-analyses published between January and March 2010 in the Cochrane database, and retrieved citation counts of all individual studies using ISI Web of Knowledge. The association between the statistical significance of each study and the number of citations it received between 2008 and 2010 was assessed in mixed Poisson models.
Results: We identified 89 research questions addressed in 458 eligible articles. Significant studies were cited twice as often as nonsignificant studies (multiplicative effect of significance: 2.14; 95% confidence interval: 1.38-3.33). This association was partly because of the higher impact factor of journals where significant studies are published (adjusted multiplicative effect of significance: 1.14; 95% confidence interval: 0.87-1.51).
Conclusion: A citation bias favoring significant results occurs in medical research. As a consequence, treatments may seem more effective to the readers of medical literature than they really are. © 2013 Elsevier Inc. All rights reserved.
Keywords: Systematic bias; Medical literature analysis and retrieval system; Regression analysis; Journal impact factor; Meta-analysis; Publication bias
1. Introduction

Citation counts are sometimes used to judge the quality and societal impact of medical research. However, study attributes other than scientific merit drive citations [1]. For instance, positive or statistically significant findings may be cited more often than negative or nonsignificant studies on the same topic. This phenomenon, which we call "citation bias," may distort the perception of available scientific evidence among the users of scientific literature. Citation bias is less extreme than but analogous to publication bias [2], which occurs when the chance for a study to be published is increased if its outcome is statistically significant. As an illustration, Emerson et al. [3] showed that a fabricated manuscript with a significant outcome was more likely to be recommended for publication than an otherwise identical manuscript reporting no difference between treatments. In the same vein, it is possible that of two published articles, identical except for the statistical significance of the main finding, the "significant" study would be cited more often than the nonsignificant study. Such a citation bias, if it exists, would influence the construction of scientific knowledge [4].

To date, citation bias favoring more significant studies has only been shown for some medical topics [5-8]. Whether citation bias is a more general phenomenon remains unclear. In this study, we used a broad set of clinical questions reviewed in the Cochrane Database of Systematic Reviews to evaluate whether statistical significance was associated with citation counts.
2. Materials and methods
We conducted a cohort study by retrieving all citations of relevant publications pooled in meta-analyses on a broad set of medical questions assessing the efficacy of various therapeutic interventions.
Funding: This work was funded by the University Hospitals of Geneva.
Competing interests: None.
* Corresponding author. Tel.: +41 22 372 9038; fax: +41 22 372 90 35.
E-mail address: firstname.lastname@example.org (A.-S. Jannot).
0895-4356/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
Journal of Clinical Epidemiology 66 (2013) 296-301
What is new?

- Citation bias favoring significant results exists across medical fields. This bias is mediated by the impact factor of the journal where the article is published, as statistically significant findings are more likely published in journals with higher impact factors.

What this adds to what was known?

- To date, citation bias has been reported for specific topics. Our study shows that it is a general phenomenon in medical research.

What is the implication and what should change now?

- Looking for references in articles to explore the literature on a specific question should be avoided owing to a possible citation bias, as treatments may thus seem more effective to the readers of medical literature than they really are.
2.1. Study selection and data extraction

In November 2011, we extracted all reviews on therapeutic interventions from the Cochrane Database of Systematic Reviews published in January, February, and March 2010. We chose Cochrane reviews because of their well-established quality and methodological rigor [9]. To assess current citation bias, we focused on a recent period and restricted the search to reviews with the record status of "new review" or "new search." In each of the selected reviews, we used the first forest plot appearing in the publication that met the following criteria: at least two retrievable publications in ISI Web of Knowledge published before 2008, a binary outcome, availability of the numbers of patients in the two treatment groups (experimental and control or placebo), and the numbers of patients experiencing the outcome in each treatment group. If such a forest plot was not available, the review was excluded.
To allow sufficient opportunity for citation, we selected only articles published until the end of 2007 in each forest plot. For each article, we extracted the denominators and numerators for risks (numbers of events and numbers of patients in each study arm). The outcome for which these numbers were extracted was the one analyzed in the forest plot and not necessarily the primary outcome of the publication. We also retrieved whether the pooled effect of the meta-analysis was statistically significant or not, assessed here as a proxy of the true effect of the intervention. We extracted the delay in years since publication. Finally, we obtained the journal's impact factor from the ISI Web of Knowledge 2010 Journal Citation Reports. When the journal was no longer published, we used the last available impact factor in ISI. We categorized meta-analyses into seven clinical fields: cancer, cardiovascular disease, infectious disease, neurology, mother and child, psychiatry, and other.
The dependent variable was the number of citations accrued by each article from 2008 to 2010 according to the ISI Web of Knowledge. We subtracted the Cochrane review citation used to identify the relevant publications from the total number of citations, if found among the citing articles. The main independent variable was the statistical significance of each individual study, defined as the P-value of the χ² test (or exact test when necessary) testing the contrast between the two study arms for the outcome retrieved by the meta-analysis, and dichotomized at the threshold of 0.05.
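The study-level significance described above can be sketched in a few lines. A minimal stdlib Python illustration of the 2×2 χ² test dichotomized at 0.05 (the function name and example counts are hypothetical, and the real analysis fell back to an exact test when the χ² approximation was inappropriate):

```python
import math

def chi2_significance(events_exp, n_exp, events_ctl, n_ctl, alpha=0.05):
    """Pearson chi-square test on the 2x2 table of a two-arm trial with a
    binary outcome; for 1 degree of freedom the p-value is
    p = erfc(sqrt(chi2 / 2)). Returns (chi2, p, significant?)."""
    a, b = events_exp, n_exp - events_exp   # events / non-events, experimental arm
    c, d = events_ctl, n_ctl - events_ctl   # events / non-events, control arm
    n = n_exp + n_ctl
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p, p < alpha

# Hypothetical trial: 30/100 events vs. 15/100 events
chi2, p, significant = chi2_significance(30, 100, 15, 100)  # p ≈ 0.011, significant
```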
The theoretical framework we used to guide the analysis is represented in Fig. 1: the citation count depends directly on the journal's impact factor, the publication year, the research question, and the medical area; and, through different pathways (both indirect and direct), on sample size, the odds ratio that characterizes the association between treatment and outcome, and its statistical significance. One example of such an indirect pathway is the relationship between the significance of the study and the citation count: a significant result may be published in a more prominent journal, which in itself may lead to more citations.
2.2. Statistical analysis
As citations are count data, we used a mixed Poisson regression model assessing the predictors (fixed effects) associated with citation counts. The clinical question (i.e., the corresponding meta-analysis used to identify relevant publications) was taken as a random effect (random intercept) in the model.

The independent variables (fixed effects) in the model were the statistical significance of the result, the delay since the study publication, the logarithm of the study sample size, the clinical field, the logarithm of the journal impact factor, the odds ratio characterizing the association between the study outcome and the study arm, and the significance of the pooled effect of the corresponding meta-analysis (significant vs. nonsignificant). We used logarithmic transformations where indicated to improve the fit of linear models based on examination of scatter plots against the logarithm of citation counts. Regression coefficients were estimated using penalized quasi-likelihood to take into account overdispersion (function glmmPQL, package MASS, R 2.13.2; R Development Core Team, Vienna, Austria).
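To make the "multiplicative effect" reported by such a model concrete: in a plain Poisson model log E[y] = b0 + b1·x with a single binary predictor x, exp(b1) is a rate ratio whose maximum-likelihood estimate reduces to the ratio of group means. A stdlib Python sketch on toy data (the paper's actual model adds covariates and a per-question random intercept via glmmPQL, which this deliberately omits):

```python
import math

def poisson_rate_ratio(citations, significant):
    """MLE of exp(b1) in a Poisson model log E[y] = b0 + b1*x with one
    binary predictor x: it reduces to the ratio of the two group means,
    i.e. the multiplicative effect of significance on citation counts."""
    sig = [y for y, s in zip(citations, significant) if s]
    non = [y for y, s in zip(citations, significant) if not s]
    b1 = math.log(sum(sig) / len(sig)) - math.log(sum(non) / len(non))
    return math.exp(b1)

# Toy data: three significant and three nonsignificant studies
counts = [8, 12, 10, 4, 6, 5]
flags = [True, True, True, False, False, False]
ratio = poisson_rate_ratio(counts, flags)  # (30/3) / (15/3) = 2.0
```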
We first performed mixed-model univariate analyses with each independent variable. Regression coefficients for the different clinical fields were standardized by subtracting the mean coefficient of all medical fields. We then performed a multivariate analysis with all variables that had been shown to be significantly associated with the citation count in univariate analyses.
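The standardization of field coefficients amounts to re-expressing each coefficient as a deviation from the mean over all fields, so that the reference is "the average field" rather than one arbitrary field. A minimal sketch (field names and values are made up for illustration):

```python
def center_on_mean(coefs):
    """Re-express each clinical-field coefficient as a deviation from the
    mean coefficient across all fields; centered values sum to zero."""
    mean = sum(coefs.values()) / len(coefs)
    return {field: c - mean for field, c in coefs.items()}

# Hypothetical log-scale field coefficients
centered = center_on_mean({"cancer": 0.6, "cardio": 0.0, "mother_child": -0.3})
```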
To test whether the impact factor mediated the effect of the significance of the study on citation count (Fig. 1), we applied the Sobel test [10], which tests the significance of the change in a regression coefficient after adjustment. Mediation occurs when the effect of an independent variable (e.g., statistical significance) on a dependent variable (e.g., citation count) is transmitted through a mediating variable (e.g., the impact factor). Bootstrapping by research question strata (package boot, R 2.13.2), we generated 1,000 bootstrap samples from the original data set, and in each we obtained the regression coefficient for statistical significance with and without adjustment for impact factor, as well as the difference between the two. All analyses for mediation were adjusted for all variables significantly associated with the citation count in univariate analyses.
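The Sobel test has a closed form: for an indirect path with coefficients a (exposure → mediator) and b (mediator → outcome) and their standard errors, z = ab / sqrt(b²·SEa² + a²·SEb²). A stdlib Python sketch with hypothetical coefficients (the paper assessed the actual coefficient change by stratified bootstrap rather than from this formula alone):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z statistic for the indirect effect a*b of an exposure on an
    outcome through a mediator (a: exposure -> mediator, b: mediator ->
    outcome), with a two-sided p-value from the normal distribution."""
    z = (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
    return z, p

# Hypothetical path: significance -> log impact factor (a),
# log impact factor -> log citation count (b)
z, p = sobel_z(a=0.30, se_a=0.08, b=2.2, se_b=0.25)  # z ≈ 3.45, p < 0.01
```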
3. Results

The search yielded 242 systematic reviews on therapeutic interventions published in the Cochrane Library between January and March 2010. After applying the eligibility criteria, we retained 89 research questions: 5 in the field of cancer, 12 in cardiovascular diseases, 15 in infectious diseases, 5 in neurology, 29 in the field of mother and child, 10 in psychiatry, and 13 that did not belong to any of these clinical fields and were thus grouped under "other" (see Table 1 in the supplementary data).

These 89 meta-analyses included 470 retrievable articles published before 2008 in 206 journals (Table 1). We could not retrieve the journal's impact factor for 12 articles, all published before 1991, and therefore removed them from the analysis. The median and mean numbers of retrievable articles for each forest plot were 3 and 5, respectively. For 31 forest plots, only two such articles were retrieved, whereas for 26, five or more publications were retrieved. The median number of citations per publication was 4, the mean was 14, and the distribution of the number of citations per research question was skewed toward high values (Fig. 2). Seven publications had extreme citation counts, with more than 200 citations each.
In univariate analysis, all factors except the logarithm of the odds ratio and the significance of the pooled effect of the meta-analysis were significantly associated with the citation count (Table 2). Significant articles were cited more than twice as often as nonsignificant ones. The delay from study publication was correlated with the citation count, with 7% fewer citations for each additional year since publication. Mother and child articles were the least cited; other fields had similar citation counts. The logarithm of the impact factor was also strongly correlated with the citation count, with close to nine times as many citations when the logarithm of the impact factor increased by one (in other words, when the impact factor was multiplied by 10). The same held for sample size, with close to five times as many citations when the sample size was multiplied by 10.
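Because impact factor and sample size enter the model on a log10 scale, the reported multipliers apply per 10-fold change; for any other fold change, the multiplier scales as a power. A small stdlib Python illustration (the ~9x figure is the one reported above; the doubling value is derived arithmetic, not a result from the paper):

```python
import math

def fold_multiplier(effect_per_10fold, fold):
    """When a predictor enters a Poisson model as log10(x), a 10-fold
    increase multiplies expected counts by `effect_per_10fold`; a `fold`
    change multiplies them by effect_per_10fold ** log10(fold)."""
    return effect_per_10fold ** math.log10(fold)

# With ~9x citations per 10-fold impact-factor increase, doubling the
# impact factor corresponds to roughly 1.9x the expected citations
doubling = fold_multiplier(9.0, 2.0)  # ≈ 1.94
```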
After adjustment for mother and child field and delay from study publication, statistical significance remained significantly associated with the logarithm of the citation count (multiplicative effect 2.27, P < 0.001). Further adjustment for sample size, a possible confounding factor, only moderately decreased the effect of statistical significance (the multiplicative effect went from 2.27 to 1.85 and remained statistically significant). To examine a possible mediation by the impact factor of the journal where the study is published, we also added the logarithm of the impact factor to the model that already included mother and child field, delay from study publication, and sample size. This caused the multiplicative effect of statistical significance to further decrease to 1.14. This association was no longer statistically significant (Table 3). The Sobel test confirmed that the decrease of the coefficient for statistical significance between the two models (with and without the logarithm of the impact factor, from 1.85 to 1.14) was statistically significant (P < 0.01, estimated by bootstrap).

Fig. 1. Theoretical framework illustrating the determination of citation count.

Table 1. Characteristics of included publications.
4. Discussion

This study showed that the citation count was associated with the statistical significance of the primary result, with more than twice as many citations when the reported result was significant. To date, publications on citation bias focused on specific research questions, so their results could not be generalized [5-8]. Our study allows us to generalize the existence of a citation bias in medical research taken globally. This effect of statistical significance is mediated by the impact factor, as studies reporting statistically significant findings were more likely published in journals with higher impact factors. However, we did not find any association between the strength of the treatment effect (captured by the odds ratio) and the citation count. This lack of association has been shown by others [11]. We also did not find any association with the significance of the pooled estimate of the corresponding meta-analyses, a proxy of the true effect of the intervention tested.
Studies with significant outcomes may be cited more often than studies with nonsignificant outcomes for several reasons. First, as previous studies suggested, we confirmed that the journal's impact factor was a strong determinant of citation counts at the study level [5,12,13]. As described in the theoretical framework, the impact factor may be considered an intervening causal variable between statistical significance and citations: significant studies have a higher likelihood of being published in more prestigious journals, and articles in such journals may be more frequently cited independently of their intrinsic quality. To a large extent, the effect of statistical significance on citation count was mediated by the impact factor.
Second, studies with significant results may be cited in clinical guidelines, unlike negative studies, which do not change practice. As we did not examine the sources of citations, we do not know the proportion of clinical guidelines among the citing articles. However, as there was no association between citation count and the pooled significance of the corresponding meta-analysis, this phenomenon was likely marginal.

Third, a nonsignificant outcome may be attributed to a lack of power or low methodological quality, and less powerful studies may be deemed less worthy of citation. We indeed found that the effect size for significance slightly decreased when adjusting for sample size.

Fourth, authors might perform an informal review when reading the literature on a specific topic and, when concluding that the intervention seems effective, cite only the significant studies, leading to a citation bias.
Fig. 2. Histogram of the mean number of retrieved citations per publication for each meta-analysis.
Table 2. Univariate Poisson mixed model of predictors of citation count.
Finally, another reason relates to cognitive biases when citing an article: either confirmatory bias, when the author is persuaded that the treatment is effective and therefore cites only significant studies, or attentional bias, as an author may be more prone to recall significant articles when reading the literature. It is plausible that a combination of these mechanisms is at play.

Other authors have reported on the existence of a "reference bias," a phenomenon that is related but not identical to citation bias. Reference bias occurs when an article preferentially cites results that concur with its conclusions [14]. Thus, reference bias affects a given citing article, whereas citation bias affects the literature in general.
We considered citation counts during a given time period instead of all citations (adjusting for time since publication). We chose this variable because citation patterns may vary over time, partly because of the changing availability of journals, the increasing performance of search engines, and the publication of review articles or meta-analyses (cited instead of the original article). Therefore, we focused on a time window during which citation behavior was likely stable.
Other determinants of citation counts have been identified by others [1,16]. These include article-dependent factors such as the number of coauthors and the length of the article [16,17], and author-dependent factors such as acquaintance between authors and parochialism, or technical problems owing to incorrect citing of sources and online availability [19]. The importance of such factors remains controversial. Furthermore, because such variables are not likely to be associated with the statistical significance of the study, they are not potential confounding factors for the association between citation count and statistical significance.
The main strength of this study is that we assessed citation bias in a wide array of research questions and clinical fields. By using a mixed model, we were able to adjust the analysis for each specific research question. This is desirable because some research questions are more often debated in the literature than others, and this characteristic may be associated (although noncausally) with the statistical significance of the relevant studies. Indeed, we found that the average citation count varied across clinical fields. The research question was taken into account in some previous studies of citation bias [6,11], but not in others [5,7].
The main limitation of our study is that we did not assess why each article was cited. In most articles, several results of statistical significance are presented, such as for the primary and secondary outcomes of a clinical trial. Therefore, an article could be cited for any of these outcomes and not specifically for the research question of interest. This would introduce random noise into the model, which may have weakened the association between statistical significance and citation counts. Another limitation is that we only extracted citation counts from ISI Web of Knowledge. Numbers of citations from Scopus, Google Scholar, and ISI Web of Knowledge are highly correlated, but not exactly the same [20]. ISI Web of Knowledge covers only 8,700 journals, mainly in the English language [21], and therefore retrieves fewer citations than other sources. However, ISI journals tend to be the most authoritative and most scientifically influential. Indeed, 458 of 470 articles included in the meta-analyses were published in ISI-indexed journals. We also did not measure the quality of the published articles, a possible confounding factor. However, as Cochrane meta-analyses only include articles that fulfill minimum quality criteria [9], article quality probably did not play an important role in our study.
This work documents the existence of citation bias across medical fields, although this bias is moderate. Looking for references in articles to explore the literature on a specific question should be avoided owing to a possible citation bias, as treatments may thus seem more effective to the readers of medical literature than they really are. Journal editors should recommend that authors cite systematic reviews rather than original articles when possible, to limit this bias.
Supplementary data associated with this article can be found, in the online version, at doi: 10.1016/j.jclinepi.
Table 3. Multivariate Poisson mixed model of predictors of citation count.
References

[1] Bornmann L, Daniel HD. What do citation counts measure? A review of studies on citing behavior. J Doc 2008;64(1):45-80.
[2] Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867-72.
[3] Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med 2010;170.
[4] Greenberg SA. How citation distortions create unfounded authority: analysis of a citation network. BMJ 2009;339:b2680.
[5] Callaham M, Wears RL, Weber E. Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA 2002;287:2847-50.
[6] Etter JF, Stapleton J. Citations to trials of nicotine replacement therapy were biased toward positive results and high-impact-factor journals. J Clin Epidemiol 2009;62:831-7.
[7] Kjaergard LL, Gluud C. Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol 2002;55:407-10.
[8] Nieminen P, Rucker G, Miettunen J, Carpenter J, Schumacher M. Statistically significant papers in psychiatry were cited more often than others. J Clin Epidemiol 2007;60:939-46.
[9] Jadad AR, Cook DJ, Jones A, Klassen TP, Tugwell P, Moher M, et al. Methodology and reports of systematic reviews and meta-analyses: a comparison of Cochrane reviews with articles published in paper-based journals. JAMA 1998;280:278-80.
[10] Sobel ME. Asymptotic confidence intervals for indirect effects in structural equation models. In: Leinhardt S, editor. Sociological methodology. Washington, DC: American Sociological Association; 1982. p. 290-312.
[11] Leimu R, Koricheva J. What determines the citation frequency of ecological papers? Trends Ecol Evol 2005;20(1):28-32.
[12] Fu LD, Aliferis C. Models for predicting and explaining citation count of biomedical articles. AMIA Annu Symp Proc 2008.
[13] Perneger TV. Citation analysis of identical consensus statements revealed journal-related bias. J Clin Epidemiol 2010;63:660-4.
[14] Schmidt LM, Gotzsche PC. Of mites and men: reference bias in narrative review articles - a systematic review. J Fam Pract 2005;54.
[15] Kulkarni AV, Busse JW, Shams I. Characteristics associated with citation rate of the medical literature. PLoS One 2007;2(5):5.
[16] Lokker C, McKibbon KA, McKinlay RJ, Wilczynski NL, Haynes RB. Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study. BMJ 2008;336:655-7.
[17] Baldi S. Normative versus social constructivist processes in the allocation of citations: a network-analytic model. Am Sociol Rev 1998.
[18] Case DO, Higgins GM. How can we investigate citation behavior? A study of reasons for citing literature in communication. J Am Soc Inf Sci Technol 2000;51(7):635-45.
[19] Eysenbach G. Citation advantage of open access articles. PLoS Biol.
[20] Kulkarni AV, Aziz B, Shams I, Busse JW. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. JAMA 2009;302:1092-6.
[21] Meho LI, Yang K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. J Am Soc Inf Sci Technol 2007;58(13):2105-25.