The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: The case of depression

Psychological Medicine (cambridge.org/psm)

Editorial

Cite this article: de Vries YA, Roest AM, de Jonge P, Cuijpers P, Munafò MR, Bastiaansen JA (2018). The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression. Psychological Medicine 1–3. https://doi.org/10.1017/S0033291718001873

Received: 4 September 2017; Revised: 29 November 2017; Accepted: 26 June 2018

Key words: antidepressants; bias; citation bias; depression; psychotherapy; reporting bias

Author for correspondence: Y. A. de Vries, E-mail: y.a.de.vries@rug.nl

© Cambridge University Press 2018. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Y. A. de Vries1,2, A. M. Roest1,2, P. de Jonge1,2, P. Cuijpers3, M. R. Munafò4,5 and J. A. Bastiaansen1,6

1 Department of Psychiatry, Interdisciplinary Center Psychopathology and Emotion regulation, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; 2 Developmental Psychology, Department of Psychology, University of Groningen, Groningen, the Netherlands; 3 Department of Clinical, Neuro-, and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; 4 MRC Integrative Epidemiology Unit, University of Bristol, Bristol, UK; 5 UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK; and 6 Department of Education and Research, Friesland Mental Health Care Services, Leeuwarden, the Netherlands
Introduction
Evidence-based medicine is the cornerstone of clinical practice, but it is dependent on the
quality of evidence upon which it is based. Unfortunately, up to half of all randomized controlled trials (RCTs) have never been published, and trials with statistically significant findings
are more likely to be published than those without (Dwan et al., 2013). Importantly, negative
trials face additional hurdles beyond study publication bias that can result in the disappearance
of non-significant results (Boutron et al., 2010; Dwan et al., 2013; Duyx et al., 2017). Here, we
analyze the cumulative impact of biases on apparent efficacy, and discuss possible remedies,
using the evidence base for two effective treatments for depression: antidepressants and
psychotherapy.
Reporting and citation biases
We distinguish among four major biases, although others exist: study publication bias, outcome reporting bias, spin, and citation bias. While study publication bias involves non-publication of an entire study, outcome reporting bias refers to non-publication of negative
outcomes within a published article or to switching the status of (non-significant) primary
and (significant) secondary outcomes (Dwan et al., 2013). Both biases pose an important
threat to the validity of meta-analyses (Kicinski, 2014).
Trials that faithfully report non-significant results will yield accurate effect size estimates,
but the interpretation of those results can still be positively biased, which may affect apparent efficacy.
Reporting strategies that could distort the interpretation of results and mislead readers are
defined as spin (Boutron et al., 2010). Spin occurs when authors conclude that the treatment
is effective despite non-significant results on the primary outcome, for instance by focusing on
statistically significant, but secondary, analyses (e.g. instead of concluding that treatment X was
not more effective than placebo, concluding that treatment X was well tolerated and was effective in patients who had not received prior therapy). If an article has been spun, treatments are
perceived as more beneficial (Boutron et al., 2014). Finally, citation bias is an obstacle to ensuring that negative findings are discoverable. Studies with positive results receive more citations
than negative studies (Duyx et al., 2017), leading to a heightened visibility of positive results.
The evidence base for antidepressants
We assembled a cohort of 105 depression trials, of which 74 were also included in a previous
study on publication bias (Turner et al., 2008); we added 31 trials of novel antidepressants
(approved after 2008) from the Food and Drug Administration (FDA) database (see online
Supplementary materials). Pharmaceutical companies must preregister all trials they intend
to use to obtain FDA approval; hence, trials with non-significant results, even if unpublished,
are still accessible.
Figure 1 demonstrates the cumulative impact of reporting and citation biases. Of the 105 antidepressant trials, 53 (50%) were considered positive by the FDA and 52 (50%) were considered negative or questionable (Fig. 1a). While all but one of the positive trials (98%) were
published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative (Fig. 1b). Ten negative trials, however, became positive in the published literature, by omitting unfavorable outcomes or switching the status of the primary
and secondary outcomes (Fig. 1c). Without access to the FDA reviews, it would not have been possible to conclude that these trials, when analyzed according to protocol, were not positive. Among the remaining 15 (19%) negative trials, five were published with spin in the abstract (i.e. concluding that the treatment was effective). For instance, one article reported non-significant results for the primary outcome (p = 0.10), yet concluded that the trial "demonstrates an antidepressant effect for fluoxetine that is significantly more marked than the effect produced by placebo" (Rickels et al., 1986). Five additional articles contained mild spin (e.g. suggesting the treatment is at least numerically better than placebo). One article lacked an abstract, but the discussion section concluded that there was a trend for efficacy. Hence, only four (5%) of 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial (Fig. 1d). Compounding the problem, positive trials were cited three times as frequently as negative trials (92 v. 32 citations in Web of Science, January 2016, p < 0.001; see online Supplementary material for further details) (Fig. 1e). Among negative trials, those with (mild) spin in the abstract received an average of 36 citations, while those with a clearly negative abstract received 25 citations. While this might suggest a synergistic effect between spin and citation biases, whereby negatively presented negative studies receive especially few citations (de Vries et al., 2016), this difference was not statistically significant (p = 0.50), likely due to the small sample size. Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.
Fig. 1. The cumulative impact of reporting and citation biases on the evidence base for antidepressants. (a) displays the initial, complete cohort of trials, while (b) through (e) show the cumulative effect of biases. Each circle indicates a trial, while the color indicates the results or the presence of spin. Circles connected by a grey line indicate trials that were published together in a pooled publication. In (e), the size of the circle indicates the (relative) number of citations received by that category of studies.
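As a purely illustrative sketch (not part of the original analysis), the stage-by-stage arithmetic behind Figure 1 can be retraced directly from the counts reported above:

```python
# Retracing the cumulative accounting of the 105 antidepressant trials
# (Fig. 1a-d). All counts are taken from the text; the script itself is
# illustrative only.

fda_positive = 53        # trials judged positive by the FDA
fda_negative = 52        # trials judged negative or questionable
pub_positive = 52        # all but one positive trial was published
pub_negative = 25        # 25 of 52 negative trials were published

published = pub_positive + pub_negative              # 77 published trials
switched = 10            # negative trials published as apparently positive
spun = 5 + 5 + 1         # spin, mild spin, and one 'trend for efficacy'
clearly_negative = pub_negative - switched - spun    # 4 trials

print(f"{published} trials published, "
      f"{pub_negative / published:.0%} negative on paper")        # 32%
print(f"{(pub_positive + switched) / published:.0%} read as positive "
      f"after outcome switching")                                 # 81%
print(f"{clearly_negative / published:.0%} unambiguously negative "
      f"once spin is accounted for")                              # 5%
```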
The evidence base for psychotherapy
While the pharmaceutical industry has a financial motive for suppressing unfavorable results, these biases are also present in other areas of research, such as psychotherapy. Without a standardized trial registry, however, they are more difficult to detect and disentangle. Statistical tests suggest an excess of positive findings in the psychotherapy literature, due to either study publication bias or outcome reporting bias (Flint et al., 2015). Of 55 National Institutes of Health-funded psychotherapy trials, 13 (24%) remained unpublished (Driessen et al., 2015), and these had a markedly lower effect size than the published trials.
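The excess-significance method applied by Flint et al. (2015), following Ioannidis and Trikalinos, compares the observed number of significant results with the number expected given each study's statistical power. A simplified binomial sketch using the summary figures Flint et al. report (average power of 49%, with 123 of 212 comparisons significant); the actual test works with study-level power estimates rather than a uniform average:

```python
# Simplified excess-significance check in the spirit of Ioannidis and
# Trikalinos, the method applied by Flint et al. (2015). Using a single
# average power value is a simplification of the published procedure.
from scipy import stats

n_comparisons = 212
observed_significant = 123   # comparisons reporting p < 0.05
mean_power = 0.49            # average power to detect the meta-analytic effect

expected_significant = mean_power * n_comparisons   # ~104
test = stats.binomtest(observed_significant, n=n_comparisons,
                       p=mean_power, alternative='greater')
print(f"expected ~{expected_significant:.0f} significant results, "
      f"observed {observed_significant} (p = {test.pvalue:.3f})")
```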
Regarding spin, 49 (35%) of 142 papers were considered negative in a recent meta-analysis (Flint et al., 2015), but we found that only 12 (8%) abstracts concluded that psychotherapy was not more effective than a control condition. The remaining abstracts were either positive (73%) or mixed (19%) (e.g. concluding that the treatment was effective for one outcome but not another). Although we could not establish the pre-specified primary outcome for these trials, and therefore cannot determine whether a specific abstract is biased, published psychotherapy trials, as a whole, clearly provide a more positive impression of the effectiveness of psychotherapy than is justified by the available evidence.
Positive psychotherapy trials were also cited nearly twice as frequently as negative trials (111 citations v. 58, p = 0.003). Negative trials with a positive or mixed abstract were cited more often than those with a negative abstract (59 and 87 citations, respectively, v. 26; p = 0.05); however, the small sample size precludes definitive conclusions about the effect of spin on citation rates.
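The tests behind these p values are specified in the online Supplementary material. Purely as an illustration of how such skewed citation counts can be compared, here is a sketch using simulated counts and a rank-based test; both the data and the choice of test are assumptions, not the analysis actually used:

```python
# Illustration only: comparing skewed citation counts between positive
# and negative trials with a rank-based test. The counts are simulated;
# the editorial's actual analyses are described in its supplement.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)
positive = rng.negative_binomial(n=2, p=0.02, size=40)  # mean ~98 citations
negative = rng.negative_binomial(n=2, p=0.06, size=25)  # mean ~31 citations

stat, pval = stats.mannwhitneyu(positive, negative, alternative="two-sided")
print(f"median citations: positive {np.median(positive):.0f} "
      f"v. negative {np.median(negative):.0f}; Mann-Whitney p = {pval:.4f}")
```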
Preventing bias
Mandatory prospective registration has long been advocated as a solution for study publication and outcome reporting bias. The International Committee of Medical Journal Editors (ICMJE) began requiring prospective registration of clinical trials as a precondition for publication in 2005, but many journals do not require registration (Knüppel et al., 2013) and others allow
retrospective registration (Harriman and Patel, 2016). Since 2007,
the FDA also requires prospective registration of most drug trials.
This increasing pressure may explain why recently completed, negative antidepressant trials are more frequently published than older negative trials: all negative trials that remained unpublished were completed before 2004, while the 25 trials completed in 2004 or later (including 14 for which registration was legally required) were all published, even though nine were negative. A regulatory requirement is likely to be one of the most effective measures to ensure universal registration; unfortunately, the 2007 law excludes trials of behavioral interventions (e.g. psychotherapy) and phase 1 (healthy volunteer) trials.
Nevertheless, registration seems insufficient to ensure complete and accurate reporting of a trial. Only around half of all trials registered at ClinicalTrials.gov were published within two years of completion (Ross et al., 2009), and non-reporting of protocol-specified outcomes or the silent addition of new outcomes is also common (Jones et al., 2015; http://www.compare-trials.org). Close examination of registries by independent researchers may be necessary for registration to be a truly effective deterrent to study publication and outcome reporting bias. An alternative (or addition) to registration could be publication of study protocols or registered reports, in which journals accept a study for publication based on the introduction and methods, before the results are known. Widespread adoption of this format might also help to prevent spin, by reducing the pressure that researchers might feel to 'oversell' their results to get published.
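Independent registry checks of the kind suggested above are straightforward to automate. A minimal sketch, assuming the current ClinicalTrials.gov v2 REST API and its field names (assumptions worth verifying against the API documentation); the trial identifier shown is hypothetical:

```python
# Sketch: flag a retrospectively registered trial by comparing its start
# date with its first-submission date on ClinicalTrials.gov. Assumes the
# v2 REST API and the protocolSection.statusModule field names; check
# these against the current API documentation before relying on them.
import requests

def registration_status(nct_id: str) -> str:
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    status = requests.get(url, timeout=30).json()["protocolSection"]["statusModule"]
    start = status["startDateStruct"]["date"]    # e.g. '2004-01' (may omit the day)
    submitted = status["studyFirstSubmitDate"]   # e.g. '2005-06-23'
    # ISO dates compare correctly as strings when truncated to year-month.
    verdict = "retrospective" if submitted[:7] > start[:7] else "prospective"
    return f"{nct_id}: started {start}, registered {submitted} ({verdict})"

print(registration_status("NCT00000000"))  # hypothetical identifier
```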
Furthermore, in our analysis, positive studies were published in journals with a higher median impact factor (and thus higher visibility) than negative studies (3.5 v. 2.4 for antidepressant trials and 3.1 v. 2.6 for psychotherapy trials), which may be one driver behind the difference in citation rates. Hence, adoption of registered reports might also reduce citation bias by reducing the tendency for positive studies to be published in higher-impact journals. Peer reviewers could also play a crucial role in ensuring that abstracts accurately report trial results and that important negative studies are cited. Finally, the prevalence of spin and citation biases also shows the importance of assessing a study's actual results (rather than relying on the authors' conclusions) and of conducting independent literature searches, since reference lists may yield a disproportionate number of positive (and positively presented) studies.
Conclusions
The problem of study publication bias is well known. Our examination of antidepressant trials, however, shows the pernicious cumulative effect of additional reporting and citation biases, which together eliminated most negative results from the antidepressant literature and left the few published negative results difficult to discover. These biases are unlikely to be unique to antidepressant trials. We have already shown that similar processes, though more difficult to assess, occur within the psychotherapy literature, and it seems likely that the effect of these biases accumulates whenever they are present. Consequently, researchers and clinicians across medical fields must be aware of the potential for bias to distort apparent treatment efficacy, which poses a threat to the practice of evidence-based medicine.
Supplementary material. The supplementary material for this article can
be found at https://doi.org/10.1017/S0033291718001873.
Acknowledgements. Marcus R. Munafò is a member of the United
Kingdom Centre for Tobacco and Alcohol Studies, a UKCRC Public Health
Research: Centre of Excellence. Funding from British Heart Foundation,
Cancer Research UK, Economic and Social Research Council, Medical
Research Council, and the National Institute for Health Research, under the
auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.
Conflict of interest. The authors have no conflicts of interest to report.
References
Boutron I, Altman DG, Hopewell S, Vera-Badillo F, Tannock I and Ravaud P (2014) Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial. Journal of Clinical Oncology 32, 4120–4126.

Boutron I, Dutton S, Ravaud P and Altman DG (2010) Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 303, 2058–2064.

de Vries YA, Roest AM, Franzen M, Munafò MR and Bastiaansen JA (2016) Citation bias and selective focus on positive findings in the literature on the serotonin transporter gene (5-HTTLPR), life stress and depression. Psychological Medicine 46, 2971–2979.

Driessen E, Hollon SD, Bockting CLH, Cuijpers P and Turner EH (2015) Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLoS One 10, e0137864.

Duyx B, Urlings MJE, Swaen GMH, Bouter LM and Zeegers MP (2017) Scientific citations favor positive results: a systematic review and meta-analysis. Journal of Clinical Epidemiology 88, 92–101.

Dwan K, Gamble C, Williamson PR and Kirkham JJ (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS One 8, e66844.

Flint J, Cuijpers P, Horder J, Koole SL and Munafò MR (2015) Is there an excess of significant findings in published studies of psychotherapy for depression? Psychological Medicine 45, 439–446.

Harriman SL and Patel J (2016) When are clinical trials registered? An analysis of prospective versus retrospective registration. Trials 17, 187.

Jones CW, Keil LG, Holland WC, Caughey MC and Platts-Mills TF (2015) Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Medicine 13, 282.

Kicinski M (2014) How does under-reporting of negative and inconclusive results affect the false-positive rate in meta-analysis? A simulation study. BMJ Open 4, e004831.

Knüppel H, Metz C, Meerpohl JJ and Strech D (2013) How psychiatry journals support the unbiased translation of clinical research. A cross-sectional study of editorial policies. PLoS One 8, e75995.

Rickels K, Amsterdam JD and Avallone MF (1986) Fluoxetine in major depression: a controlled study. Current Therapeutic Research 39.

Ross JS, Mulvey GK, Hines EM, Nissen SE and Krumholz HM (2009) Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Medicine 6, e1000144.

Turner EH, Matthews AM, Linardatos E, Tell RA and Rosenthal R (2008) Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine 358, 252–260.