Ain't Necessarily So: Review and Critique of Recent Meta-Analyses of Behavioral Medicine Interventions in Health Psychology

Department of Psychiatry, University of Pennsylvania School of Medicine, 3535 Market St., Room 676, Philadelphia, PA 19104, USA.
Health Psychology (Impact Factor: 3.59). 03/2010; 29(2):107-16. DOI: 10.1037/a0017633
Source: PubMed


Objective: We examined four meta-analyses of behavioral interventions for adults (Dixon, Keefe, Scipio, Perri, & Abernethy, 2007; Hoffman, Papas, Chatkoff, & Kerns, 2007; Irwin, Cole, & Nicassio, 2006; Jacobsen, Donovan, Vadaparampil, & Small, 2007) that have appeared in the Evidence-Based Treatment Reviews section of Health Psychology.
Design: Narrative review.
Main Outcome Measures: We applied the following criteria to each meta-analysis: (1) whether the meta-analysis was described accurately, adequately, and transparently in the article; (2) whether there was an adequate attempt to deal with the methodological quality of the original trials; (3) the extent to which the meta-analysis depended on small, underpowered studies; and (4) the extent to which the meta-analysis provided valid and useful evidence-based recommendations.
Results: Across the four meta-analyses, we identified substantial problems with the transparency and completeness with which these meta-analyses were reported, as well as a dependence on small, underpowered trials of generally poor quality.
Conclusion: The results of our exercise raise questions about the clinical validity and utility of the conclusions of these meta-analyses. They should serve as a wake-up call to prospective authors, reviewers, and end users of meta-analyses now appearing in the literature.


    • "The inclusion of underpowered studies with limited methodological quality may have biased the results of previous meta-analyses [45] [46]. In the present systematic review, no psychotherapy RCT met inclusion criteria. "
    ABSTRACT: While women with breast cancer often face varying levels of psychological distress, there is a subgroup whose symptomatology reaches the threshold for a diagnosis of major depressive disorder (MDD). Major depressive disorder is known to influence patient outcomes, such as health-related quality of life and treatment adherence. There are no systematic reviews that evaluate pharmacological and psychotherapeutic treatment trials for MDD among individuals with breast cancer. Two authors independently searched MEDLINE, EMBASE, Cochrane and Clinical databases through February 20, 2013, without language restrictions. Core journals, reference lists and citation tracking were also searched. Articles on breast cancer patients were included if they (1) included participants with a diagnosis of MDD and (2) investigated pharmacological or psychotherapeutic treatments for MDD compared to placebo or usual care in a randomized controlled trial (RCT). Two RCTs on antidepressant treatment met the inclusion criteria. However, no RCTs investigating the effects of psychological treatments for MDD in breast cancer were identified. Notwithstanding the paucity of data investigating the effects of psychological treatments for MDD in breast cancer, numerous psychotherapeutic strategies targeting depressive symptoms were identified. Mianserin had significant antidepressant effects when compared to placebo in a 6-week, parallel-group RCT of women with Stage I-II breast cancer and MDD. Desipramine and paroxetine were reported to be no more efficacious than placebo in a 6-week RCT of women with Stage I-IV breast cancer and MDD. The evidence reviewed herein underscores the paucity of data available to guide clinicians in treatment decisions for MDD in individuals with breast cancer. Therefore, the treatment of MDD in breast cancer is primarily based on clinical experience. Some antidepressants (for example, paroxetine) should be avoided in women concurrently taking tamoxifen because of clinically relevant interactions involving the cytochrome P450 enzyme CYP2D6.
    Cancer Treatment Reviews 09/2013; 40(3). DOI:10.1016/j.ctrv.2013.09.009 · 7.59 Impact Factor
    • "Wilson and colleagues noted a number of times in their review that many of the trials are small, but they do not dwell on how many, how small or with what implications. We have adopted the lower limit of 35 participants in the smallest group for inclusion of trials in meta-analyses [5]. The rationale is that any trial that is smaller than this does not have a 50% probability of detecting a moderate sized effect, even if it is present. "
    ABSTRACT: Wilson et al. provided a valuable systematic and meta-analytic review of the Triple P-Positive Parenting program in which they identified substantial problems in the quality of the available evidence. Their review largely escaped unscathed after Sanders et al.'s critical commentary. However, both of these sources overlook the most serious problem with the Triple P literature, namely, the over-reliance on positive but substantially underpowered trials. Such trials are particularly susceptible to risks of bias and investigator manipulation of apparent results. We offer a justification for the criterion of no fewer than 35 participants in either the intervention or control group. Applying this criterion, 19 of the 23 trials identified by Wilson et al. were eliminated. A number of these trials were so small that it would be statistically improbable for them to detect an effect even if one were present. We argue that clinicians and policymakers implementing Triple P programs should incorporate evaluations to ensure that goals are being met and resources are not being squandered.
    BMC Medicine 01/2013; 11(1):11. DOI:10.1186/1741-7015-11-11 · 7.25 Impact Factor
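The 35-per-group criterion quoted above rests on a simple power argument: below that size, a two-arm trial has less than a 50% chance of detecting a moderate effect even when one exists. A minimal sketch of that calculation, using a normal approximation to the two-sample test and assuming Cohen's d = 0.5 as the "moderate" effect (both conventions are assumptions here, not taken from the review itself):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(n_per_group, d=0.5, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means.

    Normal approximation: power = Phi(d * sqrt(n/2) - z_crit), where n is
    the per-group sample size and z_crit is the two-sided critical value.
    """
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    return normal_cdf(d * math.sqrt(n_per_group / 2.0) - z_crit)

# With 35 per group, power to detect d = 0.5 is just above one half;
# with 15 per group it falls well below it.
print(round(approx_power(35), 2))
print(round(approx_power(15), 2))
```

An exact t-test calculation gives slightly lower power than this normal approximation, which is consistent with treating 35 per group as the rough point below which a trial is more likely than not to miss a moderate effect.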
    • "Excluding studies with small sample sizes is an unusual decision, though there are reasons to be concerned about the influence of small study effects in reviews [36], [37]. As well as publication bias, smaller studies are more vulnerable to other forms of bias. "
    ABSTRACT: The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Assessments may introduce bias in randomised controlled trials by altering receptivity to intervention in experimental groups and differentially impacting on the behaviour of control groups. In a Solomon 4-group design, participants are randomly allocated to one of four arms: (1) assessed experimental group; (2) unassessed experimental group; (3) assessed control group; or (4) unassessed control group. This design provides a test of the internal validity of effect sizes obtained in conventional two-group trials by controlling for the effects of baseline assessment and assessing interactions between the intervention and baseline assessment. The aim of this systematic review is to evaluate evidence from Solomon 4-group studies with behavioural outcomes that baseline research assessments themselves can introduce bias into trials. Electronic databases were searched, supplemented by citation searching. Studies were eligible if they reported appropriately analysed results in peer-reviewed journals and used Solomon 4-group designs in non-laboratory settings with behavioural outcome measures and sample sizes of 20 per group or greater. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review. There were too few high-quality completed studies to infer conclusively that biases stemming from baseline research assessments do or do not exist. There is, therefore, a need for new rigorous Solomon 4-group studies that are purposively designed to evaluate the potential for research assessments to cause bias in behaviour change trials.
    PLoS ONE 10/2011; 6(10):e25223. DOI:10.1371/journal.pone.0025223 · 3.23 Impact Factor