Article

Issues in comparisons between meta-analyses and large trials

Clinical Research Division, Tufts University, Boston, Massachusetts, United States
JAMA: The Journal of the American Medical Association (Impact Factor: 30.39). 04/1998; 279(14):1089-93. DOI: 10.1001/jama.279.14.1089
Source: PubMed

ABSTRACT Context: The extent of concordance between meta-analyses and large trials on the same topic has been investigated with different protocols, and their inconsistent conclusions have created confusion regarding the validity of these major tools of clinical evidence.
Objective: To evaluate protocols comparing meta-analyses and large trials in order to understand whether and why they disagree on the concordance of these 2 clinical research methods.
Design: Systematic comparison of protocol designs, study selection, definitions of agreement, analysis methods, and reported discrepancies between large trials and meta-analyses.
Results: More discrepancies were claimed when large trials were selected from influential journals (which may prefer trials disagreeing with prior evidence) rather than from already performed meta-analyses (which may target homogeneous trials), and when both primary and secondary (rather than only primary) end points were considered. Depending on how agreement was defined, kappa coefficients varied from 0.22 (low agreement) to 0.72 (excellent agreement). The correlation of treatment effects between large trials and meta-analyses varied from -0.12 to 0.76, but was more consistent (0.50-0.76) when only primary end points were considered. When both the magnitude and the uncertainty of treatment effects were considered, large trials disagreed with meta-analyses 10% to 23% of the time. Discrepancies were attributed to different disease risks, variable protocols, study quality, and publication bias.
Conclusions: Comparisons of large trials with meta-analyses may reach different conclusions depending on how trials and meta-analyses are selected and how end points and agreement are defined. Scrutiny of these 2 major research methods can enhance our appreciation of both for guiding medical practice.
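The kappa coefficients quoted above (0.22 to 0.72 depending on how agreement was defined) are chance-corrected agreement statistics computed from paired conclusions. A minimal sketch of Cohen's kappa, using made-up trial and meta-analysis verdicts rather than the studies actually compared:

```python
# Hypothetical sketch: Cohen's kappa for agreement between large trials
# and meta-analyses on the same topics, each classified as "treatment
# effective" (1) or "not effective" (0). The verdicts are illustrative.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if the two "raters" classified independently
    p_a1 = sum(a) / n
    p_b1 = sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

trial_calls = [1, 1, 0, 1, 0, 0, 1, 1]   # large-trial conclusions
meta_calls  = [1, 1, 0, 0, 0, 1, 1, 1]   # meta-analysis conclusions
print(round(cohens_kappa(trial_calls, meta_calls), 2))   # → 0.47
```

A kappa of 0 means agreement is no better than chance; 1 means perfect agreement, which is why the same data can yield values from "low" to "excellent" as the definition of an agreeing pair changes.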

Full-text available from: Joseph C Cappelleri, Jan 20, 2015
  •
    ABSTRACT: Objective: To investigate the impact of a higher publishing probability for statistically significant positive outcomes on the false-positive rate in meta-analysis. Design: Meta-analyses of different sizes (N=10, N=20, N=50 and N=100), levels of heterogeneity and levels of publication bias were simulated. Primary and secondary outcome measures: The type I error rate for the test of the mean effect size (ie, the rate at which the meta-analyses showed that the mean effect differed from 0 when it in fact equalled 0) was estimated. Additionally, the power and type I error rate of publication bias detection methods based on the funnel plot were estimated. Results: In the presence of a publication bias characterised by a higher probability of including statistically significant positive results, the meta-analyses frequently concluded that the mean effect size differed from zero when it actually equalled zero. The magnitude of the effect of publication bias increased with an increasing number of studies and between-study variability. A higher probability of including statistically significant positive outcomes introduced little asymmetry to the funnel plot. A publication bias of a sufficient magnitude to frequently overturn the meta-analytic conclusions was difficult to detect by publication bias tests based on the funnel plot. When statistically significant positive results were four times more likely to be included than other outcomes and a large between-study variability was present, more than 90% of the meta-analyses of 50 and 100 studies wrongly showed that the mean effect size differed from zero. In the same scenario, publication bias tests based on the funnel plot detected the bias at rates not exceeding 15%. Conclusions: This study adds to the evidence that publication bias is a major threat to the validity of medical research and supports the usefulness of efforts to limit publication bias.
    BMJ Open 08/2014; DOI:10.1136/bmjopen-2014-004831 · 2.06 Impact Factor
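The selection mechanism this simulation study examines can be sketched in a few lines. The following is an illustrative reimplementation of the general idea, not the authors' code; the standard error, selection probability, and fixed-effect pooling are all assumptions made for the sketch:

```python
# Illustrative sketch: selectively publishing statistically significant
# positive studies inflates the type I error of a meta-analysis even
# when the true effect is exactly zero.
import math
import random

def simulate_type1(n_studies=20, se=0.2, select_prob=0.1,
                   n_sims=1000, seed=1):
    """Fraction of null meta-analyses that wrongly reject H0: effect = 0."""
    random.seed(seed)
    false_pos = 0
    for _ in range(n_sims):
        published = []
        while len(published) < n_studies:
            est = random.gauss(0.0, se)          # true effect is zero
            # Significant positive results are always published; the rest
            # are published with probability `select_prob` (assumption).
            if est / se > 1.96 or random.random() < select_prob:
                published.append(est)
        pooled = sum(published) / n_studies      # equal SEs -> plain mean
        pooled_se = se / math.sqrt(n_studies)
        if abs(pooled / pooled_se) > 1.96:
            false_pos += 1
    return false_pos / n_sims

print(simulate_type1())   # far above the nominal 0.05
```

With `select_prob=1.0` (no selection) the rejection rate falls back to roughly the nominal 5%, which is the comparison the simulated scenarios in the abstract are built around.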
  •
    ABSTRACT: Background: Laparoscopic appendicectomy has gained wide acceptance as an alternative to open appendicectomy during pregnancy. However, data regarding the safety and optimal surgical approach to appendicitis in pregnancy are still controversial. Methods: This was a systematic review and meta-analysis of studies comparing laparoscopic and open appendicectomy in pregnancy identified using PubMed and Scopus search engines from January 1990 to July 2011. Two reviewers independently extracted data on fetal loss, preterm delivery, wound infection, duration of operation, hospital stay, Apgar score and birth weight between laparoscopic and open appendicectomy groups. Results: Eleven studies with a total of 3415 women (599 in laparoscopic and 2816 in open group) were included in the analysis. Fetal loss was statistically significantly worse in those who underwent laparoscopy compared with open appendicectomy; the pooled relative risk (RR) was 1·91 (95 per cent confidence interval (c.i.) 1·31 to 2·77) without heterogeneity. The pooled RR for preterm labour was 1·44 (0·68 to 3·06), but this risk was not statistically significant. The mean difference in length of hospital stay was −0·49 (−1·76 to −0·78) days, but this was not clinically significant. No significant difference was found for wound infection, birth weight, duration of operation or Apgar score. Conclusion: The available low-grade evidence suggests that laparoscopic appendicectomy in pregnant women might be associated with a greater risk of fetal loss. Copyright © 2012 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.
    British Journal of Surgery 11/2012; 99(11). DOI:10.1002/bjs.8889 · 5.21 Impact Factor
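Pooled relative risks like the RR of 1·91 above are typically obtained by inverse-variance weighting of the per-study log relative risks. A minimal fixed-effect sketch with invented study data, not the eleven studies in this review:

```python
# Illustrative fixed-effect inverse-variance pooling of log relative
# risks. The per-study estimates and standard errors below are made up.
import math

def pooled_rr(studies):
    """studies: list of (log_rr, se) pairs -> (pooled RR, CI low, CI high)."""
    weights = [1.0 / se ** 2 for _, se in studies]       # inverse variance
    total_w = sum(weights)
    pooled_log = sum(w * lr for (lr, _), w in zip(studies, weights)) / total_w
    se_pooled = 1.0 / math.sqrt(total_w)
    lo = math.exp(pooled_log - 1.96 * se_pooled)
    hi = math.exp(pooled_log + 1.96 * se_pooled)
    return math.exp(pooled_log), lo, hi

studies = [(math.log(2.0), 0.40), (math.log(1.5), 0.35), (math.log(1.8), 0.50)]
rr, lo, hi = pooled_rr(studies)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# prints: RR 1.72 (95% CI 1.09 to 2.72)
```

Pooling on the log scale keeps the ratio symmetric around 1; the fixed-effect weights assume no between-study heterogeneity, which matches the "without heterogeneity" note in the abstract but is itself an assumption of this sketch.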
  •
    ABSTRACT: The efficacy of academic-mind-set interventions has been demonstrated by small-scale, proof-of-concept interventions, generally delivered in person in one school at a time. Whether this approach could be a practical way to raise school achievement on a large scale remains unknown. We therefore delivered brief growth-mind-set and sense-of-purpose interventions through online modules to 1,594 students in 13 geographically diverse high schools. Both interventions were intended to help students persist when they experienced academic difficulty; thus, both were predicted to be most beneficial for poorly performing students. This was the case. Among students at risk of dropping out of high school (one third of the sample), each intervention raised students' semester grade point averages in core academic courses and increased the rate at which students performed satisfactorily in core courses by 6.4 percentage points. We discuss implications for the pipeline from theory to practice and for education reform. © The Author(s) 2015.
    Psychological Science 04/2015; DOI:10.1177/0956797615571017 · 4.43 Impact Factor