Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis.

Institute of Medical Biometry and Medical Informatics, University Medical Center, 79104 Freiburg, Germany.
Biostatistics (Impact Factor: 2.43). 01/2011; 12(1):122-42. DOI: 10.1093/biostatistics/kxq046
Source: PubMed

ABSTRACT: Statistical heterogeneity and small-study effects are 2 major issues affecting the validity of meta-analysis. In this article, we introduce the concept of a limit meta-analysis, which leads to shrunken, empirical Bayes estimates of study effects after allowing for small-study effects. This in turn leads to 3 model-based adjusted pooled treatment-effect estimators and associated confidence intervals. We show how our estimators can be visualized on the radial plot, and how this visualization indicates that they can be calculated using existing software. The concept of limit meta-analysis also gives rise to a new measure, termed G², of the heterogeneity that remains after small-study effects are accounted for. In a simulation study with binary data and small-study effects, we compared our proposed estimators with those currently used, together with a recent proposal by Moreno and others. Our criteria were bias, mean squared error (MSE), variance, and coverage of 95% confidence intervals. Only the estimators arising from the limit meta-analysis produced approximately unbiased treatment-effect estimates in the presence of small-study effects, while the MSE was acceptably small, provided that the number of studies in the meta-analysis was not less than 10. These limit meta-analysis estimators were also relatively robust against heterogeneity, and one of them had a relatively small coverage error.
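The paper's adjusted estimators are defined through the limit meta-analysis itself, but the underlying radial-plot regression idea can be sketched in a few lines. The sketch below is an illustration of that general idea, not the authors' exact estimators; the function name and data are hypothetical. Each study's standardized effect theta_i/SE_i is plotted against its precision 1/SE_i; the fitted slope is a treatment effect extrapolated to infinite precision (a regression-based small-study adjustment), and the intercept measures small-study effects, as in Egger's regression.

```python
import numpy as np

def radial_regression(theta, se):
    """Fit the radial (Galbraith) plot regression.

    x_i = 1/SE_i (precision), y_i = theta_i/SE_i (standardised effect).
    Ordinary least squares y = a + b*x gives:
      - slope b: the treatment effect extrapolated to a study of
        infinite precision (a regression-based adjustment),
      - intercept a: Egger's measure of funnel-plot asymmetry.
    """
    se = np.asarray(se, dtype=float)
    x = 1.0 / se                                # precision
    y = np.asarray(theta, dtype=float) / se     # standardised effect
    b, a = np.polyfit(x, y, 1)                  # (slope, intercept)
    return a, b
```

If the study effects are inflated in proportion to their standard errors (the classic small-study pattern), the slope recovers the underlying effect while the intercept absorbs the bias term.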

  • Source
    ABSTRACT: To assess the influence of trial sample size on treatment effect estimates within meta-analyses. Meta-epidemiological study. 93 meta-analyses (735 randomised controlled trials) assessing therapeutic interventions with binary outcomes, published in the 10 leading journals of each medical subject category of the Journal Citation Reports or in the Cochrane Database of Systematic Reviews. Sample size, outcome data, and risk of bias extracted from each trial. Trials within each meta-analysis were sorted by their sample size: using quarters within each meta-analysis (from quarter 1 with 25% of the smallest trials, to quarter 4 with 25% of the largest trials), and using size groups across meta-analyses (ranging from <50 to ≥1000 patients). Treatment effects were compared within each meta-analysis between quarters or between size groups by average ratios of odds ratios (where a ratio of odds ratios less than 1 indicates larger effects in smaller trials). Treatment effect estimates were significantly larger in smaller trials, regardless of sample size. Compared with quarter 4 (which included the largest trials), treatment effects were, on average, 32% larger in trials in quarter 1 (which included the smallest trials; ratio of odds ratios 0.68, 95% confidence interval 0.57 to 0.82), 17% larger in trials in quarter 2 (0.83, 0.75 to 0.91), and 12% larger in trials in quarter 3 (0.88, 0.82 to 0.95). Similar results were obtained when comparing treatment effect estimates between different size groups. Compared with trials of 1000 patients or more, treatment effects were, on average, 48% larger in trials with fewer than 50 patients (0.52, 0.41 to 0.66) and 10% larger in trials with 500-999 patients (0.90, 0.82 to 1.00). Treatment effect estimates differed within meta-analyses solely based on trial sample size, with stronger effect estimates seen in small to moderately sized trials than in the largest trials.
    BMJ (online) 01/2013; 346:f2304. · 17.22 Impact Factor
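The comparison above rests on pooling odds ratios within trial-size groups and then taking a ratio between groups. A minimal sketch of that computation, simplified to a single fixed-effect pool per group with a blanket 0.5 continuity correction (function names and data are illustrative, not the study's actual code):

```python
import numpy as np

def pooled_log_or(events_t, n_t, events_c, n_c):
    """Inverse-variance (fixed-effect) pooled log odds ratio, with a
    0.5 continuity correction added to every cell for simplicity."""
    a = np.asarray(events_t, dtype=float) + 0.5
    b = np.asarray(n_t, dtype=float) - np.asarray(events_t) + 0.5
    c = np.asarray(events_c, dtype=float) + 0.5
    d = np.asarray(n_c, dtype=float) - np.asarray(events_c) + 0.5
    log_or = np.log(a * d / (b * c))
    w = 1.0 / (1/a + 1/b + 1/c + 1/d)   # inverse-variance weights
    return np.sum(w * log_or) / np.sum(w)

def ratio_of_odds_ratios(small_trials, large_trials):
    """ROR = pooled OR of smaller trials / pooled OR of larger trials.
    ROR < 1 indicates larger (more beneficial) apparent effects in the
    smaller trials, when benefit is coded as OR < 1."""
    return np.exp(pooled_log_or(*small_trials) - pooled_log_or(*large_trials))
```

In the full meta-epidemiological design, the ratio is computed within each meta-analysis and then averaged across meta-analyses; the sketch collapses that to a single stratum.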
  • Source
    ABSTRACT: [This corrects the article on p. e81823 in vol. 8.].
    PLoS ONE 01/2013; 8(12). · 3.73 Impact Factor
  • Source
    ABSTRACT: The aim of this study was to examine the presence and extent of small-study effects and publication bias in meta-analyses (MAs) based on orthodontic studies. Following an extensive literature search, 25 MAs including 313 studies were identified and could be re-analyzed. To assess publication bias, contour-enhanced funnel plots were examined and their symmetry was tested using the Begg and Mazumdar rank correlation test and Egger's linear regression test. Robustness of the MAs' results to publication bias was examined with Rosenthal's failsafe N, and effect sizes adjusted for publication bias were calculated using Duval and Tweedie's "trim and fill" procedure. Only a few of the originally published MAs assessed the existence and effect of publication bias, and some did so only partially. Inspection of the funnel plots indicated possible asymmetry, which was confirmed by the Begg and Mazumdar test in 12% and by Egger's test in 28% of the MAs. According to Rosenthal's criterion, 62% of the MAs were robust, while effect estimates adjusted for unpublished studies differed from the unadjusted ones to varying degrees. Pooling the Egger intercepts of the included MAs indicated evidence of asymmetry in the orthodontic literature, which was accentuated in medical journals and in diagnostic MAs. Small-study effects and publication bias can often distort the results of MAs. Since indications of publication bias were found in orthodontics, the influence of small trials on estimated treatment effects should be routinely and more carefully assessed by authors conducting MAs.
    Clinical Oral Investigations 02/2014; · 2.20 Impact Factor
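Egger's linear regression test, used throughout the study above, can be sketched with numpy alone: regress the standardized effect on the precision and test whether the intercept differs from zero. This is an illustrative implementation, not the authors' code; in practice the returned t statistic would be referred to a t distribution with k − 2 degrees of freedom to obtain a p value.

```python
import numpy as np

def egger_test(theta, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardised effect theta_i/SE_i on the precision
    1/SE_i; a non-zero intercept suggests small-study effects.
    Returns (intercept, SE of intercept, t statistic with df = k - 2).
    """
    se = np.asarray(se, dtype=float)
    x = 1.0 / se
    y = np.asarray(theta, dtype=float) / se
    X = np.column_stack([np.ones_like(x), x])      # [intercept, slope] design
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    k = len(x)
    resid = y - X @ coef
    s2 = (resid @ resid) / (k - 2)                 # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    intercept, se_int = coef[0], np.sqrt(cov[0, 0])
    t = intercept / se_int if se_int > 0 else np.inf
    return intercept, se_int, t
```

On data with a built-in small-study bias proportional to the standard error, the intercept directly recovers the bias coefficient.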