Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis.

Institute of Medical Biometry and Medical Informatics, University Medical Center, 79104 Freiburg, Germany.
Biostatistics (Impact Factor: 2.24). 01/2011; 12(1):122-42. DOI: 10.1093/biostatistics/kxq046
Source: PubMed

ABSTRACT: Statistical heterogeneity and small-study effects are 2 major issues affecting the validity of meta-analysis. In this article, we introduce the concept of a limit meta-analysis, which leads to shrunken, empirical Bayes estimates of study effects after allowing for small-study effects. This in turn leads to 3 model-based adjusted pooled treatment-effect estimators and associated confidence intervals. We show how visualizing our estimators on the radial plot indicates how they can be calculated using existing software. The concept of limit meta-analysis also gives rise to a new measure, termed G², of the heterogeneity that remains after small-study effects are accounted for. In a simulation study with binary data and small-study effects, we compared our proposed estimators with those currently used, together with a recent proposal by Moreno and others. Our criteria were bias, mean squared error (MSE), variance, and coverage of 95% confidence intervals. Only the estimators arising from the limit meta-analysis produced approximately unbiased treatment-effect estimates in the presence of small-study effects, while the MSE was acceptably small, provided that the number of studies in the meta-analysis was not less than 10. These limit meta-analysis estimators were also relatively robust against heterogeneity, and one of them had a relatively small coverage error.
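The abstract notes that the adjusted estimators can be visualized on the radial plot and calculated with existing software. As a rough illustration of the general idea only (not the authors' exact shrinkage estimators), the following sketch regresses standardized effects on precision, as in the radial (Galbraith) plot: the fitted slope is the effect extrapolated to infinite precision, i.e. adjusted for small-study effects. The function name and toy data are invented for illustration:

```python
import numpy as np

def radial_adjusted_effect(theta, se):
    """Fit z_i = theta_i/se_i against precision x_i = 1/se_i, as on the
    radial (Galbraith) plot. The slope estimates the treatment effect as
    precision -> infinity (adjusted for small-study effects); the
    intercept measures funnel-plot asymmetry (Egger's test statistic)."""
    theta = np.asarray(theta, float)
    se = np.asarray(se, float)
    x = 1.0 / se            # precision
    z = theta / se          # standardized effect (unit variance)
    X = np.column_stack([np.ones_like(x), x])  # intercept + precision
    (asymmetry, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return slope, asymmetry

# toy log odds ratios: the small (high-SE) studies show inflated effects
adj, asym = radial_adjusted_effect([0.9, 0.7, 0.5, 0.35, 0.3],
                                   [0.5, 0.4, 0.3, 0.2, 0.1])
```

Here the adjusted estimate (roughly 0.16) is well below the naive inverse-variance pooled estimate (roughly 0.36), reflecting the built-in small-study asymmetry of the toy data.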

  • Source
    ABSTRACT: In recent years, cognitive scientists and commercial interests (e.g., Fit Brains, Lumosity) have focused research attention and financial resources on cognitive tasks, especially working memory tasks, to explore and exploit possible transfer effects to general cognitive abilities, such as fluid intelligence. The increased research attention has produced mixed findings, as well as contention about the disposition of the evidence base. To address this contention, Au et al. (2014) recently conducted a meta-analysis of extant controlled experimental studies of n-back task training transfer effects on measures of fluid intelligence in healthy adults, the results of which showed a small training transfer effect. Using several approaches, the current review evaluated and re-analyzed the meta-analytic data for the presence of two different forms of small-study effects: (1) publication bias in the presence of low power; and (2) low power in the absence of publication bias. The results of these approaches showed no evidence of selection bias in the working memory training literature, but did show evidence of small-study effects related to low power in the absence of publication bias. While the effect size estimate identified by Au et al. (2014) provided the most precise estimate to date, it should be interpreted in the context of a uniformly low-powered base of evidence. The present work concludes with a brief set of considerations for assessing the adequacy of a body of research findings for the application of meta-analytic techniques.
    Frontiers in Psychology 01/2014; 5:1589. DOI:10.3389/fpsyg.2014.01589 · 2.80 Impact Factor
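The distinction drawn in the abstract above turns on study-level power. As a minimal, hypothetical sketch of how per-study power can be assessed (normal approximation rather than an exact t-test, so power is slightly overstated; the function name and the toy inputs are invented):

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference d with n_per_group subjects per arm."""
    nd = NormalDist()
    se = sqrt(2.0 / n_per_group)       # SE of the mean difference, in d units
    zcrit = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    ncp = d / se                       # noncentrality parameter
    return (1 - nd.cdf(zcrit - ncp)) + nd.cdf(-zcrit - ncp)

p_medium = power_two_sample(0.5, 64)  # ~0.80: adequately powered
p_small = power_two_sample(0.2, 20)   # ~0.10: badly underpowered
```

A literature made up of studies like the second example is exactly the "uniformly low-powered base of evidence" the review describes: small-study effects can then arise even without any selective publication.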
  • Source
    ABSTRACT: The aim of this study was to examine the presence and extent of publication bias and small-study effects in meta-analyses (MAs) investigating pediatric dentistry-related subjects. Following a literature search, 46 MAs including 882 studies were analyzed qualitatively. Of these, 39 provided enough data to be re-analyzed. Publication bias was assessed with the following methods: contour-enhanced funnel plots, Begg and Mazumdar's rank correlation and Egger's linear regression tests, Rosenthal's failsafe N, and Duval and Tweedie's "trim and fill" procedure. Only a few MAs adequately assessed the existence and effect of publication bias. Inspection of the funnel plots indicated asymmetry, which was confirmed by the Begg-Mazumdar test in 18% and by Egger's test in 33% of the MAs. According to Rosenthal's criterion, 80% of the MAs were robust, while effects adjusted for unpublished studies differed from the unadjusted ones to degrees ranging from slight to great. Pooling the Egger intercepts indicated asymmetry in the pediatric dental literature, accentuated in dental journals and in diagnostic MAs. Since indications of small-study effects and publication bias in pediatric dentistry were found, the influence of small or missing trials on estimated treatment effects should be routinely assessed in future MAs. Copyright © 2015 Elsevier Inc. All rights reserved.
    Journal of Evidence Based Dental Practice 03/2015; 15(1):8-24. DOI:10.1016/j.jebdp.2014.09.001
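Of the publication-bias diagnostics listed in the abstract above, Begg and Mazumdar's rank correlation test is simple enough to sketch from its published description: it computes Kendall's tau between the studies' standardized deviates and their sampling variances. This is an illustrative implementation only (function name and toy data invented; a real analysis would use an established meta-analysis package):

```python
import math
from itertools import combinations

def begg_mazumdar(theta, var):
    """Begg-Mazumdar rank correlation: Kendall's tau between standardized
    deviates of study effects and their sampling variances. A tau far
    from zero suggests funnel-plot asymmetry."""
    n = len(theta)
    w = [1.0 / v for v in var]
    pooled = sum(wi * ti for wi, ti in zip(w, theta)) / sum(w)
    # variance of (theta_i - pooled) under the fixed-effect model
    vstar = [v - 1.0 / sum(w) for v in var]
    t = [(ti - pooled) / math.sqrt(vi) for ti, vi in zip(theta, vstar)]
    concordant = discordant = 0
    for (ti, vi), (tj, vj) in combinations(zip(t, var), 2):
        s = (ti - tj) * (vi - vj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    tau = (concordant - discordant) / (n * (n - 1) / 2)
    # normal approximation to the null distribution of tau
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    return tau, z

# toy data: log odds ratios and variances; smaller studies more extreme
tau, z = begg_mazumdar([0.9, 0.7, 0.5, 0.35, 0.3],
                       [0.25, 0.16, 0.09, 0.04, 0.01])
```

On this deliberately asymmetric toy data every pair is concordant (tau = 1), so the test flags funnel-plot asymmetry; in practice the test is known to have low power with few studies, which is one reason the review combines several diagnostics.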
  • Source
    DESCRIPTION: Accepted at IJE, to appear online shortly.