
A comparison of statistical methods for meta-analysis

Department of Mathematics and Statistics, Richard Berry Building, The University of Melbourne, Victoria 3010, Australia.
Statistics in Medicine, March 2001; 20(6): 825-840. DOI: 10.1002/sim.650
Source: PubMed

ABSTRACT

Meta-analysis may be used to estimate an overall effect across a number of similar studies. A number of statistical techniques are currently used to combine individual study results. The simplest of these is based on a fixed effects model, which assumes the true effect is the same for all studies. A random effects model, however, allows the true effect to vary across studies, with the mean true effect the parameter of interest. We consider three methods currently used for estimation within the framework of a random effects model, and illustrate them by applying each method to a collection of six studies on the effect of aspirin after myocardial infarction. These methods are compared using estimated coverage probabilities of confidence intervals for the overall effect. The techniques considered all generally have coverages below the nominal level, and in particular it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.
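For readers who want to see the method the abstract criticises, the following is a minimal sketch of the DerSimonian and Laird random effects estimator in Python (NumPy/SciPy). The effect estimates and variances below are hypothetical placeholders, not the six aspirin studies analysed in the paper. Note that the final z-based confidence interval treats the estimated between-study variance τ² as if it were known; that is precisely the source of the undercoverage the abstract reports.

    import numpy as np
    from scipy import stats

    def dersimonian_laird(y, v):
        """DerSimonian-Laird pooled estimate with a nominal 95% CI.
        y: study effect estimates (e.g. log odds ratios)
        v: within-study variances (treated as known)"""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v                              # fixed effects weights
        y_fe = np.sum(w * y) / np.sum(w)         # fixed effects pooled estimate
        q = np.sum(w * (y - y_fe) ** 2)          # Cochran's Q
        # DL moment estimator of the between-study variance, truncated at zero
        tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
        w_re = 1.0 / (v + tau2)                  # random effects weights
        mu = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))         # ignores the error in tau2
        z = stats.norm.ppf(0.975)
        return mu, (mu - z * se, mu + z * se), tau2

    # Hypothetical data for six studies (log odds ratio scale).
    print(dersimonian_laird(y=[-0.33, -0.11, -0.36, -0.22, -0.26, 0.11],
                            v=[0.03, 0.02, 0.05, 0.01, 0.04, 0.01]))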

  • Source
    • "Although fixed effect models are widely used in two-stage meta-analyses, even when heterogeneity is not zero (Kontopantelis, Springate, and Reeves 2013), accounting for even low levels of between-cluster variability is a more conservative approach (Hunter and Schmidt 2000). When a fixed effects model is incorrectly assumed, both coverage and power deteriorate as true heterogeneity increases (Brockwell and Gordon 2001; Kontopantelis and Reeves 2012). Analogously, for patient data analyses, we would expect poor fit from model(1), the fixed effect approach, in the presence of heterogeneity. "
ABSTRACT: Simulations are a practical and reliable approach to power calculations, especially for multi-level mixed effects models, where the analytic solutions can be very complex. In addition, power calculations are model-specific, and multi-level mixed effects models are defined by a plethora of parameters. In other words, model variations in this context are numerous, and so are the tailored algebraic calculations. This article describes ipdpower in Stata, a new simulation-based command that calculates power for mixed effects two-level data structures. Although the command was developed with individual patient data meta-analyses and primary care database analyses in mind, where patients are nested within studies and general practices respectively, the methods apply to any two-level structure. A sketch of the simulation approach follows this item.
    Full-text · Article · Feb 2016
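    The abstract above motivates simulation as a general route to power for two-level data. ipdpower itself is a Stata command; the sketch below is a language-neutral Python analogue of the idea, under assumed, illustrative parameter values (treatment effect, between-study SD, residual SD): simulate patients nested in studies, fit a random-intercept model with statsmodels, and report the proportion of significant treatment effects.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)

        def simulated_power(n_studies=10, n_per_arm=30, effect=0.3,
                            tau=0.2, sigma=1.0, n_sims=200, alpha=0.05):
            """Empirical power for the treatment effect in a random-intercept
            model of patients nested in studies. All values are illustrative."""
            hits = 0
            for _ in range(n_sims):
                rows = []
                for s in range(n_studies):
                    u = rng.normal(0.0, tau)      # study-level random intercept
                    for treat in (0, 1):
                        ys = u + effect * treat + rng.normal(0.0, sigma, n_per_arm)
                        rows += [{"study": s, "treat": treat, "y": yi} for yi in ys]
                data = pd.DataFrame(rows)
                fit = smf.mixedlm("y ~ treat", data, groups="study").fit()
                hits += fit.pvalues["treat"] < alpha
            return hits / n_sims

        print(simulated_power())

    Monte Carlo error shrinks with n_sims; a few hundred replicates usually locate power to within a few percentage points.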
  • Source
    • "marginal profile method (Hardy and Thompson, 1996)). These methods are often more complex to implement and may still be problematic when there are a small number of trials (Brockwell and Gordon, 2001;Borenstein et al., 2010). There has been little research examining how inverse variance meta-analytical methods perform for continuous outcomes, and how the choice of trial analytical method may affect this performance.Banerjee et al. (2008)examined through statistical simulation the impact of meta-analysing SAFV and SACS, in different combinations, on bias, precision and statistical significance. "
ABSTRACT: When meta-analysing intervention effects calculated from continuous outcomes, meta-analysts often encounter few trials, with potentially a small number of participants, and a variety of trial analytical methods. It is important to know how these factors affect the performance of inverse-variance fixed and DerSimonian and Laird random effects meta-analytical methods. We examined this performance using a simulation study. Meta-analysing estimates of intervention effect from final values, change scores, ANCOVA or a random mix of the three yielded unbiased estimates of pooled intervention effect. The impact of trial analytical method on the meta-analytic performance measures was important when there was no or little heterogeneity, but was of little relevance as heterogeneity increased. On the basis of larger than nominal type I error rates and poor coverage, the inverse-variance fixed effect method should not be used when there are few small trials. When there are few small trials, random effects meta-analysis is preferable to fixed effect meta-analysis. Meta-analytic estimates need to be cautiously interpreted; type I error rates will be larger than nominal, and confidence intervals will be too narrow. Use of trial analytical methods that are more efficient in these circumstances may have the unintended consequence of further exacerbating these issues. A sketch of such a coverage simulation follows this item. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
    Full-text · Article · Dec 2015 · Research Synthesis Methods
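    The undercoverage described above, and in the original abstract, is straightforward to reproduce by simulation. The sketch below, under assumed parameter values, draws study effects from a random effects model, forms nominal 95% intervals from a fixed effect analysis and from a DerSimonian and Laird analysis, and reports how often each interval contains the true mean effect.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def coverage(k=6, tau2=0.05, mu=0.0, n_sims=5000):
            """Empirical coverage of nominal 95% CIs for the overall effect,
            fixed effect versus DerSimonian-Laird. All values illustrative."""
            z = stats.norm.ppf(0.975)
            hit_fe = hit_dl = 0
            for _ in range(n_sims):
                v = rng.uniform(0.01, 0.1, k)          # within-study variances
                y = rng.normal(mu, np.sqrt(v + tau2))  # observed study effects
                w = 1.0 / v
                fe = np.sum(w * y) / np.sum(w)
                hit_fe += abs(fe - mu) < z * np.sqrt(1.0 / np.sum(w))
                q = np.sum(w * (y - fe) ** 2)
                t2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
                w_dl = 1.0 / (v + t2)
                dl = np.sum(w_dl * y) / np.sum(w_dl)
                hit_dl += abs(dl - mu) < z * np.sqrt(1.0 / np.sum(w_dl))
            return hit_fe / n_sims, hit_dl / n_sims

        print(coverage())  # both typically fall short of 0.95 when tau2 > 0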
  • Source
    • "The confidence interval for the conventional estimate of τ 2 was obtained iteratively via the Q-profile method (Viechtbauer, 2007). A number of simulation studies have demonstrated that the DL procedure is likely to underestimate the between-study variance, particularly when the number of studies is small and there is substantial heterogeneity between studies (Gordon, 2001, Brockwell andGordon, 2007;Sidik and Jonkman, 2005;Sidik and Jonkman, 2007;Hartung and Makambi, 2003). When between-study variance is underestimated, the p-value for the combined intervention effect may become artificially small and the confidence bounds produced for combined intervention effects may be too narrow. "
ABSTRACT: This paper investigates how inconsistency (as measured by the I² statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. A sketch of the I² computation follows this item. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
    Full-text · Article · Dec 2015 · Research Synthesis Methods
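    The I² statistic analysed above is a simple transform of Cochran's Q: I² = max(0, (Q - df)/Q) x 100%, the percentage of total variation across studies attributable to heterogeneity rather than chance. A minimal sketch with hypothetical inputs:

        import numpy as np

        def i_squared(y, v):
            """I-squared from study estimates y and within-study variances v."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / v
            pooled = np.sum(w * y) / np.sum(w)   # fixed effects pooled estimate
            q = np.sum(w * (y - pooled) ** 2)    # Cochran's Q
            df = len(y) - 1
            return 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

        # Hypothetical log odds ratios and within-study variances
        print(i_squared([-0.3, 0.1, -0.5, -0.2], [0.04, 0.02, 0.06, 0.03]))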