Measuring Inconsistency in Meta-Analyses

MRC Biostatistics Unit, Institute of Public Health, Cambridge CB2 2SR.
BMJ (online) (Impact Factor: 17.45). 10/2003; 327(7414):557-60. DOI: 10.1136/bmj.327.7414.557
Source: PubMed
  • Source
    • "Since advanced usage of the command can be challenging, users should utilize these outputs to ensure they are modelling the desired structure, and re-specify if necessary. Regarding setting heterogeneity levels and their interpretation, there are numerous practical guides that can prove helpful (Fletcher 2007; Higgins, Thompson, Deeks, and Altman 2003). Although we present information on data missingness mechanisms in the help file, interested users can find more details in an excellent overview provided by Horton and Kleinman (2007). "
    ABSTRACT: Simulations are a practical and reliable approach to power calculations, especially for multi-level mixed effects models where the analytic solutions can be very complex. In addition, power calculations are model-specific and multi-level mixed effects models are defined by a plethora of parameters. In other words, model variations in this context are numerous and so are the tailored algebraic calculations. This article describes ipdpower in Stata, a new simulations-based command that calculates power for mixed effects two-level data structures. Although the command was developed having individual patient data meta-analyses and primary care databases analyses in mind, where patients are nested within studies and general practices respectively, the methods apply to any two-level structure.
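The simulation-based approach to power that this abstract describes can be illustrated outside Stata. Below is a minimal Python sketch for a two-level (individuals nested within clusters) design: half the clusters are treated, each simulated dataset is analysed at the cluster level via a t statistic on cluster means, and power is the fraction of simulations reaching significance. The function and parameter names are illustrative and not part of ipdpower; a normal approximation to the t-test p-value is used to keep the sketch dependency-light.

```python
import math
import numpy as np

rng = np.random.default_rng(2003)

def simulated_power(n_clusters=30, n_per_cluster=20, effect=0.5,
                    sd_between=0.5, sd_within=1.0, alpha=0.05, n_sims=400):
    """Estimate power by simulation for a two-level design.

    Half the clusters receive the treatment. Each simulated dataset is
    analysed at the cluster level (a two-sample t statistic on cluster
    means, with a normal approximation to the two-sided p-value), which
    respects the nesting structure.
    """
    arm = np.repeat([0, 1], n_clusters // 2)   # 0 = control, 1 = treated
    hits = 0
    for _ in range(n_sims):
        cluster_eff = rng.normal(0.0, sd_between, n_clusters)  # level-2 variation
        means = np.array([
            rng.normal(effect * a + u, sd_within, n_per_cluster).mean()
            for a, u in zip(arm, cluster_eff)
        ])
        g1, g0 = means[arm == 1], means[arm == 0]
        se = math.sqrt(g1.var(ddof=1) / len(g1) + g0.var(ddof=1) / len(g0))
        t = (g1.mean() - g0.mean()) / se
        p = math.erfc(abs(t) / math.sqrt(2.0))  # two-sided, normal approximation
        hits += p < alpha
    return hits / n_sims
```

Under the null (`effect=0.0`) the estimated power should hover near `alpha`; raising `effect`, the number of clusters, or the cluster size should push it toward 1.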
  • Source
    • "The I² statistic revealed that systematic differences between studies were responsible for a large part (91%) of the variance in effect sizes across studies (I² = .91) (Higgins et al., 2003), indicating that a far larger share of the variance in effect sizes across studies could be attributed to moderator variables rather than to sampling error. For this reason, we performed moderator analyses. "
    ABSTRACT: Objective: A systematic review and meta-analysis was performed to quantify the effectiveness of active videogames (AVGs) as obesity prevention interventions aimed at children and adolescents. Method: Studies were included if they focused on children or adolescents (≤18 years), used one or more AVGs as the intervention, employed a controlled experimental design, assessed BMI as the outcome measure, and were original studies. Employing these inclusion criteria, nine studies were included in the meta-analysis. Results: Active videogames had a small to medium-sized and significant average effect on children and adolescents: Hedges’ g = 0.38 (95% CI: 0.00 - 0.77). Heterogeneity was substantial (I² = 91%), but neither participants’ weight status, sample size, intervention duration, nor dropout moderated the effect of AVGs. Conclusion: The results of this meta-analysis provide preliminary evidence that active videogames can decrease BMI among children/adolescents.
    Handbook of Research on Holistic Perspectives in Gamification for Clinical Practice, Edited by Daniel Novak, Bengisu Tulu, Havar Brendryen, 10/2015: pages 277-292; IGI Global., ISBN: 9781466695221
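The I² value quoted in this excerpt is derived from Cochran's Q, following the formula in the Higgins et al. (2003) paper these excerpts cite: I² = 100% × (Q − df)/Q, truncated at zero. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def i_squared(effects, variances):
    """Compute Cochran's Q and the I² inconsistency statistic.

    Q is the inverse-variance weighted sum of squared deviations of
    study effects from the fixed-effect pooled estimate;
    I² = max(0, (Q - df) / Q) * 100, with df = number of studies - 1.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = float(np.sum(w * (effects - pooled) ** 2)) # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

With identical study effects, Q and hence I² are zero; as between-study spread grows relative to within-study variance, I² approaches 100%.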
  • Source
    • "The related I² statistic is a percentage that represents the proportion of observed variation that can be attributed to the actual difference between studies, rather than within-study variance. I² thresholds have been proposed (Higgins et al., 2003), with 25%, 50%, and 75% representing low, moderate, and high variance, respectively. The two main advantages of I², compared to the Q-statistic, are that it is not sensitive to the number of studies included and that confidence intervals can also be calculated. "
    ABSTRACT: Meta-analysis synthesizes a body of research investigating a common research question. Outcomes from meta-analyses provide a more objective and transparent summary of a research area than traditional narrative reviews. Moreover, they are often used to support research grant applications, guide clinical practice and direct health policy. The aim of this article is to provide a practical and nontechnical guide for psychological scientists that outlines the steps involved in planning and performing a meta-analysis of correlational datasets. I provide a supplementary R script to demonstrate each analytical step described in the paper, which is readily adaptable for researchers to use for their analyses. While the worked example is the analysis of a correlational dataset, the general meta-analytic process described in this paper is applicable to all types of effect sizes. I also emphasise the importance of meta-analysis protocols and pre-registration to improve transparency and help avoid unintended duplication. An improved understanding of this tool will not only help scientists to conduct their own meta-analyses but also improve their evaluation of published meta-analyses.
    Frontiers in Psychology 09/2015; DOI:10.3389/fpsyg.2015.01549
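For the correlational meta-analysis workflow this abstract describes (there demonstrated in R), the standard first step is Fisher's z transform: each correlation r becomes z = atanh(r) with variance 1/(n − 3), the z values are pooled by inverse-variance weighting, and the result is back-transformed with tanh. A minimal fixed-effect Python sketch; the function name is illustrative and this is not the paper's supplementary script:

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform.

    Each correlation r is transformed to z = atanh(r), whose sampling
    variance is approximately 1/(n - 3); the inverse-variance weighted
    mean of the z values is back-transformed with tanh.
    """
    z = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3.0   # weight = 1 / var(z) = n - 3
    z_pooled = np.sum(w * z) / np.sum(w)
    return float(np.tanh(z_pooled))
```

The pooled value always lies between the smallest and largest input correlations, with larger studies pulling it toward their estimates.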