A basic introduction to fixed‐effect and random‐effects models for meta‐analysis

Research Synthesis Methods 04/2010; 1(2):97–111. DOI: 10.1002/jrsm.12

ABSTRACT There are two popular statistical models for meta-analysis: the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context: it provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models. Copyright © 2010 John Wiley & Sons, Ltd.
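The contrast between the two models can be made concrete with a small numerical sketch. The data below are invented for illustration (they do not come from the paper); the computations are the standard inverse-variance weighting and the DerSimonian-Laird moment estimator of the between-study variance:

```python
# Illustrative sketch with invented data: pooling five hypothetical study
# effect sizes under the fixed-effect model and the random-effects model,
# using inverse-variance weights and the DerSimonian-Laird estimator of
# the between-study variance tau^2.

y = [0.10, 0.30, 0.35, 0.65, 0.45]       # hypothetical effect sizes
v = [0.005, 0.010, 0.008, 0.012, 0.020]  # hypothetical within-study variances

# Fixed-effect model: assumes one common true effect; weight each study by 1/v_i.
w = [1.0 / vi for vi in v]
m_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Heterogeneity: Cochran's Q and the DerSimonian-Laird estimate of tau^2.
q = sum(wi * (yi - m_fe) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects model: assumes a distribution of true effects;
# weight each study by 1/(v_i + tau^2).
w_re = [1.0 / (vi + tau2) for vi in v]
m_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

print(f"fixed-effect estimate:     {m_fe:.3f}")
print(f"tau^2 (DerSimonian-Laird): {tau2:.4f}")
print(f"random-effects estimate:   {m_re:.3f}")
```

Note how the random-effects weights 1/(v_i + tau^2) are more nearly equal than the fixed-effect weights 1/v_i, so small studies receive relatively more influence; when the estimated tau^2 is zero, the two summary estimates coincide.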

  • ABSTRACT: Drug counterfeiting has serious public health and safety implications. The objective of this study was to systematically review the evidence on the effectiveness of interventions to combat or prevent drug counterfeiting. We searched multiple electronic databases and the grey literature up to March 2014. Two reviewers completed, in duplicate and independently, the study selection, data abstraction and risk of bias assessment. We included randomised trials, non-randomised studies, and case studies examining any intervention at the health system-level to combat or prevent drug counterfeiting. Outcomes of interest included changes in failure rates of tested drugs and changes in prevalence of counterfeit medicines. We excluded studies that focused exclusively on substandard, degraded or expired drugs, or that focused on medication errors. We assessed the risk of bias in each included study. We reported the results narratively and, where applicable, we conducted meta-analyses. We included 21 studies representing 25 units of analysis. Overall, we found low quality evidence suggesting positive effects of drug registration (OR=0.23; 95% CI 0.08 to 0.67) and WHO-prequalification of drugs (OR=0.06; 95% CI 0.01 to 0.35) in reducing the prevalence of counterfeit and substandard drugs. Low quality evidence suggests that licensing of drug outlets is probably ineffective (OR=0.66; 95% CI 0.41 to 1.05). Low quality evidence also suggests that multifaceted interventions (including a mix of regulations, training of inspectors, public-private collaborations and legal actions) may be effective. The single RCT provided moderate quality evidence of no effect of 'two extra inspections' in improving drug quality. Policymakers and stakeholders would benefit from registration and WHO-prequalification of drugs and may also consider multifaceted interventions. Future effectiveness studies should address the methodological limitations of the available evidence.
PROSPERO CRD42014009269.
    BMJ Open 01/2015; 5(3):e006290. DOI: 10.1136/bmjopen-2014-006290
  • ABSTRACT: Mediation has to do with the transfer of causality from an independent variable to a dependent variable via a third variable called a "mediator." Because the experimental method is the universally recognized gold standard for establishing causality, we propose that conducting two experiments, one manipulating the independent variable and another manipulating the hypothesized mediator, most rigorously tests mediation hypotheses. When there are several experiments in which the independent variable was manipulated and also several experiments in which the mediator was manipulated, synthesizing these two sets of experiments using meta-analysis yields the ultimate mediation evidence. If these experiments were conducted in the field, both internal validity and external validity would be maximized. An example of the synthesis of multi-experiment mediation tests is provided and its potential and limitations are discussed.
    Human Resource Management Review 03/2015; DOI: 10.1016/j.hrmr.2015.02.001
  • ABSTRACT: It is widely believed that sleep is critical to the consolidation of learning and memory. In some skill domains, performance has been shown to improve by 20% or more following sleep, suggesting that sleep enhances learning. However, recent work suggests that those performance gains may be driven by several factors that are unrelated to sleep consolidation, inviting a reconsideration of sleep's theoretical role in the consolidation of procedural memories. Here we report the first comprehensive investigation of that possibility for the case of motor sequence learning. Quantitative meta-analyses involving 34 articles, 88 experimental groups and 1,296 subjects confirmed the empirical pattern of a large performance gain following sleep and a significantly smaller gain following wakefulness. However, the results also confirm strong moderating effects of 4 previously hypothesized variables: averaging in the calculation of pre-post gain scores, build-up of reactive inhibition over training, time of testing, and training duration, along with 1 supplemental variable, elderly status. With those variables accounted for, there was no evidence that sleep enhances learning. Thus, the literature speaks against, rather than for, the enhancement hypothesis. Overall there was relatively better performance after sleep than after wakefulness, suggesting that sleep may stabilize memory. That effect, however, was not consistent across different experimental designs. We conclude that sleep does not enhance motor learning and that the role of sleep in the stabilization of memory cannot be conclusively determined based on the literature to date. We discuss challenges and opportunities for the field, make recommendations for improved experimental design, and suggest approaches to data analysis that eliminate confounds due to averaging over online learning.
    Psychological Bulletin 03/2015; DOI: 10.1037/bul0000009

