Treatment effect estimates adjusted for small-study effects via a limit meta-analysis

Institute of Medical Biometry and Medical Informatics, University Medical Center, 79104 Freiburg, Germany.
Biostatistics (Impact Factor: 2.65). 01/2011; 12(1):122-42. DOI: 10.1093/biostatistics/kxq046
Source: PubMed

ABSTRACT: Statistical heterogeneity and small-study effects are 2 major issues affecting the validity of meta-analysis. In this article, we introduce the concept of a limit meta-analysis, which leads to shrunken, empirical Bayes estimates of study effects after allowing for small-study effects. This in turn leads to 3 model-based adjusted pooled treatment-effect estimators and associated confidence intervals. We show how our estimators can be visualized on the radial plot, which also indicates how they can be calculated using existing software. The concept of limit meta-analysis also gives rise to a new measure, termed G², for the heterogeneity that remains after small-study effects are accounted for. In a simulation study with binary data and small-study effects, we compared our proposed estimators with those currently used, together with a recent proposal by Moreno and others. Our criteria were bias, mean squared error (MSE), variance, and coverage of 95% confidence intervals. Only the estimators arising from the limit meta-analysis produced approximately unbiased treatment-effect estimates in the presence of small-study effects, while the MSE was acceptably small, provided that the number of studies in the meta-analysis was not less than 10. These limit meta-analysis estimators were also relatively robust against heterogeneity, and one of them had a relatively small coverage error.
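The radial-plot idea in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' limit meta-analysis estimator itself, but the Egger-type radial regression underlying this family of adjustments: regress the standardized effects θ_i/SE_i on the precisions 1/SE_i; the slope extrapolates the treatment effect to a hypothetical study of infinite precision, while the intercept captures the small-study effect. The function name and example data are hypothetical.

```python
import numpy as np

def radial_adjusted_estimate(theta, se):
    """Extrapolate the pooled effect to infinite precision via a
    regression on radial-plot coordinates:
        y_i = theta_i / se_i  regressed on  x_i = 1 / se_i.
    Returns (slope, intercept): the slope is the adjusted effect in the
    limit se -> 0; the intercept reflects the small-study effect."""
    theta = np.asarray(theta, dtype=float)
    se = np.asarray(se, dtype=float)
    x = 1.0 / se      # precision of each study
    y = theta / se    # standardized treatment effect (z-score)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Hypothetical data: observed effects inflated in proportion to SE,
# mimicking a small-study effect on top of a true effect of 0.5.
se = [0.1, 0.2, 0.3, 0.4]
theta = [0.5 + 0.3 * s for s in se]
slope, intercept = radial_adjusted_estimate(theta, se)
```

With this noise-free construction the regression recovers the true effect (slope 0.5) and the small-study component (intercept 0.3) exactly; with real data the slope is only an approximation to the model-based adjusted estimators described in the article.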

  • "When publication bias is detected, an analyst can attempt to account for it [23,28,39,47,49–51]. Although the methods to conduct meta-analysis in the presence of publication bias provide a powerful sensitivity analysis tool, their validity depends on the correctness of strong and unverifiable assumptions [12]."
    ABSTRACT: Introduction: Positive results have a greater chance of being published, and outcomes that are statistically significant have a greater chance of being fully reported. One consequence of research underreporting is that it may influence the sample of studies that is available for a meta-analysis. Smaller studies are often characterized by larger effects in published meta-analyses, which can possibly be explained by publication bias. We investigated the association between the statistical significance of the results and the probability of being included in recent meta-analyses.
    Methods: For meta-analyses of clinical trials, we defined the relative risk as the ratio of the probability of including statistically significant results favoring the treatment to the probability of including other results. For meta-analyses of other studies, we defined the relative risk as the ratio of the probability of including biologically plausible statistically significant results to the probability of including other results. We applied a Bayesian selection model to meta-analyses that included at least 30 studies and were published in four major general medical journals (BMJ, JAMA, Lancet, and PLOS Medicine) between 2008 and 2012.
    Results: We identified 49 meta-analyses. The estimate of the relative risk was greater than one in 42 meta-analyses, greater than two in 16, greater than three in eight, and greater than five in four. In 10 out of 28 meta-analyses of clinical trials, there was strong evidence that statistically significant results favoring the treatment were more likely to be included. In 4 out of 19 meta-analyses of observational studies, there was strong evidence that plausible statistically significant outcomes had a higher probability of being included.
    Conclusions: Publication bias was present in a substantial proportion of large meta-analyses recently published in four major medical journals.
    PLoS ONE 11/2013; 8(11):e81823. DOI:10.1371/journal.pone.0081823 · 3.23 Impact Factor
  • "We used a network meta-regression model extending a regression-based approach for adjusting for small-study effects in conventional MAs [21,22,27-29]. This regression-based approach takes into account a possible small-study effect by allowing the effect size to depend on a measure of its precision."
    ABSTRACT: Background: Network meta-analysis (NMA), a generalization of conventional meta-analysis (MA), allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias. We aimed to extend such methods to NMA.
    Methods: We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published, in which trials with lower propensity are weighted up in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pair-wise comparisons between drugs, the probabilities of being the best drug, and the ranking of drugs. We compared the results from the 2 adjustment models applied to published data with those from NMAs of published data and NMAs of FDA data, the latter considered as representing the totality of the data.
    Results: Both adjustment models showed reduced estimated effects for the 12 drugs relative to placebo as compared with NMA of published data. Pair-wise effect sizes between drugs, probabilities of being the best drug, and ranking of drugs were modified. Estimated drug effects relative to placebo from both adjustment models were corrected (i.e., similar to those from NMA of FDA data) for some drugs but not others, which resulted in differences in pair-wise effect sizes between drugs and in ranking.
    Conclusions: In this case study, adjustment models showed that NMA of published data was not robust to reporting bias and provided estimates closer to those of NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and on the assumption that conventional MAs in the network share a common mean bias mechanism.
    BMC Medical Research Methodology 09/2012; 12(1):150. DOI:10.1186/1471-2288-12-150 · 2.27 Impact Factor
  • "Results will be visually displayed as forest plots. We will adjust for funnel plot asymmetry using a method by Rücker et al. [31,32]."
    ABSTRACT: Several systematic reviews have summarized the evidence for specific treatments of primary care patients suffering from depression. However, because review methods differ, it is not possible to answer the question of how the available treatment options compare with each other. We aim to systematically review and compare the available evidence for the effectiveness of pharmacological, psychological, and combined treatments for patients with depressive disorders in primary care. To be included, studies have to be randomized trials comparing antidepressant medication (tricyclic antidepressants, selective serotonin reuptake inhibitors (SSRIs), hypericum extracts, other agents) and/or psychological therapies (e.g. interpersonal psychotherapy, cognitive therapy, behavioural therapy, short dynamically-oriented psychotherapy) with another active therapy, placebo or sham intervention, routine care, or no treatment in primary care patients in the acute phase of a depressive episode. The main outcome measure is response after completion of acute phase treatment. Eligible studies will be identified from available systematic reviews, from searches in electronic databases (Medline, Embase and Central), trial registers, and citation tracking. Two reviewers will independently extract study data and assess the risk of bias using the Cochrane Collaboration's corresponding tool. Meta-analyses (random effects model, inverse variance weighting) will be performed for direct comparisons of single interventions and for groups of similar interventions (e.g. SSRIs vs. tricyclics) and defined time-windows (up to 3 months and above). If possible, a global analysis of the relative effectiveness of treatments will be estimated from all available direct and indirect evidence present in a network of treatments and comparisons.
    Practitioners not only want to know whether there is evidence that a specific treatment is more effective than placebo, but also how the treatment options compare with each other. Therefore, we believe that a multiple treatment systematic review of primary-care based randomized controlled trials on the most important therapies against depression is timely.
    BMC Family Practice 11/2011; 12(1):127. DOI:10.1186/1471-2296-12-127 · 1.67 Impact Factor
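The protocol above pools trials with a random effects model and inverse variance weighting. A minimal sketch of the standard DerSimonian-Laird version of that model follows; it is a generic illustration of the named technique, not code from any of the cited studies, and the function name and example data are hypothetical.

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Random-effects pooled estimate with inverse-variance weights,
    using the DerSimonian-Laird moment estimator of the between-study
    variance tau^2. Returns (pooled_effect, tau2)."""
    theta = np.asarray(theta, dtype=float)
    var = np.asarray(var, dtype=float)
    w = 1.0 / var                                   # fixed-effect weights
    fixed = np.sum(w * theta) / np.sum(w)           # fixed-effect pool
    q = np.sum(w * (theta - fixed) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)     # truncated at zero
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * theta) / np.sum(w_star)
    return pooled, tau2

# Hypothetical log odds ratios and within-study variances:
pooled, tau2 = dersimonian_laird([0.4, 0.1, 0.3], [0.01, 0.02, 0.04])
```

When the study effects are homogeneous, Q falls below its degrees of freedom, tau² is truncated to zero, and the estimate reduces to the fixed-effect inverse-variance pool.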
