Article

Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview

Centre for Reviews and Dissemination, University of York, York, United Kingdom.
PLoS Medicine (Impact Factor: 14). 05/2011; 8(5):e1001026. DOI: 10.1371/journal.pmed.1001026
Source: PubMed

ABSTRACT: There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies.
Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93-1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance.
Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types. Please see later in the article for the Editors' Summary.
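The comparison statistic described above, the ratio of odds ratios (ROR) between a pooled RCT estimate and a pooled observational estimate, can be sketched numerically. The following is a minimal sketch under standard assumptions (a 95% CI recovered on the log scale, independence of the two pooled estimates); the input figures are illustrative, not taken from the review.

```python
import math

def log_se_from_ci(lo, hi):
    """Standard error of log(OR), recovered from a 95% confidence interval."""
    return (math.log(hi) - math.log(lo)) / (2 * 1.96)

def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs):
    """ROR = OR_RCT / OR_obs, with a 95% CI combined on the log scale."""
    log_ror = math.log(or_rct) - math.log(or_obs)
    se = math.sqrt(log_se_from_ci(*ci_rct) ** 2 + log_se_from_ci(*ci_obs) ** 2)
    lo = math.exp(log_ror - 1.96 * se)
    hi = math.exp(log_ror + 1.96 * se)
    return math.exp(log_ror), (lo, hi)

# Illustrative values only (not results from the review):
ror, ci = ratio_of_odds_ratios(1.20, (0.90, 1.60), 1.10, (0.85, 1.42))
```

An ROR near 1 with a CI spanning 1, as in the pooled result of 1.03 (0.93-1.15) reported above, indicates no systematic difference between the two designs' risk estimates.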

Available from: Su Golder, Aug 03, 2015
    • "Empirical evidence indicates that flaws in the design, conduct, and analysis of trials can lead to bias and distort their effects. Previous meta-epidemiologic studies have assessed the influence of various study characteristics on their effects, including among others indexing in MEDLINE [1], language [2] [3], design [4] [5], methodological characteristics [6], sample size [7–10], and others, with most focus on randomized trials. "
    ABSTRACT: Objectives: To examine the influence of the following study characteristics on their study effect estimates: (1) indexing in MEDLINE, (2) language, and (3) design. For randomized trials, (4) trial size and (5) unequal randomization were also assessed. Study Design and Setting: The CAtegorical Dental and Maxillofacial Outcome Syntheses meta-epidemiologic study was conducted. Eight databases/registers were searched up to September 2012 for meta-analyses of binary outcomes with at least five studies in the field of dental and maxillofacial medicine. The previously mentioned five study characteristics were investigated. The ratio of odds ratios (ROR) according to each characteristic was calculated with random-effects meta-regression and then pooled across meta-analyses. Results: A total of 281 meta-analyses were identified and used to assess the influence of the following factors: non-MEDLINE indexing vs. MEDLINE indexing (n = 78; ROR, 1.12; 95% confidence interval [CI]: 1.05, 1.19; P = 0.001), language (n = 61; P = 0.546), design (n = 24; P = 0.576), small trials (<200 patients) vs. large trials (≥200 patients) (n = 80; ROR, 0.92; 95% CI: 0.87, 0.98; P = 0.009), and unequal randomization (n = 36; P = 0.828). Conclusion: Studies indexed in MEDLINE might present greater effects than non-indexed ones. Small randomized trials might present greater effects than large ones.
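    Pooling log-RORs across meta-analyses, as both the overview and the study above do, is typically done with an inverse-variance random-effects model. Below is a minimal sketch of the DerSimonian-Laird estimator; it is a generic illustration with hypothetical inputs, not the authors' actual analysis code.

    ```python
    import math

    def dersimonian_laird(log_rors, ses):
        """Pool log-RORs across meta-analyses with DerSimonian-Laird random effects."""
        w = [1 / s ** 2 for s in ses]                       # fixed-effect weights
        fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)
        q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))  # Cochran's Q
        df = len(log_rors) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                       # between-study variance
        w_star = [1 / (s ** 2 + tau2) for s in ses]         # random-effects weights
        pooled = sum(wi * y for wi, y in zip(w_star, log_rors)) / sum(w_star)
        se = math.sqrt(1 / sum(w_star))
        lo = math.exp(pooled - 1.96 * se)
        hi = math.exp(pooled + 1.96 * se)
        return math.exp(pooled), (lo, hi)
    ```

    When the estimated between-study variance tau-squared is zero, the random-effects weights collapse to the fixed-effect weights, so the pooled ROR reduces to the ordinary inverse-variance average.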
    Journal of clinical epidemiology 09/2014; 67(9). DOI:10.1016/j.jclinepi.2014.04.002 · 5.48 Impact Factor
  •
    • "Research questions about harms are widely considered to be less at risk of selection bias / confounding because these outcomes are less likely to be indications for administering or withholding an intervention. There is some recent evidence to support this view (Golder et al., 2011). However, confounding can still occur, especially in circumstances in which the adverse event arises in the same biological pathway / organ system as the one targeted by the intended effects of an intervention (Stampfer, 2004). "
    ABSTRACT: Background: Methods need to be further developed to include non-randomised studies (NRS) in systematic reviews of the effects of health care interventions. NRS are often required to answer questions about harms and interventions for which evidence from randomised controlled trials (RCTs) is not available. Methods used to review randomised controlled trials may be inappropriate or insufficient for NRS. Aim and methods: A workshop was convened to discuss relevant methodological issues. Participants were invited from important stakeholder constituencies, including methods and review groups of the Cochrane and Campbell Collaborations, the Cochrane Editorial Unit, and organisations that commission reviews and make health policy decisions. The aim was to discuss methods for reviewing evidence when including NRS and to formulate methodological guidance for review authors. Workshop format: The workshop was structured around four sessions on topics considered in advance to be most critical: (i) study designs and bias; (ii) confounding and meta-analysis; (iii) selective reporting; and (iv) applicability. These sessions were scheduled between introductory and concluding sessions. Summary: This is the first of six papers and provides an overview. Subsequent papers describe the discussions and conclusions from the four main sessions (papers 2 to 5) and summarise the proposed guidance into lists of issues for review authors to consider (paper 6). Copyright © 2013 John Wiley & Sons, Ltd.
    03/2013; 4(1). DOI:10.1002/jrsm.1068
  •
    ABSTRACT: Objectives: To describe the ethical and methodological quality of non-interventional post-authorization studies promoted by Hospital Pharmacy Departments (HPD). Methods: HPD-promoted studies in the 2009-2011 period included in the Spanish Agency of Medicines and Medical Devices (AEMPS) registry and/or published in "Farmacia Hospitalaria" were identified. The most relevant ethical and methodological characteristics were analyzed. Studies promoted by HPD were also compared with studies not promoted by HPD. Results: Twenty-two studies promoted by HPD and registered in the AEMPS were identified. Among the registered studies, HPD-promoted studies less often included a sample size estimation (41.5% vs 80%) and less often had an international scope (0% vs 24%) compared with non-HPD-promoted studies, with significant differences (p < 0.05). None of the studies published in the journal Farmacia Hospitalaria had been registered in the AEMPS, and these had lower methodological quality than the registered HPD-promoted studies in characteristics such as presence of a control group (3.8% vs 27.3%; p = 0.0072) and sample size estimation (19.2% vs 42.8%; p < 0.05). Conclusion: The management and the methodological and ethical characteristics of the studies promoted by HPD should be improved in accordance with the regulation. Registration in the AEMPS might have a positive impact on the quality of these research protocols.
    37(6):482-488.