Article

Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview.

Centre for Reviews and Dissemination, University of York, York, United Kingdom.
PLoS Medicine (Impact Factor: 14). 05/2011; 8(5):e1001026. DOI: 10.1371/journal.pmed.1001026
Source: PubMed

ABSTRACT: There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies.
Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93-1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance.
Empirical evidence from this overview indicates that there is, on average, no difference in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types.
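To make the ratio-of-odds-ratios comparison concrete, here is a minimal sketch in Python (illustrative numbers only, not data from the paper) of how a pooled RCT estimate can be compared with a pooled observational estimate on the log-odds scale, assuming the two estimates are independent and that standard errors can be recovered from the reported 95% confidence intervals. The function names and inputs are hypothetical.

    import math

    def se_from_ci(lower, upper, z=1.96):
        # Recover the standard error of a log odds ratio from its 95% CI.
        return (math.log(upper) - math.log(lower)) / (2 * z)

    def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs, z=1.96):
        # Ratio of odds ratios (RoR) comparing the pooled RCT estimate with
        # the pooled observational estimate, with a 95% CI computed on the
        # log scale under an independence assumption.
        log_ror = math.log(or_rct) - math.log(or_obs)
        se = math.sqrt(se_from_ci(*ci_rct) ** 2 + se_from_ci(*ci_obs) ** 2)
        return (math.exp(log_ror),
                math.exp(log_ror - z * se),
                math.exp(log_ror + z * se))

    # Invented example values; an RoR near 1 means the two designs agree.
    ror, lo, hi = ratio_of_odds_ratios(1.20, (0.90, 1.60), 1.10, (0.85, 1.42))
    print(f"RoR = {ror:.2f} (95% CI {lo:.2f} to {hi:.2f})")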

  • ABSTRACT: Meta-analysis is the gold standard for synthesizing evidence on the effectiveness of health care interventions. However, its validity depends on the quality of the included studies. Here, we investigated whether basic study design (i.e., randomization and timing of data collection) in orthodontic research influences the results of clinical trials. This meta-epidemiologic study used unrestricted electronic and manual searching for meta-analyses in orthodontics. Differences in standardized mean differences (ΔSMD) between interventions and their 95% confidence intervals (CIs) were calculated according to study design through random-effects meta-regression. Effects were then pooled with random-effects meta-analyses. No difference was found between randomized and nonrandomized trials (25 meta-analyses; ΔSMD = 0.07; 95% CI = -0.21, 0.34; P = 0.630). However, retrospective nonrandomized trials reported inflated treatment effects compared with prospective ones (40 meta-analyses; ΔSMD = -0.30; 95% CI = -0.53, -0.06; P = 0.018). No difference was found between randomized trials with adequate sequence generation and those with unclear/inadequate generation (25 meta-analyses; ΔSMD = 0.01; 95% CI = -0.25, 0.26; P = 0.957). Finally, subgroup analyses indicated that the results of randomized and nonrandomized trials differed significantly according to the scope of the trial (effectiveness or adverse effects; P = 0.005). Caution is warranted when interpreting systematic reviews of clinical orthodontic interventions when nonrandomized, and especially retrospective nonrandomized, studies are included in the meta-analysis. (A minimal sketch of this kind of random-effects pooling appears after this list.)
    Journal of Clinical Epidemiology 03/2015; DOI:10.1016/j.jclinepi.2015.03.008 · 5.48 Impact Factor
  • ABSTRACT: A recent issue of the Royal Statistical Society magazine "Significance" had an interesting article about the human tendency to be over-confident, and the authors conclude "At the very least it is important for decision-makers to be aware that people are prone to overconfidence, and that to assume one is not is to unwittingly fall prey to the bias" [1]. From my experience of reviewing medical research articles, I find authors to be very over-confident about the strength of evidence provided by their research. This applies to randomised trials, but especially to observational research. In the same issue of Significance, on page 19, "Dr. Fisher" effectively notes this as well, though he describes his change in perspective when moving from author to referee. Being honest, I think it likely that I have been over-confident in my own research or opinions, but I like to think that in my mature years I have become more realistic, both as author and as referee! OMOP is an empirically based project to find ...
    Drug Safety 10/2013; 36(S1):3-4. DOI:10.1007/s40264-013-0096-9 · 2.62 Impact Factor
  • ABSTRACT: In their comparative analysis of randomised clinical trials and observational studies, Papanikolaou et al. (2006) assert that "it may be unfair to invoke bias and confounding to discredit observational studies as a source of evidence on harms". There are two kinds of answers to the question of why this is so. One is based on metaphysical assumptions, such as the problem of causal sufficiency, modularity, and other statistical assumptions. The other is epistemological and relates to foundational issues and how they determine the constraints we put on evidence. I address here the latter dimension and present recent proposals to amend evidence hierarchies for the purpose of safety assessment of pharmaceuticals; I then relate these suggestions to a case study: the recent debate on the causal association between paracetamol and asthma. The upshot of this analysis is that different epistemologies impose different constraints on the methods we adopt to collect and evaluate evidence; thus they grant "lower level" evidence on distinct grounds and under different conditions. Appreciating this state of affairs illuminates the debate on the epistemic asymmetry concerning benefits and harms, and sets the basis for a foundational, as opposed to heuristic, justification of safety assessment based on heterogeneous evidence.
    Preventive Medicine Reports 12/2014; 1:9-13. DOI:10.1016/j.pmedr.2014.08.002
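The orthodontic meta-epidemiologic study above pools design-related differences in standardized mean differences (ΔSMD) across meta-analyses using random-effects models. As a sketch of that kind of pooling, the Python function below implements the standard DerSimonian-Laird random-effects estimator; the input values are invented for illustration and are not data from any study cited here.

    import math

    def dersimonian_laird(effects, ses):
        # DerSimonian-Laird random-effects pooling of per-meta-analysis
        # effect differences (e.g., delta-SMD between study designs).
        w = [1 / se ** 2 for se in ses]                # inverse-variance weights
        s1 = sum(w)
        pooled_fe = sum(wi * yi for wi, yi in zip(w, effects)) / s1
        q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
        c = s1 - sum(wi ** 2 for wi in w) / s1
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
        w_re = [1 / (se ** 2 + tau2) for se in ses]    # random-effects weights
        pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
        se_pooled = math.sqrt(1 / sum(w_re))
        return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    # Invented delta-SMD values and standard errors for four meta-analyses:
    est, lo, hi = dersimonian_laird([-0.40, -0.10, -0.35, -0.20],
                                    [0.15, 0.20, 0.12, 0.18])
    print(f"pooled delta-SMD = {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")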