Article

Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors.

Nordic Cochrane Centre, Rigshospitalet Department 3343, Blegdamsvej 9, 2100 Copenhagen Ø, Denmark.
BMJ (online) (Impact Factor: 16.38). 02/2012; 344(3):e1119. DOI: 10.1136/bmj.e1119
Source: PubMed

ABSTRACT
Background: We wanted to evaluate the impact of nonblinded outcome assessors on estimated treatment effects in time-to-event trials.
Methods: Systematic review of randomized clinical trials with both blinded and nonblinded assessors of the same time-to-event outcome. Two authors agreed on the inclusion of trials and outcomes. We compared hazard ratios based on nonblinded and blinded assessments. A ratio of hazard ratios (RHR) < 1 indicated that nonblinded assessors generated more optimistic effect estimates. We pooled RHRs with inverse-variance random-effects meta-analysis.
Results: We included 18 trials. Eleven trials (1969 patients) with subjective outcomes provided hazard ratios, RHR 0.88 (0.69 to 1.12; I² = 44%, P = 0.06), but unconditional pooling was problematic because of qualitative heterogeneity. Four atypical cytomegalovirus retinitis trials compared experimental oral administration with control intravenous administration of the same drug, resulting in bias favouring the control intervention, RHR 1.33 (0.98 to 1.82). Seven trials of cytomegalovirus retinitis, tibial fracture, and multiple sclerosis compared experimental interventions with standard control interventions (e.g. placebo, no treatment, or active control), resulting in bias favouring the experimental intervention, RHR 0.73 (0.57 to 0.93), indicating an average exaggeration of nonblinded hazard ratios by 27% (7% to 43%).
Conclusions: Lack of blinded outcome assessors in randomized trials with subjective time-to-event outcomes causes a high risk of observer bias. Nonblinded outcome assessors typically favour the experimental intervention, exaggerating the hazard ratio by an average of approximately 27%; but in special situations, nonblinded outcome assessors favour control interventions, inducing a comparable degree of observer bias in the reversed direction.
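
The pooling described in the Methods can be made concrete with a short sketch. The Python snippet below is illustrative only, not the review's analysis code: the per-trial log RHRs and standard errors are invented, and a DerSimonian-Laird estimator is used as one common implementation of inverse-variance random-effects pooling (the abstract does not name the specific estimator). It pools log ratios of hazard ratios (log RHR = log HR_nonblinded - log HR_blinded) and back-transforms the pooled result.

import math

# Hypothetical per-trial estimates: (log RHR, SE of log RHR). Invented values for illustration.
trials = [
    (math.log(0.80), 0.20),
    (math.log(0.95), 0.15),
    (math.log(0.70), 0.25),
]

def dersimonian_laird(estimates):
    """Inverse-variance random-effects pooling with a DerSimonian-Laird tau^2 estimate."""
    y = [est for est, _ in estimates]
    w = [1.0 / se ** 2 for _, se in estimates]   # fixed-effect (inverse-variance) weights
    k = len(y)

    # Fixed-effect pooled estimate and Cochran's Q statistic.
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))

    # Between-trial variance tau^2, truncated at zero.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights, pooled estimate, 95% CI, and I^2.
    w_re = [1.0 / (se ** 2 + tau2) for _, se in estimates]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, ci, tau2, i2

log_rhr, (lo, hi), tau2, i2 = dersimonian_laird(trials)
print(f"Pooled RHR {math.exp(log_rhr):.2f} "
      f"({math.exp(lo):.2f} to {math.exp(hi):.2f}), I^2 = {i2:.0f}%")

On this scale, a pooled RHR below 1 corresponds to nonblinded assessments yielding more optimistic (smaller) hazard ratios than blinded assessments of the same outcome.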

  • ABSTRACT: Blinded outcome assessment is recommended in open-label trials to reduce bias; however, it is not always feasible. It is therefore important to find other means of reducing bias in these scenarios. We describe two randomised trials where blinded outcome assessment was not possible, and discuss the strategies used to reduce the possibility of bias. TRIGGER was an open-label cluster randomised trial whose primary outcome was further bleeding. Because of the cluster randomisation, all researchers in a hospital were aware of treatment allocation and so could not perform a blinded assessment. A blinded adjudication committee was also not feasible, as it was impossible to compile relevant information to send to the committee in a blinded manner. Therefore, the definition of further bleeding was modified to exclude subjective aspects (such as whether symptoms like vomiting blood were severe enough to indicate the outcome had been met), leaving only objective aspects (the presence versus absence of active bleeding in the upper gastrointestinal tract confirmed by an internal examination). TAPPS was an open-label trial whose primary outcome was whether the patient was referred for a pleural drainage procedure. Allowing a blinded assessor to decide whether to refer the patient for a procedure was not feasible, as many clinicians may be reluctant to enrol patients into the trial if they cannot be involved in their care during follow-up. Assessment by an adjudication committee was not possible, as the outcome either occurred or did not. Therefore, the decision pathway for procedure referral was modified. If a chest x-ray indicated that more than a third of the pleural space was filled with fluid, the patient could be referred for a procedure; otherwise, the unblinded clinician was required to reach a consensus on referral with a blinded assessor. This process allowed the unblinded clinician to be involved in the patient's care, while reducing the potential for bias. When blinded outcome assessment is not possible, it may be useful to modify the outcome definition or method of assessment to reduce the risk of bias. Trial registration: TRIGGER: ISRCTN85757829. Registered 26 July 2012 http://www.controlled-trials.com/ISRCTN85757829/. TAPPS: ISRCTN47845793. Registered 28 May 2012 http://www.controlled-trials.com/ISRCTN47845793.
    Trials 11/2014; 15(1):456. DOI: 10.1186/1745-6215-15-456 · 2.12 Impact Factor
  • ABSTRACT: The degree of bias in randomized clinical trials varies depending on whether the outcome is subjective or objective. Assessment of the risk of bias in a clinical trial will therefore often involve categorization of the type of outcome. Our primary aim was to examine how the concepts "subjective outcome" and "objective outcome" are defined in methodological publications and clinical trial reports. To put this examination into perspective, we also provide an overview of how outcomes are classified more broadly.
    Journal of Clinical Epidemiology 09/2014; DOI: 10.1016/j.jclinepi.2014.06.020 · 5.48 Impact Factor
  • ABSTRACT: The concept of meta-epidemiology was introduced in response to the methodological limitations of systematic reviews of intervention trials. The paradigm of meta-epidemiology has shifted from a statistical method to a new methodology for closing gaps between evidence and practice. The main aim of meta-epidemiology is to control potential biases in previous quantitative systematic reviews and to draw appropriate evidence for establishing evidence-based guidelines. More recently, network meta-epidemiology has been suggested to overcome some limitations of meta-epidemiology. To promote meta-epidemiologic studies, risk-of-bias assessment tools and reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT) should be implemented.
    09/2014; DOI: 10.4178/epih/e2014019