Article

QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies

University of Bristol, United Kingdom.
Annals of Internal Medicine (Impact Factor: 16.1). 10/2011; 155(8):529-36. DOI: 10.7326/0003-4819-155-8-201110180-00009
Source: PubMed

ABSTRACT In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.
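As a rough, purely illustrative sketch of how review authors might record these judgements electronically (the class and field names below are hypothetical and not part of QUADAS-2 itself), the four domains with their risk-of-bias ratings, applicability concerns, and signalling-question answers can be captured in a small Python structure:

    # Illustrative sketch only: recording QUADAS-2 judgements for one primary study.
    # Domain names follow the tool; everything else is hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class DomainAssessment:
        risk_of_bias: str                      # "low", "high", or "unclear"
        applicability_concern: str = None      # None for flow and timing, which has no applicability judgement
        signalling_answers: dict = field(default_factory=dict)   # signalling question -> "yes"/"no"/"unclear"

    @dataclass
    class Quadas2Assessment:
        study_id: str
        patient_selection: DomainAssessment
        index_test: DomainAssessment
        reference_standard: DomainAssessment
        flow_and_timing: DomainAssessment      # assessed for risk of bias only

    # Example judgements for a hypothetical primary study
    assessment = Quadas2Assessment(
        study_id="Example 2010",
        patient_selection=DomainAssessment(
            "low", "low",
            {"Was a consecutive or random sample of patients enrolled?": "yes"}),
        index_test=DomainAssessment("unclear", "low"),
        reference_standard=DomainAssessment("low", "low"),
        flow_and_timing=DomainAssessment("high"),
    )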

  • Source
    ABSTRACT: Screening for atrial fibrillation (AF) using 12-lead electrocardiograms (ECGs) has been recommended; however, the best method for interpreting ECGs to diagnose AF is not known. We compared the accuracy of methods for diagnosing AF from ECGs. We searched MEDLINE, EMBASE, CINAHL and LILACS until March 24, 2014. Two reviewers identified eligible studies, extracted data and appraised quality using the QUADAS-2 instrument. Meta-analysis, using the bivariate hierarchical random effects method, determined average operating points for sensitivities, specificities, and positive and negative likelihood ratios (PLR, NLR), and enabled construction of Summary Receiver Operating Characteristic (SROC) plots. Ten studies investigated 16 methods for interpreting ECGs (n=55,376 participant ECGs). The sensitivity and specificity of automated software (8 studies; 9 methods) were 0.89 (95% CI 0.82-0.93) and 0.99 (95% CI 0.99-0.99), respectively; PLR 96.6 (95% CI 64.2-145.6); NLR 0.11 (95% CI 0.07-0.18). Indirect comparisons with software found that healthcare professionals (5 studies; 7 methods) had similar sensitivity for diagnosing AF but lower specificity [sensitivity 0.92 (95% CI 0.81-0.97), specificity 0.93 (95% CI 0.76-0.98), PLR 13.9 (95% CI 3.5-55.3), NLR 0.09 (95% CI 0.03-0.22)]. Subgroup analyses of primary care professionals found greater specificity for GPs than for nurses [GPs: sensitivity 0.91 (95% CI 0.68-1.00), specificity 0.96 (95% CI 0.89-1.00); nurses: sensitivity 0.88 (95% CI 0.63-1.00), specificity 0.85 (95% CI 0.83-0.87)]. Automated ECG-interpreting software most accurately excluded AF, although its ability to diagnose AF was similar to that of healthcare professionals. Within primary care, the specificity of AF diagnosis from ECG was greater for GPs than for nurses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
    International Journal of Cardiology 02/2015; 184C:175-183. DOI:10.1016/j.ijcard.2015.02.014 · 6.18 Impact Factor
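    For readers unfamiliar with the likelihood ratios reported above, the short Python sketch below shows how positive and negative likelihood ratios are derived from sensitivity and specificity. The inputs are the rounded pooled estimates for automated software from this abstract, so the naive PLR will not exactly reproduce the published 96.6, which comes from the unrounded bivariate hierarchical model rather than from rounded summary points.

        # PLR = sensitivity / (1 - specificity);  NLR = (1 - sensitivity) / specificity
        # Rounded pooled estimates for automated software, taken from the abstract above.
        sensitivity = 0.89
        specificity = 0.99

        plr = sensitivity / (1 - specificity)   # ~89 here; the paper reports 96.6 from the unrounded bivariate model
        nlr = (1 - sensitivity) / specificity   # ~0.11, matching the reported NLR

        print(f"PLR = {plr:.1f}, NLR = {nlr:.2f}")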
  • Source
    ABSTRACT: Objectives: With rapidly increasing numbers of publications, assessments of study quality, reporting quality, and classification of studies according to their level of evidence or developmental stage have become key issues in weighing the relevance of new information reported. Diagnostic marker studies are often criticized for yielding highly discrepant and even controversial results, and much of this discrepancy has been attributed to differences in study quality. Numerous tools for measuring study quality have been developed, but few of them have been used for systematic reviews and meta-analyses, because most tools are complicated and time consuming, suffer from poor reproducibility, and do not permit quantitative scoring. Methods: The International Bladder Cancer Network (IBCN) has taken up this problem and has systematically identified the more commonly used tools developed since 2000. Results: In this review, those tools addressing study quality (Quality Assessment of Studies of Diagnostic Accuracy and Newcastle-Ottawa Scale), reporting quality (Standards for Reporting of Diagnostic Accuracy), and developmental stage (IBCN phases) of studies on diagnostic markers in bladder cancer are introduced and critically analyzed. Based on this, the IBCN has launched an initiative to assess and validate existing tools, with emphasis on diagnostic bladder cancer studies. Conclusions: The development of simple and reproducible tools for quality assessment of diagnostic marker studies, permitting quantitative scoring, is suggested.
    Urologic Oncology 10/2014; DOI:10.1016/j.urolonc.2013.10.003 · 3.36 Impact Factor
  • Source
    ABSTRACT: Timely detection, staging, and treatment initiation are pertinent to controlling HIV infection. CD4+ cell-based point-of-care (POC) devices offer the potential to rapidly stage patients and decide on initiating treatment, but a comparative evaluation of their performance has not yet been performed. With this in mind, we conducted a systematic review and meta-analyses. For the period January 2000 to April 2014, 19 databases were systematically searched, 6619 citations retrieved, and 25 articles selected. Diagnostic performance was compared across devices (i.e., PIMA, CyFlow, miniPOC, MBioCD4 System) and across specimens (i.e., capillary blood vs. venous blood). A Bayesian approach was used to meta-analyze the data. The primary outcome, the Bland–Altman (BA) mean bias (which represents agreement between cell counts from the POC device and flow cytometry), was analyzed with a Bayesian hierarchical normal model. We performed a head-to-head comparison of two POC devices, PIMA and PointCareNOW CD4. PIMA appears to perform better than PointCareNOW with venous samples (BA mean bias: –9.5 cells/μL, 95% CrI: –37.71 to 18.27, vs. 139.3 cells/μL, 95% CrI: –0.85 to 267.4; mean difference = 148.8, 95% CrI: 11.8 to 285.8); however, PIMA performed best when used with capillary samples (BA mean bias: 2.2 cells/μL, 95% CrI: –19.32 to 23.6). Sufficient data were available to allow pooling of sensitivity and specificity only at the 350 cells/μL cutoff; for the PIMA device, sensitivity was 91.6 (84.7–95.5) and specificity was 94.8 (90.1–97.3). There were not sufficient data to allow comparisons between any other devices. The PIMA device was comparable to flow cytometry: the estimated differences between the CD4+ cell counts of the device and the reference were small and best estimated in capillary blood specimens. As the evidence stands, the PointCareNOW device will need to improve prior to widespread use, and more data on MBio and MiniPOC are needed. These findings inform implementation of PIMA and improvements in other CD4 POC devices prior to recommending widespread use.
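    As a hedged illustration of the Bland–Altman mean bias used as the primary outcome in this review (this is not the authors' Bayesian hierarchical analysis, and the paired counts below are invented for the example), the quantity can be computed from paired device and reference measurements as follows:

        # Illustrative only: Bland-Altman mean bias for hypothetical paired CD4+ counts (cells/uL)
        # measured by a POC device and by reference flow cytometry. The review itself pooled
        # study-level biases with a Bayesian hierarchical normal model.
        import numpy as np

        poc_counts  = np.array([310.0, 455.0, 520.0, 180.0, 640.0, 295.0])   # hypothetical device values
        flow_counts = np.array([325.0, 440.0, 545.0, 190.0, 610.0, 320.0])   # hypothetical reference values

        differences = poc_counts - flow_counts
        mean_bias = differences.mean()               # average device-minus-reference difference
        loa = 1.96 * differences.std(ddof=1)         # half-width of the 95% limits of agreement

        print(f"Bland-Altman mean bias: {mean_bias:.1f} cells/uL")
        print(f"95% limits of agreement: {mean_bias - loa:.1f} to {mean_bias + loa:.1f} cells/uL")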