Article

QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies

University of Bristol, United Kingdom.
Annals of Internal Medicine (Impact Factor: 17.81). 10/2011; 155(8):529-36. DOI: 10.7326/0003-4819-155-8-201110180-00009
Source: PubMed

ABSTRACT

In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.
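Purely as an illustration of the structure described above, the sketch below records the four domains and their judgements for a single primary study. The domain names follow the abstract; the data layout, the "low/high/unclear" labels, the example signalling question, and the summary rule are assumptions made for this sketch, not a specification of the published tool.

```python
# Minimal sketch (not the published tool) of recording QUADAS-2 judgements
# for one primary study. Labels and the example question are illustrative.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Domain:
    name: str
    risk_of_bias: str                     # "low", "high", or "unclear"
    applicability_concern: Optional[str]  # None for flow and timing (not assessed)
    signalling_questions: Dict[str, str] = field(default_factory=dict)

study = [
    Domain("patient selection", "low", "low",
           {"Was a consecutive or random sample of patients enrolled?": "yes"}),
    Domain("index test", "unclear", "low"),
    Domain("reference standard", "low", "high"),
    Domain("flow and timing", "high", None),
]

# One common summary rule: a study counts as low risk only if every domain is low risk.
overall = "low risk" if all(d.risk_of_bias == "low" for d in study) else "at risk of bias"
print(overall)  # -> at risk of bias
```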

  • Source
    • "To examine the potential effect of selection bias on prevalence estimates, we additionally assessed whether the selection of study participants could have influenced the reported results. A study was judged as having a low risk of bias if a consecutive or a random sample of participants in the population with the targeted diagnosis was enrolled or if the study avoided inappropriate exclusions; an unknown risk of bias if no information was provided on the selection of participants or a high risk of bias if a purposive or a convenience sample of participants was enrolled (Whiting et al. 2011). We hypothesized that studies having a high risk of bias in participants' selections (lack of random sampling, targeting of participants because of their increased risk of displaying stereotypy) would produce higher prevalence estimates of stereotypy than studies with a low selection bias. "
    ABSTRACT: Although many researchers have examined the prevalence of stereotypy in individuals with developmental disabilities, the results of previous studies have not been aggregated and analyzed methodically. Thus, we conducted a systematic review of studies reporting the prevalence of stereotypy in individuals with developmental disabilities. Our results indicated that the average prevalence of stereotypy across studies was 61% and that individuals with autism spectrum disorders had the highest reported prevalence (i.e., 88%) across specific diagnoses. Children and adults generally had similar overall prevalence measures, but the specific forms varied with age and diagnosis. Studies using the Repetitive Behavior Scale – Revised and the Autism Diagnostic Schedule – Revised generally reported higher estimates of prevalence of specific forms of stereotypy when compared to the Behavior Problem Inventory. However, the latter seemed more sensitive than the Aberrant Behavior Checklist for overall prevalence. Studies with a low risk of bias found a lower prevalence of stereotypy than those with a high risk of bias. Our systematic review underlines the importance of continuing research efforts to improve the assessment and treatment of stereotypy in individuals with developmental disabilities.
    Preview · Article · Jan 2016
  • Source
    • "To assess the methodological quality of the included papers, the two reviewers (JHL, LDB) used an adapted version of the QUADAS-2 tool [16]. The adapted version of this tool can be found in eSupplement 1. Extracted data include: characteristics of the study (first author, year, country, condition studied, number of included patients), methods used (pre-processing, dimension reduction, feature selection, classification algorithm), performance measure(s), and validation method. "
    ABSTRACT: Currently, many different methods are being used for pre-processing, statistical analysis and validation of data obtained by electronic nose technology from exhaled air. These various methods, however, have never been thoroughly compared. We aimed to empirically evaluate and compare the influence of different dimension reduction, classification and validation methods found in published studies on the diagnostic performance in several datasets. Our objective was to facilitate the selection of appropriate statistical methods and to support reviewers in this research area. We reviewed the literature by searching PubMed up to the end of 2014 for all human studies using an electronic nose, and methodological quality was assessed using the QUADAS-2 tool tailored to our review. Forty-six studies were evaluated regarding the range of different approaches to dimension reduction, classification and validation. Of the forty-six reviewed articles, only seven applied external validation in an independent dataset, mostly with a case-control design. We asked their authors to share the original datasets with us. Four of the seven datasets were available for re-analysis. Published statistical methods for eNose signal analysis found in the literature review were applied to the training set of each dataset. The performance (area under the receiver operating characteristic curve (ROC-AUC)) was calculated for the training cohort (in-set) and after internal validation (leave-one-out cross validation). The methods were also applied to the external validation set to assess the external validity of the performance. Risk of bias was high in most studies due to non-random selection of patients. Internal validation resulted in a decrease in ROC-AUCs compared to in-set performance: -0.15, -0.14, -0.10, and -0.11 in datasets 1 through 4, respectively. External validation resulted in lower ROC-AUCs compared to internal validation in datasets 1 (-0.23) and 3 (-0.09). ROC-AUCs did not decrease in datasets 2 (+0.07) and 4 (+0.04). No single combination of dimension reduction and classification methods gave consistent results between internal and external validation sets in this sample of four datasets. This empirical evaluation showed that it is not meaningful to estimate the diagnostic performance on a training set alone, even after internal validation. Therefore, we recommend the inclusion of an external validation set in all future eNose projects in medicine.
    Full-text · Article · Dec 2015 · Journal of Breath Research
  • Source
    • "The following information was extracted from the included studies: the name of the first author, year of publication, study design, number of participants in each group, participants' age and gender, and the major outcomes. Quality of the included studies was evaluated using QUADAS[20]. "
    ABSTRACT: The purpose of this meta-analysis was to evaluate the sensitivity and specificity of computed tomography perfusion (CTP) in diagnosing acute ischemic stroke in patients presenting to the emergency department with stroke-like symptoms.
    Full-text · Article · Nov 2015 · Journal of the Neurological Sciences
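The first source above summarises how participant selection was judged against the QUADAS-2 patient-selection domain. Purely as an illustration of that rule of thumb, the sketch below encodes it in code; the function name, category labels, and sampling keywords are assumptions made for this sketch, not part of QUADAS-2 or of the cited review.

```python
# Illustrative only: a rule-of-thumb encoding of the participant-selection
# judgement summarised in the first source above (after Whiting et al. 2011).
# The function name and category labels are assumptions for this sketch.
from typing import Optional

def selection_risk_of_bias(sampling: Optional[str]) -> str:
    """Return "low", "high", or "unclear" for the patient-selection domain."""
    if sampling is None:
        return "unclear"   # no information on how participants were selected
    if sampling in {"consecutive", "random"}:
        return "low"       # consecutive or random sampling
    if sampling in {"purposive", "convenience"}:
        return "high"      # participants targeted or sampled for convenience
    return "unclear"       # anything else needs a case-by-case judgement

print(selection_risk_of_bias("random"))       # -> low
print(selection_risk_of_bias("convenience"))  # -> high
print(selection_risk_of_bias(None))           # -> unclear
```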