Grading quality of evidence and strength of recommendations for diagnostic tests and strategies.

Department of Epidemiology, Italian National Cancer Institute Regina Elena, 00144 Rome, Italy.
BMJ (online) (Impact Factor: 16.38). 06/2008; 336(7653):1106-10. DOI: 10.1136/bmj.39500.677199.AE
Source: PubMed

ABSTRACT The GRADE system can be used to grade the quality of evidence and strength of recommendations for diagnostic tests or strategies. This article explains how patient-important outcomes are taken into account in this process.

Summary points
  • As for other interventions, the GRADE approach to grading the quality of evidence and strength of recommendations for diagnostic tests or strategies provides a comprehensive and transparent approach for developing recommendations
  • Cross sectional or cohort studies can provide high quality evidence of test accuracy
  • However, test accuracy is a surrogate for patient-important outcomes, so such studies often provide low quality evidence for recommendations about diagnostic tests, even when the studies do not have serious limitations
  • Inferring from data on accuracy that a diagnostic test or strategy improves patient-important outcomes requires the availability of effective treatment, reduction of test related adverse effects or anxiety, or improvement of patients’ wellbeing from prognostic information
  • Judgments are thus needed to assess the directness of test results in relation to consequences of diagnostic recommendations that are important to patients

In this fourth article of the five part series, we describe how guideline developers are using GRADE to rate the quality of evidence and move from evidence to a recommendation for diagnostic tests and strategies. Although recommendations on diagnostic testing share the fundamental logic of recommendations on treatment, they present unique challenges.
We will describe why guideline panels should be cautious when they use evidence of the accuracy of tests (“test accuracy”) as the basis for recommendations and why evidence of test accuracy often provides low quality evidence for making recommendations.

Testing makes a variety of contributions to patient care
Clinicians use tests that are usually referred to as “diagnostic”—including signs and symptoms, imaging, biochemistry, pathology, and psychological testing—for various purposes.1 These purposes include identifying physiological derangements, establishing prognosis, monitoring illness and response to treatment, and diagnosis. This article …
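The “test accuracy” discussed above is conventionally summarised from a 2×2 cross-classification of the index test against a reference standard. As a minimal sketch of that arithmetic (the counts below are invented for illustration, not taken from the article):

```python
# Hypothetical 2x2 table of an index test against a reference standard.
# The counts are invented, chosen only to show the calculations.
tp, fp, fn, tn = 90, 50, 10, 850  # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)  # P(test positive | disease present)
specificity = tn / (tn + fp)  # P(test negative | disease absent)
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

Note that, as the article argues, none of these figures by itself says anything about patient-important outcomes; they only describe how the test classifies patients relative to the reference standard.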

    ABSTRACT: Context: Epidemiological studies and animal models demonstrate that endocrine disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities.
    Objective: To estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union.
    Design: An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder (ASD), and annual costs for attention deficit hyperactivity disorder (ADHD).
    Setting, Patients and Participants, and Intervention: Cost estimation was carried out from a societal perspective, i.e. including direct costs (e.g. treatment costs) and indirect costs such as productivity loss.
    Results: The panel identified a 70-100% probability that polybrominated diphenyl ether (PBDE) and organophosphate (OP) exposures contribute to IQ loss in the European population. PBDE exposures were associated with 873,000 (sensitivity analysis: 148,000-2.02 million) lost IQ points and 3,290 (sensitivity analysis: 3,290-8,080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis: €1.58-22.4 billion). OP exposures were associated with 13.0 million (sensitivity analysis: 4.24-17.1 million) lost IQ points and 59,300 (sensitivity analysis: 16,500-84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis: €46.8-194 billion). ASD causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis: 126-631) attributable cases at a cost of €199 million (sensitivity analysis: €79.7-399 million). ADHD causation by multiple EDCs was assigned a 20-69% probability, with 19,300-31,200 attributable cases at a cost of €1.21-2.86 billion.
    Conclusions: EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of costs exceeding €150 billion per year. These results emphasize the advantages of controlling EDC exposure.
    ABSTRACT: When the efficacy of a new medical drug is compared against that of an established competitor in a randomized controlled trial, the difference in patient-relevant outcomes, such as mortality, is usually measured directly. In diagnostic research, however, the impact of diagnostic procedures is of an indirect nature, as test results do influence downstream clinical decisions, but test performance (as characterized by sensitivity, specificity, and the predictive values of a procedure) is, at best, only a surrogate endpoint for patient outcome and does not necessarily translate into it. Not many randomized controlled trials have been conducted so far in diagnostic research, and, hence, we need alternative approaches to close the gap between test characteristics and patient outcomes. Several informal approaches have been suggested in order to close this gap, and decision modeling has been advocated as a means of obtaining formal approaches. Recently, the expected benefit has been proposed as a quantity that allows a simple formal approach, and we take up this suggestion in this paper. We regard the expected benefit as an estimation problem and consider two approaches to statistical inference. Moreover, using data from a previously published study, we illustrate the possible insights to be gained from the application of formal inference techniques to determine the expected benefit.
    Biometrical Journal 03/2015; DOI:10.1002/bimj.201400020 · 1.24 Impact Factor
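The abstract above does not spell out how an expected benefit links test accuracy to patient outcomes. One standard decision-analytic formalization (a hedged sketch only; the utilities, accuracy figures, and the comparison strategy below are invented assumptions, not values or definitions from the paper) weights each of the four possible test-and-treat outcomes by its probability:

```python
# Hedged sketch: expected utility of a "test, and treat if positive" strategy.
# All numeric values are hypothetical assumptions for illustration; the
# formalization is a generic decision-analytic one, not the paper's own.

def expected_benefit(prev, sens, spec, u_tp, u_fn, u_fp, u_tn):
    """Expected utility per patient under test-and-treat-if-positive."""
    return (prev * sens * u_tp                # diseased, detected and treated
            + prev * (1 - sens) * u_fn        # diseased, missed
            + (1 - prev) * (1 - spec) * u_fp  # healthy, treated unnecessarily
            + (1 - prev) * spec * u_tn)       # healthy, correctly spared

# Compare against a "treat no one" strategy with the same utilities:
prev, sens, spec = 0.10, 0.90, 0.95
eb_test = expected_benefit(prev, sens, spec,
                           u_tp=0.9, u_fn=0.3, u_fp=0.8, u_tn=1.0)
eb_none = prev * 0.3 + (1 - prev) * 1.0
print(round(eb_test - eb_none, 4))  # positive => testing expected to help
```

The point of such a model is exactly the one the abstract makes: sensitivity and specificity enter only as inputs, and whether testing helps depends on prevalence and on the downstream consequences of each classification.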
