Article

Better information for better health care: The Evidence-based Practice Center program and the Agency for Healthcare Research and Quality

Agency for Healthcare Research and Quality, Rockville, Maryland 20850, USA.
Annals of Internal Medicine (Impact Factor: 16.1). 07/2005; 142(12 Pt 2):1035-41.
Source: PubMed

ABSTRACT: To provide decision makers with the best available evidence, the Agency for Healthcare Research and Quality established a network of Evidence-based Practice Centers across North America. The centers perform systematic reviews on important questions posed by partner organizations about clinical, organizational, and policy interventions in health care. The Agency works closely with partners and other decision makers to help translate that evidence into practice or policy. In this paper, we review important lessons we have learned over the past 7 years about how to increase the efficiency and impact of systematic reviews. Lessons concern selecting the right topics and scope, working effectively with partners, and balancing consistency and flexibility in methods. We examine the continuing evolution of the program and the impact of planned work on comparative effectiveness performed as part of the Medicare Modernization Act of 2003.

    • "“Grading” refers to the assessment of the strength of the body of evidence supporting a given statement or conclusion rather than to the quality of an individual study.1 Grading can be valuable for providing information to decisionmakers, such as guideline panels, clinicians, caregivers, insurers and patients who wish to use an evidence synthesis to promote improved patient outcomes.1,2 In particular, such grades allow decisionmakers to assess the degree to which any decision can be based on bodies of evidence that are of high, moderate, or only low strength of evidence. "
    ABSTRACT: INTRODUCTION: Grading the strength of a body of diagnostic test evidence involves challenges over and above those related to grading the evidence from health care intervention studies. This chapter identifies challenges and outlines principles for grading the body of evidence related to diagnostic test performance. CHALLENGES: Diagnostic test evidence is challenging to grade because standard tools for grading evidence were designed for questions about treatment rather than diagnostic testing, and because the clinical usefulness of a diagnostic test depends on multiple links in a chain of evidence connecting the performance of a test to changes in clinical outcomes. PRINCIPLES: Reviewers grading the strength of a body of evidence on diagnostic tests should consider the principal domains of risk of bias, directness, consistency, and precision, as well as publication bias, dose-response association, plausible unmeasured confounders that would decrease an effect, and strength of association, similar to what is done to grade evidence on treatment interventions. Given that most evidence regarding the clinical value of diagnostic tests is indirect, an analytic framework must be developed to clarify the key questions, and the strength of evidence for each link in that framework should be graded separately. However, if reviewers choose to combine domains into a single grade of evidence, they should explain their rationale for a particular summary grade and the relevant domains that were weighed in assigning the summary grade.
    Journal of General Internal Medicine 06/2012; 27 Suppl 1(S1):S47-55. DOI:10.1007/s11606-012-2021-9 · 3.42 Impact Factor
    • "Specifically, nonrandomized designs may increase the evidence base regarding long-term outcomes and safety. Nonrandomized studies may also be used to identify current limitations in evidence, recommend the types of studies that would provide stronger evidence, and guide future research [5] [6]. "
    ABSTRACT: To develop and test a study design classification tool. We contacted relevant organizations and individuals to identify tools used to classify study designs and ranked these using predefined criteria. The highest ranked tool was a design algorithm developed, but no longer advocated, by the Cochrane Non-Randomized Studies Methods Group; this was modified to include additional study designs and decision points. We developed a reference classification for 30 studies; 6 testers applied the tool to these studies. Interrater reliability (Fleiss' κ) and accuracy against the reference classification were assessed. The tool was further revised and retested. Initial reliability was fair among the testers (κ=0.26) and the reference standard raters (κ=0.33). Testing after revisions showed improved reliability (κ=0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were whether the study design was experimental (5 of 15 studies) and whether there was a comparison of any kind (4 of 15 studies). Agreement was higher among testers who had completed graduate-level training than among those who had not. The moderate reliability and low accuracy may be because of lack of clarity and comprehensiveness of the tool, inadequate reporting of the studies, and variability in tester characteristics. The results may not be generalizable to all published studies, as the test studies were selected because they had posed challenges for previous reviewers with respect to their design classification. Application of such a tool should be accompanied by training, pilot testing, and context-specific decision rules.
    Journal of clinical epidemiology 04/2011; 64(8):861-71. DOI:10.1016/j.jclinepi.2011.01.010 · 5.48 Impact Factor
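
The reliability figures quoted in the entry above (κ = 0.26 to 0.45) are Fleiss' kappa values, a standard measure of agreement among multiple raters beyond what chance alone would produce. Purely as an illustration of how that statistic is computed, the following Python sketch applies the usual Fleiss' kappa formula to an invented rating matrix; the counts are hypothetical and are not data from the study.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters who placed subject i in category j;
    every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])

    # Proportion of all assignments that fall in each category.
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(n_categories)]

    # Per-subject agreement: the fraction of rater pairs that agree.
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]

    P_bar = sum(P_i) / n_subjects      # observed agreement
    P_e = sum(p * p for p in p_j)      # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 5 studies, 6 raters, 3 design categories (e.g., RCT, cohort, other).
ratings = [
    [4, 1, 1],
    [2, 3, 1],
    [0, 5, 1],
    [3, 3, 0],
    [6, 0, 0],
]
print(round(fleiss_kappa(ratings), 2))  # about 0.22 with these invented counts

With these toy counts the function returns roughly 0.22, which, like the study's initial result, would conventionally be read as only fair agreement.
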
    • "These reports are designed to accurately and transparently summarize a body of literature with the goal of helping clinicians, policymakers , and patients make well-informed decisions about health care. Reviews should provide clear judgments about the strength of the evidence that underlies conclusions to enable decision makers to use them effectively [2]. "
    ABSTRACT: To establish guidance on grading strength of evidence for the Evidence-based Practice Center (EPC) program of the U.S. Agency for Healthcare Research and Quality. Authors reviewed authoritative systems for grading strength of evidence, identified domains and methods that should be considered when grading bodies of evidence in systematic reviews, considered public comments on an earlier draft, and discussed the approach with representatives of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group. The EPC approach is conceptually similar to the GRADE system of evidence rating; it requires assessment of four domains: risk of bias, consistency, directness, and precision. Additional domains to be used when appropriate include dose-response association, presence of confounders that would diminish an observed effect, strength of association, and publication bias. Strength of evidence receives a single grade: high, moderate, low, or insufficient. We give definitions, examples, mechanisms for scoring domains, and an approach for assigning strength of evidence. EPCs should grade strength of evidence separately for each major outcome and, for comparative effectiveness reviews, all major comparisons. We will collaborate with the GRADE group to address ongoing challenges in assessing the strength of evidence.
    Journal of clinical epidemiology 08/2009; 63(5):513-23. DOI:10.1016/j.jclinepi.2009.03.009 · 5.48 Impact Factor
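
The EPC guidance summarized in the entry above treats grading as a structured judgment rather than a calculation: reviewers rate the required domains (risk of bias, consistency, directness, precision) for each major outcome and then assign a single grade of high, moderate, low, or insufficient. As a hypothetical bookkeeping sketch of how per-outcome domain ratings might be recorded and rolled up, here is a short Python example; the one-level-downgrade-per-unfavorable-domain rule is invented for illustration and is not the EPC or GRADE procedure.

from dataclasses import dataclass

GRADES = ["high", "moderate", "low", "insufficient"]

@dataclass
class DomainAssessment:
    risk_of_bias: str   # "low", "medium", or "high"
    consistency: str    # "consistent", "inconsistent", or "unknown"
    directness: str     # "direct" or "indirect"
    precision: str      # "precise" or "imprecise"

def summary_grade(domains: DomainAssessment, start: str = "high") -> str:
    """Step down one grade for each unfavorable required domain.
    This rule is a toy simplification, not the EPC/GRADE method."""
    level = GRADES.index(start)
    unfavorable = [
        domains.risk_of_bias != "low",
        domains.consistency != "consistent",
        domains.directness != "direct",
        domains.precision != "precise",
    ]
    return GRADES[min(level + sum(unfavorable), len(GRADES) - 1)]

# Example: direct, precise trial evidence with medium risk of bias and
# inconsistent results lands at "low" under this invented rule.
print(summary_grade(DomainAssessment("medium", "inconsistent", "direct", "precise")))

In practice, EPC reviewers weigh the domains qualitatively and grade each major outcome, and each major comparison in a comparative effectiveness review, separately; a rollup like this could at most serve as a record-keeping aid.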