Audit and feedback: effects on professional practice and healthcare outcomes.

Department of Family Medicine, Women's College Hospital, Toronto, Canada; Norwegian Knowledge Centre for the Health Services, Oslo, Norway.
Cochrane Database of Systematic Reviews (Online) (Impact Factor: 5.94). 01/2012; 6:CD000259. DOI: 10.1002/14651858.CD000259.pub3
Source: PubMed

ABSTRACT
BACKGROUND: Audit and feedback is widely used as a strategy to improve professional practice, either on its own or as a component of multifaceted quality improvement interventions. This is based on the belief that healthcare professionals are prompted to modify their practice when given performance feedback showing that their clinical practice is inconsistent with a desirable target. Despite its prevalence as a quality improvement strategy, there remains uncertainty regarding both the effectiveness of audit and feedback in improving healthcare practice and the characteristics of audit and feedback that lead to greater impact.
OBJECTIVES: To assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes and to examine factors that may explain variation in the effectiveness of audit and feedback.
SEARCH METHODS: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2010, Issue 4, part of The Cochrane Library, including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register (searched 10 December 2010); MEDLINE, Ovid (1950 to November Week 3 2010) (searched 09 December 2010); EMBASE, Ovid (1980 to 2010 Week 48) (searched 09 December 2010); CINAHL, Ebsco (1981 to present) (searched 10 December 2010); and Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to present) (searched 12-15 September 2011).
SELECTION CRITERIA: Randomised trials of audit and feedback (defined as a summary of clinical performance over a specified period of time) that reported objectively measured health professional practice or patient outcomes. In the case of multifaceted interventions, only trials in which audit and feedback was considered the core, essential aspect of at least one intervention arm were included.
DATA COLLECTION AND ANALYSIS: All data were abstracted by two independent review authors. For the primary outcome(s) in each study, we calculated the median absolute risk difference (RD) (adjusted for baseline performance) of compliance with desired practice for dichotomous outcomes and the median percent change relative to the control group for continuous outcomes. Across studies, the median effect size was weighted by the number of health professionals involved in each study. We investigated the following factors as possible explanations for variation in the effectiveness of interventions across comparisons: format of feedback, source of feedback, frequency of feedback, instructions for improvement, direction of change required, baseline performance, profession of recipient, and risk of bias within the trial itself. We also conducted exploratory analyses to assess the role of context and the targeted clinical behaviour. Quantitative (meta-regression), visual, and qualitative analyses were undertaken to examine variation in effect size related to these factors.
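The weighted-median summary described above can be sketched in a few lines. This is a hypothetical illustration, not the review's actual analysis code: the function name and the example values are invented, and each comparison simply contributes its baseline-adjusted RD with the study's number of health professionals as its weight.

```python
def weighted_median(values, weights):
    """Return the weighted median: the smallest value at which the
    cumulative weight reaches half of the total weight."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= total / 2:
            return value
    return pairs[-1][0]

# Illustrative, made-up comparisons: adjusted RD in percentage points,
# weighted by the number of health professionals in each study.
adjusted_rd = [0.5, 4.3, 16.0, 2.1]
n_professionals = [120, 300, 80, 150]

print(weighted_median(adjusted_rd, n_professionals))  # prints 4.3
```

Weighting by professionals (rather than treating each comparison equally) gives large multi-site studies more influence on the summary, which is the design choice the paragraph above describes.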
MAIN RESULTS: We included and analysed 140 studies for this review. In the main analyses, a total of 108 comparisons from 70 studies compared any intervention in which audit and feedback was a core, essential component to usual care and evaluated effects on professional practice. After excluding studies at high risk of bias, there were 82 comparisons from 49 studies featuring dichotomous outcomes, and the weighted median adjusted RD was a 4.3% (interquartile range (IQR) 0.5% to 16%) absolute increase in healthcare professionals' compliance with desired practice. Across 26 comparisons from 21 studies with continuous outcomes, the weighted median adjusted percent change relative to control was 1.3% (IQR = 1.3% to 28.9%). For patient outcomes, the weighted median RD was -0.4% (IQR -1.3% to 1.6%) for 12 comparisons from six studies reporting dichotomous outcomes, and the weighted median percentage change was 17% (IQR 1.5% to 17%) for eight comparisons from five studies reporting continuous outcomes. Multivariable meta-regression indicated that feedback may be more effective when baseline performance is low, the source is a supervisor or colleague, it is provided more than once, it is delivered in both verbal and written formats, and when it includes both explicit targets and an action plan. In addition, the effect size varied based on the clinical behaviour targeted by the intervention.
AUTHORS' CONCLUSIONS: Audit and feedback generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback seems to depend on baseline performance and how the feedback is provided. Future studies of audit and feedback should directly compare different ways of providing feedback.

  • ABSTRACT: Despite effective treatments to reduce cardiovascular disease risk, their translation into practice is limited. Using a parallel-arm cluster-randomized controlled trial in 60 Australian primary healthcare centers, we tested whether a multifaceted quality improvement intervention comprising computerized decision support, audit/feedback tools, and staff training improved (1) guideline-indicated risk factor measurements and (2) guideline-indicated medications for those at high cardiovascular disease risk. Centers had to use a compatible software system, and eligible patients were regular attendees (Aboriginal and Torres Strait Islander people aged ≥35 years and others aged ≥45 years). Patient-level analyses were conducted using generalized estimating equations to account for clustering. Median follow-up for 38 725 patients (mean age, 61.0 years; 42% men) was 17.5 months. Mean monthly staff support was <1 hour/site. For the coprimary outcomes, the intervention was associated with improved overall risk factor measurements (62.8% versus 53.4%; risk ratio, 1.25; 95% confidence interval, 1.04-1.50; P=0.02), but there were no significant differences in recommended prescriptions for the high-risk cohort (n=10 308; 56.8% versus 51.2%; P=0.12). There were significant treatment escalations (new prescriptions or increased numbers of medicines) for antiplatelet (17.9% versus 2.7%; P<0.001), lipid-lowering (19.2% versus 4.8%; P<0.001), and blood pressure-lowering medications (23.3% versus 12.1%; P=0.02). In Australian primary healthcare settings, a computer-guided quality improvement intervention, requiring minimal support, improved cardiovascular disease risk measurement but did not increase prescription rates in the high-risk group. Computerized quality improvement tools offer an important, albeit partial, solution to improving primary healthcare system capacity for cardiovascular disease risk management. Australian New Zealand Clinical Trials Registry No. 12611000478910. © 2015 American Heart Association, Inc.
    Circulation: Cardiovascular Quality and Outcomes 01/2015 · 5.66 Impact Factor
  • ABSTRACT: Our nation's suboptimal health care quality and unsustainable costs can be linked to the failure to implement evidence-based interventions. Implementation is the bridge between the decision to adopt a strategy and its sustained use in practice. The purpose of this case report is: 1) to outline the historical implementation of an evidence-based quality improvement project; 2) to describe the program's future direction, employing a systems perspective to identify implementation barriers; and 3) to provide implications for the profession as it works toward closing the evidence-to-practice gap. UPMC Centers for Rehab Services is a large, multi-center physical therapy organization. In 2005, the organization implemented a Low Back Initiative utilizing evidence-based protocols to guide clinical decision making. The initial implementation strategy used a multifaceted approach, and formative evaluations were used repeatedly to identify barriers to implementation. Barriers may exist outside the organization; they can be created internally; they may result from personnel; or they may be a direct function of the research evidence. Since the program launch, three distinct improvement cycles have been used to address identified implementation barriers. Implementation is an iterative process requiring evaluation, measurement, and refinement. During this period, behavior change is actualized as clinicians become increasingly proficient and committed to their use of new evidence. Successfully incorporating evidence into routine practice requires a systems perspective to account for the complexity of the clinical setting. The value the profession provides can be enhanced by improving the implementation of evidence-based strategies. Achieving this outcome will require a concerted effort in all areas of the profession. New skills will be needed by leaders, researchers, managers, and clinicians. © 2015 American Physical Therapy Association.
    Physical Therapy 01/2015 · 3.25 Impact Factor
  • European Journal of Hospital Pharmacy 12/2014; 22(1):32-37. · 0.47 Impact Factor
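The cluster-trial abstract above summarizes its coprimary outcome as a risk ratio with a 95% confidence interval. As a rough sketch of how such an interval is formed, the standard log-transformation method is shown below with invented counts; note that this deliberately ignores the clustering adjustment the trial's generalized-estimating-equations analysis performs, so it illustrates the summary statistic, not the trial's method.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Unadjusted risk ratio with a confidence interval via the
    log transformation: SE(log RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    p_tx = events_tx / n_tx
    p_ctrl = events_ctrl / n_ctrl
    rr = p_tx / p_ctrl
    se_log = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Made-up counts chosen only to mirror the 62.8% vs 53.4% proportions;
# they are not the trial's data.
rr, lo, hi = risk_ratio_ci(628, 1000, 534, 1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Because ordinary-proportion standard errors understate the variance in cluster-randomized data, a real analysis widens this interval by accounting for within-center correlation, which is why the trial reports GEE-based estimates.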


Available from: Jan 6, 2015