Instruments for evaluating education in evidence-based practice - A systematic review

Department of Medicine, University of Alabama School of Medicine, and Department of Veterans Affairs Medical Center, Birmingham, AL 35233, USA.
JAMA: The Journal of the American Medical Association (Impact Factor: 30.39). 10/2006; 296(9):1116-27. DOI: 10.1001/jama.296.9.1116
Source: PubMed

ABSTRACT
Context: Evidence-based practice (EBP) is the integration of the best research evidence with patients' values and clinical circumstances in clinical decision making. Teaching of EBP should be evaluated and guided by evidence of its own effectiveness.
Objective: To appraise, summarize, and describe currently available EBP teaching evaluation instruments.
Data Sources and Study Selection: We searched the MEDLINE, EMBASE, CINAHL, HAPI, and ERIC databases; reference lists of retrieved articles; EBP Internet sites; and 8 education journals from 1980 through April 2006. For inclusion, studies had to report an instrument evaluating EBP, contain sufficient description to permit analysis, and present quantitative results of administering the instrument.
Data Extraction: Two raters independently abstracted information on the development, format, learner levels, evaluation domains, feasibility, reliability, and validity of the EBP evaluation instruments from each article. We defined 3 levels of instruments based on the type, extent, methods, and results of psychometric testing and on their suitability for different evaluation purposes.
Data Synthesis: Of 347 articles identified, 115 were included, representing 104 unique instruments. The instruments were most commonly administered to medical students and postgraduate trainees and evaluated EBP skills. Among EBP skills, acquiring evidence and appraising evidence were most commonly evaluated, but newer instruments evaluated asking answerable questions and applying evidence to individual patients. Most behavior instruments measured the performance of EBP steps in practice, but newer instruments documented the performance of evidence-based clinical maneuvers or patient-level outcomes. At least 1 type of validity evidence was demonstrated for 53% of instruments, but 3 or more types of validity evidence were established for only 10%. High-quality instruments were identified for evaluating the EBP competence of individual trainees, determining the effectiveness of EBP curricula, and assessing EBP behaviors with objective outcome measures.
Conclusions: Instruments with reasonable validity are available for evaluating some domains of EBP and may be targeted to different evaluation needs. Further development and testing are required to evaluate EBP attitudes, behaviors, and more recently articulated EBP skills.

Full-text

Available from: Terrence Shaneyfelt, Apr 20, 2015
ABSTRACT: An educational intervention was implemented at the University of Michigan starting in 2008, in which anesthesiology interns complete a dedicated month-long didactic rotation in evidence-based medicine (EBM) and research methodology. We sought to assess its utility. Scores on a validated EBM test before and after the rotation were compared and assessed for significance of improvement. A survey was also given to gauge satisfaction with the quality of the rotation and self-reported improvement in understanding of EBM topics. Fourteen consecutive interns completed the research rotation during the study period; 100% completed both the pre- and postrotation test. The mean pretest score was 7.78 ± 2.46 (median = 7.5, 0-15 scale, interquartile range 7.0-10.0) and the mean posttest score was 10.00 ± 2.35 (median = 9.5, interquartile range 8.0-12.3), a statistically significant increase (P = 0.011, Wilcoxon signed-rank test). All 14 interns "agreed" or "strongly agreed" that they would recommend the course to future interns and that the course increased their ability to critically review the literature. Our findings suggest that such a rotation can be an effective means of improving understanding of EBM topics and anesthesiology research.
    Anesthesiology Research and Practice 01/2015; 2015:1-4. DOI:10.1155/2015/623959
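The pre/post comparison in the study above used the Wilcoxon signed-rank test on paired test scores. A minimal sketch of that statistic is below; the paired scores are hypothetical (the abstract reports only summary statistics, not per-trainee data), and the function returns the classic W statistic rather than a P value.

```python
# Sketch of a Wilcoxon signed-rank comparison of paired pre/post scores.
# The score lists below are hypothetical illustrations, not study data.

def wilcoxon_signed_rank(pre, post):
    """Return W: the smaller of the positive and negative signed-rank sums."""
    # Differences; zero differences are conventionally discarded.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank the absolute differences, averaging ranks within tie groups.
    ordered = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ordered):
        j = i
        while (j + 1 < len(ordered)
               and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]])):
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical scores for 14 trainees on a 0-15 scale.
pre  = [7, 8, 5, 10, 7, 9, 6, 8, 7, 10, 6, 9, 8, 9]
post = [10, 7, 8, 12, 9, 11, 8, 10, 9, 13, 9, 11, 10, 12]
print(wilcoxon_signed_rank(pre, post))  # small W -> improvement dominates
```

In practice one would use a library routine (e.g. `scipy.stats.wilcoxon`), which also returns the P value the study reports.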
ABSTRACT: Background: A validated and reliable instrument was developed to assess knowledge, attitudes, and behaviours with respect to evidence-based practice (EBP-KABQ) in medical trainees, but it requires further adaptation and validation to be applied across different health professions. Methods: A modified 33-item evidence-based practice scale (EBP-KABQ) was developed to evaluate EBP perceptions and behaviors in clinicians. An international sample of 673 clinicians interested in the treatment of pain (mean age = 45 years, 48% occupational therapists/physical therapists, 25% with more than 5 years of clinical training) completed an online English version of the questionnaire and a demographics form. Scaling properties (internal consistency, floor/ceiling effects) and construct validity (association with EBP activities, comparator constructs) were examined. A confirmatory factor analysis was used to assess the 4-domain structure (EBP knowledge, attitudes, behavior, outcomes/decisions). Results: The EBP-KABQ scale demonstrated high internal consistency (Cronbach's alpha = 0.85), no evident floor/ceiling effects, and support for a priori construct validation hypotheses. A 4-factor structure provided the best fit statistics (CFI = 0.89, TLI = 0.86, RMSEA = 0.06). Conclusions: The EBP-KABQ scale demonstrates promising psychometric properties in this sample. Areas for improvement are described.
    BMC Medical Education 12/2014; 14(1):263. DOI:10.1186/s12909-014-0263-4 · 1.41 Impact Factor
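The internal-consistency figure reported above (Cronbach's alpha = 0.85) is a standard function of item variances. A minimal sketch of the computation follows; the 3-item, 5-respondent Likert matrix is made up purely for illustration and does not come from the study.

```python
# Cronbach's alpha from an item-by-respondent score matrix.
# The data below are hypothetical, not from the EBP-KABQ study.

def cronbach_alpha(items):
    """items: list of per-item score lists (rows = items, cols = respondents)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Hypothetical 3-item, 5-respondent Likert (1-5) responses.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Because the ratio of variances is scale-free, using population or sample variances consistently gives the same alpha; dedicated routines (e.g. in the `pingouin` package) add confidence intervals.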