Validation of an instrument to assess evidence-based practice knowledge, attitudes, access, and confidence in the dental environment

Educational and Faculty Development, Dental School, University of Texas Health Science Center at San Antonio, 7703 Floyd Curl Drive, San Antonio, TX 78229, USA.
Journal of Dental Education 02/2011; 75(2):131-44.
Source: PubMed


This article reports the validation of an assessment instrument designed to measure the outcomes of training in evidence-based practice (EBP) in the context of dentistry. Four EBP dimensions are measured by this instrument: 1) understanding of EBP concepts, 2) attitudes about EBP, 3) evidence-accessing methods, and 4) confidence in critical appraisal. The instrument, the Knowledge, Attitudes, Access, and Confidence Evaluation (KACE), has four scales with a total of thirty-five items: EBP knowledge (ten items), EBP attitudes (ten), accessing evidence (nine), and confidence (six). Four elements of validity were assessed: internal consistency of items within the KACE scales (the extent to which items within a scale measure the same dimension), discrimination (the capacity to detect differences between individuals with different training or experience), responsiveness (the capacity to detect the effects of education on trainees), and test-retest reliability. Internal consistency of the scales was assessed by analyzing responses of second-year dental students, dental residents, and dental faculty members using Cronbach's coefficient alpha, a statistical measure of reliability. Discriminative validity was assessed by comparing KACE scores across the three groups. Responsiveness was assessed by comparing pre- and post-training responses for dental students and residents. To measure test-retest reliability, the full KACE was completed twice by a class of freshman dental students seventeen days apart, and the knowledge scale was completed twice by sixteen faculty members fourteen days apart. Item-to-scale consistency ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitudes, 0.70 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For discrimination, ANOVA and post hoc testing by the Tukey-Kramer method revealed significant score differences among students, residents, and faculty members consistent with their levels of education and experience.
For responsiveness to training, dental students and residents demonstrated statistically significant changes, in desired directions, from pre- to post-test. For the student test-retest, Pearson correlations for KACE scales were as follows: knowledge 0.66, attitudes 0.66, accessing evidence 0.74, and confidence 0.76. For the knowledge scale test-retest by faculty members, the Pearson correlation was 0.79. The construct validity of the KACE is equivalent to that of instruments that assess similar EBP dimensions in medicine. Item consistency for the knowledge scale was more variable than for other KACE scales, a finding also reported for medically oriented EBP instruments. We conclude that the KACE has good discriminative validity, responsiveness to training effects, and test-retest reliability.

Available from: William D Hendricson

    ABSTRACT: Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. Conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, the aim of this statement is to provide guidance for purposeful classification and development of tools to assess EBP learning. This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools. The widespread adoption of EBP into professional education requires valid and reliable measures of learning. Limited tools exist with established psychometrics. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.
    BMC Medical Education 10/2011; 11(1):78. DOI:10.1186/1472-6920-11-78
    ABSTRACT: This article describes the evolution of thinking, primarily over the past fifteen years, within the academic dentistry community concerning teaching and learning strategies to facilitate students' acquisition of competence. Readers are encouraged to consider four issues. First, looking back to the time of the Institute of Medicine report Dental Education at the Crossroads: Challenges and Change fifteen years ago, in the mid-1990s, where did we think we would be now, in 2011, in regard to the structure of the predoctoral curriculum and use of specific educational methodologies, and to what extent have those predictions come true? The author's own crystal ball predictions from the 1990s are used to kick off a discussion of what connected and what did not among numerous advocated educational reforms, many of them transformative in nature. Second, what is the nature of the evidence supporting our ongoing search for educational best practices, and why are advocacy for educational best practices and prediction of down-the-road outcomes so treacherous? This section distinguishes types of evidence that provide limited guidance for dental educators from evidence that is more helpful for designing educational strategies that might make a difference in student learning, focusing on factors that provide a "perfect intersection" of student, teacher, educational method, and learning environment. Third, readers are asked to revisit four not-so-new teaching/learning methods that are still worthy of consideration in dental education in light of best evidence, upcoming events, and technology that has finally matched its potential. Fourth, a specific rate-limiting factor that hinders the best efforts of both teachers and students in virtually all U.S. dental schools is discussed, concluding with a plea to find a better way so that the good works of dental educators and their students can be more evident.
    Journal of Dental Education 01/2012; 76(1):118-41.
    Evidence-Based Dentistry 03/2012; 13(1):2-3. DOI:10.1038/sj.ebd.6400834