Validation of an instrument to assess evidence-based practice knowledge, attitudes, access, and confidence in the dental environment

Educational and Faculty Development, Dental School, University of Texas Health Science Center at San Antonio, 7703 Floyd Curl Drive, San Antonio, TX 78229, USA.
Journal of Dental Education. 02/2011; 75(2):131-44.
Source: PubMed


This article reports the validation of an assessment instrument designed to measure the outcomes of training in evidence-based practice (EBP) in the context of dentistry. Four EBP dimensions are measured by this instrument: 1) understanding of EBP concepts, 2) attitudes about EBP, 3) evidence-accessing methods, and 4) confidence in critical appraisal. The instrument, the Knowledge, Attitudes, Access, and Confidence Evaluation (KACE), has four scales, with a total of thirty-five items: EBP knowledge (ten items), EBP attitudes (ten), accessing evidence (nine), and confidence (six). Four elements of validity were assessed: consistency of items within the KACE scales (the extent to which items within a scale measure the same dimension), discrimination (the capacity to detect differences between individuals with different training or experience), responsiveness (the capacity to detect the effects of education on trainees), and test-retest reliability. Internal consistency of the scales was assessed by analyzing responses of second-year dental students, dental residents, and dental faculty members using the Cronbach coefficient alpha, a statistical measure of reliability. Discriminative validity was assessed by comparing KACE scores for the three groups. Responsiveness was assessed by comparing pre- and post-training responses for dental students and residents. To measure test-retest reliability, the full KACE was completed twice by a class of freshman dental students seventeen days apart, and the knowledge scale was completed twice by sixteen faculty members fourteen days apart. Item-to-scale consistency ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitudes, 0.70 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For discrimination, ANOVA and post hoc testing by the Tukey-Kramer method revealed significant score differences among students, residents, and faculty members, consistent with their levels of education and experience.
For responsiveness to training, dental students and residents demonstrated statistically significant changes, in desired directions, from pre- to post-test. For the student test-retest, Pearson correlations for KACE scales were as follows: knowledge 0.66, attitudes 0.66, accessing evidence 0.74, and confidence 0.76. For the knowledge scale test-retest by faculty members, the Pearson correlation was 0.79. The construct validity of the KACE is equivalent to that of instruments that assess similar EBP dimensions in medicine. Item consistency for the knowledge scale was more variable than for other KACE scales, a finding also reported for medically oriented EBP instruments. We conclude that the KACE has good discriminative validity, responsiveness to training effects, and test-retest reliability.
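The reliability statistics reported above (Cronbach's coefficient alpha for internal consistency, Pearson correlation for test-retest reliability) are standard psychometric computations. A minimal sketch of both, using NumPy and a hypothetical respondents-by-items score matrix (the toy data below is illustrative only, not the KACE dataset):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents x k_items) matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation, e.g. between test and retest scale scores."""
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical data: 5 respondents x 4 Likert-type items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
], dtype=float)

alpha = cronbach_alpha(scores)  # high alpha: items move together

# Simulated retest: same totals plus small random-looking shifts
retest_r = pearson_r(scores.sum(axis=1),
                     scores.sum(axis=1) + np.array([0.5, -0.5, 0.0, 0.5, -0.5]))
```

Consistent items produce an alpha near 1.0, while stable scores across two administrations produce a test-retest Pearson r near 1.0; values like the 0.66-0.76 reported in the abstract indicate moderate-to-good stability.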



Available from: William D Hendricson, Oct 11, 2015
  • ABSTRACT: Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. Conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, this statement aims to provide guidance for the purposeful classification and development of tools to assess EBP learning. This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments, and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools. The widespread adoption of EBP into professional education requires valid and reliable measures of learning, yet few tools with established psychometrics exist. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.
    BMC Medical Education 10/2011; 11(1):78. DOI:10.1186/1472-6920-11-78
  • Evidence-Based Dentistry 03/2012; 13(1):2-3. DOI:10.1038/sj.ebd.6400834
  • ABSTRACT: The purposes of this study were to describe the questionnaire development process for evaluating elements of an evidence-based practice (EBP) curriculum in a chiropractic program and to report on initial reliability and validity testing for the EBP knowledge examination component of the questionnaire. The EBP knowledge test was evaluated with students enrolled in the doctor of chiropractic program at the University of Western States. The initial version was tested with a sample of 374 students and a revised version with a sample of 196 students. Item performance and reliability were assessed using item difficulty, item discrimination, and internal consistency. An expert panel assessed face and content validity. The first version of the knowledge examination demonstrated low internal consistency (Kuder-Richardson 20 = 0.55), and a few items had poor item difficulty and discrimination. This resulted in an expansion of the examination from 20 to 40 items, as well as a revision of the poorly performing items from the initial version. The Kuder-Richardson 20 of the second version was 0.68; 32 items had item difficulties between 0.20 and 0.80, and 26 items had item discrimination values of 0.20 or greater. A questionnaire for evaluating a revised EBP-integrated curriculum was developed and evaluated. Psychometric testing of the EBP knowledge component provided some initial evidence of acceptable reliability and validity.
    Journal of Manipulative and Physiological Therapeutics 11/2012; 35(9):692-700. DOI:10.1016/j.jmpt.2012.10.011
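The Kuder-Richardson 20 coefficient and the item difficulty/discrimination indices described in the abstract above apply to dichotomously scored (0/1) knowledge items. A minimal sketch, assuming an examinees-by-items 0/1 answer matrix (function names and data are illustrative, not from the study):

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 for an (n_examinees x k_items) 0/1 matrix.

    Uses population variance (ddof=0) so a perfectly consistent test scores 1.0.
    """
    k = responses.shape[1]
    p = responses.mean(axis=0)               # proportion correct per item
    total_var = responses.sum(axis=1).var()  # variance of total test scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Proportion of examinees answering each item correctly (0.20-0.80 is the
    acceptable range cited in the abstract above)."""
    return responses.mean(axis=0)

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item vs. total score excluding it."""
    totals = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])
```

Items whose discrimination falls below roughly 0.20, or whose difficulty lies outside 0.20-0.80, are the ones flagged for revision in studies like the one above.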