ABSTRACT: Instruction in evidence-based medicine (EBM) has been widely incorporated into medical school curricula with little evidence of its effectiveness. Our goal was to create, implement, and validate a computer-based assessment tool that measured medical students' EBM skills.
As part of a required objective structured clinical examination, we developed a specific case scenario in which students (a) asked a structured clinical question using a standard framework, (b) generated effective MEDLINE search terms to answer a specific question, and (c) selected the most appropriate of 3 abstracts generated from a search, justifying which best applied to the patient scenario.
Among the 3 blinded raters, interrater reliability was very good, with 84%, 94%, and 96% agreement on the scoring for each component (κ = .64, .82, and .91, respectively). In addition, students found the station appropriately difficult for their level of training.
This computer-based tool appears to measure several EBM skills independently and combines simple administration and scoring. Its generalizability to other cases and settings requires further study.
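The κ values reported above are Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. As a minimal illustration (the ratings below are hypothetical, not the study's data), kappa for two raters scoring a pass/fail component can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from marginal frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over categories
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail scores from two raters on six students
scores_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
scores_b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(scores_a, scores_b), 2))  # prints 0.67
```

Note that kappa (here 0.67) is lower than the raw agreement (5/6 ≈ 83%), which is why the abstracts report both figures.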
Article · Feb 2006 · Teaching and Learning in Medicine
ABSTRACT: To create a feasible, valid, and reliable tool to measure third-year medical students' skills in evidence-based medicine (EBM).
EBM skills (asking clinical questions, finding appropriate medical information resources, and appraising and applying them to patients) involve higher-order critical thinking abilities and are essential to being a competent physician. Students at our institution must pass a required objective structured clinical examination (OSCE) at the end of their third year. As part of this exam, we developed a new 20-minute computer-based station to assess students' EBM skills. Using a specific case scenario, we asked the students to (1) ask a question using the population/intervention/comparison/outcome (PICO) framework; (2) generate appropriate search terms, given a specific question; and (3) select an appropriate abstract to answer a given question and state why two other abstracts were not appropriate. Prior to the assessment, we determined grading and passing criteria for each of the three components and for the station overall. Of the 140 students who completed the station, the percentages that passed the components were 71%, 81%, and 49%, respectively, with only 29% passing all three parts. Preliminary analysis of psychometric properties of the station shows very good to excellent interrater reliability, with 65%, 67%, and 94% agreement on the scoring for the components and kappas of .64, .82, and .94, respectively.
Although there are many curricula for teaching EBM concepts, there are few tools to measure whether students are competent in applying their EBM skills. Our pilot station appears to be an innovative and promising tool for measuring several EBM skills independently. Because it is computer-based, it is relatively simple to administer, grade, and evaluate. While preliminary data show good interrater reliability with our use of a single case, future work will include further testing of reliability and assessment of different types of cases. We will also use the results of this assessment to drive continuous improvement in our EBM curriculum. The students who completed this pilot station had not received an extensive formal EBM curriculum, whereas future groups will. We will also explore whether scores on our station correlate with those on other OSCE stations that assess critical thinking skills, or with a student's clinical grades or overall class standing. We hope to test these hypotheses: (1) skills used in EBM are useful and valid measures of critical thinking abilities in learners, and (2) tools such as ours will help to measure these essential competencies.
Article · Dec 2002 · Academic Medicine