Article

Preparing first-year radiology residents and assessing their readiness for on-call responsibilities: results over 5 years.

Department of Radiology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA 02215, USA.
American Journal of Roentgenology (Impact Factor: 2.74). 02/2009; 192(2):539-44. DOI: 10.2214/AJR.08.1631
Source: PubMed

ABSTRACT: The objective of our study was to evaluate the preparedness of postgraduate year (PGY)-2 residents for independent call responsibilities, and the impact of the radiology residency training program on call preparedness, using an objective DICOM-based simulation module over a 5-year period.
A month-long emergency radiology lecture series, conducted over 5 consecutive years, was designed and given to radiology residents at all levels. A DICOM-based, interactive, computer-based testing module built from actual emergency department cases was developed and administered at the end of the lecture series. First-year and upper-level residents' test scores were compared using Student's t test, generalized estimating equations, and individual fixed effects to determine PGY-2 residents' preparedness before taking call and the effectiveness of the simulation module in assessing call preparedness. Residents' scores on the simulation module were also plotted as a function of progression through the residency program to evaluate the impact of the training program on call preparedness.
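To make the repeated-measures analysis concrete, the following is a minimal sketch in Python of a GEE model of the kind named above, with scores clustered by resident. The data and column names (resident_id, pgy, score) are hypothetical, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    # Hypothetical long-format data: each of 12 residents is tested in two
    # consecutive training years, so scores are clustered within resident.
    start_year = rng.integers(2, 5, size=12)          # first exam at PGY-2..4
    pgy = np.concatenate([[y, y + 1] for y in start_year])
    resident_id = np.repeat(np.arange(12), 2)
    score = 60 + 6 * pgy + rng.normal(0, 8, size=24)  # scores rise with PGY
    df = pd.DataFrame({"resident_id": resident_id, "pgy": pgy, "score": score})

    # An exchangeable working correlation accounts for repeated measurements
    # on the same resident across examination years.
    model = smf.gee("score ~ pgy", groups="resident_id", data=df,
                    family=sm.families.Gaussian(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())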
Over 5 years, 45 PGY-2, 34 PGY-3, 32 PGY-4, and 35 PGY-5 residents attended the lecture series and completed the computer-based testing module. Of the total points possible, PGY-2 residents scored an average of 71% ± 15% (SD), PGY-3 residents 79% ± 11%, PGY-4 residents 84% ± 10%, and PGY-5 residents 86% ± 11%. A statistically significant (p < 0.05) difference in scores on the simulation module was identified between the PGY-2 residents and each upper-level class, both over the 5-year period and during 4 of the 5 examination years analyzed separately. A trend toward higher average scores was identified for each cohort of residents as they progressed through residency training.
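As an illustration (not the authors' code), the reported summary statistics alone are enough to reproduce a Student's t test for the largest gap, PGY-2 versus PGY-5:

    from scipy.stats import ttest_ind_from_stats

    # Reported above: PGY-2 mean 71%, SD 15%, n = 45; PGY-5 mean 86%, SD 11%, n = 35.
    t_stat, p_value = ttest_ind_from_stats(mean1=71, std1=15, nobs1=45,
                                           mean2=86, std2=11, nobs2=35,
                                           equal_var=True)  # Student's t test
    print(f"t = {t_stat:.2f}, p = {p_value:.2g}")  # p falls far below 0.05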
Over a 5-year period, first-year radiology residents scored significantly lower than upper-level colleagues on an emergency radiology simulation module, suggesting a significant improvement in the ability of residents to interpret typical on-call imaging studies after the PGY-2 year.

  • ABSTRACT: There is no standardized curriculum currently available at most institutions for establishing procedural competency in trainees performing cervicocerebral angiography. The purpose of this study was to evaluate a simple learning program to supplement the teaching of basic cervicocerebral angiography. An 11-session interactive curriculum covering anatomic, clinical, and radiographic topics was implemented for the novice cervicocerebral angiographer. The target learner was the neuroradiology fellow. Data on fellows' comfort level with topics relating to cervicocerebral angiography were gathered using a 5-point Likert scale. Improvement in scores on knowledge-based questions after completion of the curriculum was calculated (McNemar test; a sketch of this test appears after this list). Trainee-perceived utility of the program was also recorded using a 5-point Likert scale. Focus sessions were held at the completion of the curriculum to gather participants' feedback on the strengths and weaknesses of the program. Ten subjects were enrolled in this pilot study over 3 years. Topics for which participants reported a poor initial comfort level (4 or higher) included selection of injection rates and volumes and reformation of reverse-curve catheters. Trainees demonstrated a statistically significant improvement of 29.3 percentage points in correct response rate (from 49.4% to 78.7%, P < .0001). The average perceived utility rating was 1.5 (1 = most useful, 5 = least useful). This simple learning program was a useful adjunct to the training of fellows in diagnostic cervicocerebral angiography, resulting in quantitative improvements in knowledge.
    American Journal of Neuroradiology (Impact Factor: 3.17). 01/2012; 33(6):1041-5.
  • ABSTRACT: Although the apprenticeship model of medical training has been in use for centuries, there are several problems with its use. The fundamental ethical principle of nonmaleficence requires that no preventable harm come to patients involved in the training process. In addition, changing medical practice patterns, with shorter hospital stays and duty-hour restrictions, are making it difficult for trainees to see enough patients to prepare them for the many scenarios they may face in practice. Despite these limitations, the apprenticeship model cannot be completely rejected, because it is essential for trainees to perfect their technique by caring for real patients under the guidance of experienced practitioners. Simulation-based training allows novices to learn from their mistakes in a safe environment and in accordance with the principles of deliberate practice; simulation can thus serve as a bridge from the novice state, in which trainees have a higher risk of causing harm, to a more experienced state, in which they are more likely to do what patients need.
    Journal of the American College of Radiology: JACR. 06/2013.
  • ABSTRACT: Drawing upon an integrated model proposed by Bennett and Bejar (1998), this review examined how 66 computer-based science assessments (CBSAs) in basic science and medicine took advantage of advanced technologies. The model regarded a CBSA as an integrated system comprising several assessment components (e.g., assessment purpose, measured construct, test and task design, examinee interface, and scoring procedure) and emphasized the interplay among these components. Accordingly, this study systematically analyzed the presentation of items with interactive multimedia, the constructs measured, the response formats in formative and summative assessments, the scoring procedures, the adaptive test activities administered using algorithms from item response theory (IRT) and rules beyond IRT (a sketch of a simple IRT item model appears after this list), and the strategies for automatically providing informative hints and feedback in the CBSAs. Our analysis revealed that although only 19 of the 66 assessments took advantage of dynamic and interactive media for item presentation, CBSAs with these media could measure integrated understanding of science phenomena and complex problem-solving skills. However, we also found that limitations in automated scoring may lead to infrequent automated provision of hints and feedback for open-ended and extended responses. These findings suggest the interrelatedness of the assessment components, and we therefore argue that designers should repeatedly consider the relationships between components of CBSAs to ensure the validity of the assessments. Finally, we identify issues for future research in computer-based assessment.
    Computers & Education (Impact Factor: 2.63). 07/2013; 68:388-403.
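
For the cervicocerebral angiography curriculum above, the McNemar test compares paired pre- and post-curriculum answers to the same knowledge-based questions. A minimal sketch follows; the 2 × 2 counts are invented for illustration, since the study's item-level data are not given.

    from statsmodels.stats.contingency_tables import mcnemar

    # Illustrative paired responses: rows = pre-curriculum (correct, incorrect),
    # columns = post-curriculum (correct, incorrect). Only the discordant
    # off-diagonal cells drive the test statistic.
    table = [[40, 5],
             [29, 13]]
    result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
    print(f"statistic = {result.statistic:.0f}, p = {result.pvalue:.4g}")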
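For the CBSA review above, which discusses IRT-driven adaptive testing, here is a generic sketch of the two-parameter-logistic (2PL) item model with maximum-information item selection. The item parameters are invented, and this is one common adaptive-testing approach rather than anything specific to the reviewed assessments.

    import numpy as np

    def p_correct(theta, a, b):
        """Probability of a correct response under the 2PL IRT model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of an item at ability theta; adaptive tests
        typically administer the most informative remaining item next."""
        p = p_correct(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    # Invented item bank: (discrimination a, difficulty b) per item.
    items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]
    theta = 0.3  # current ability estimate for the examinee
    next_item = max(items, key=lambda ab: item_information(theta, *ab))
    print("next item (a, b):", next_item)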