Article

Clinical Skills Verification in General Psychiatry: Recommendations of the ABPN Task Force on Rater Training.

Academic Psychiatry (Impact Factor: 0.81). 09/2012; 36(5):363-8. DOI: 10.1176/appi.ap.10040061
Source: PubMed

ABSTRACT
OBJECTIVE: The American Board of Psychiatry and Neurology (ABPN) announced in 2007 that general psychiatry training programs must conduct Clinical Skills Verification (CSV), consisting of observed clinical interviews and case presentations during residency, as one requirement to establish graduates' eligibility to sit for the written certification examination. To facilitate implementation of these requirements, the ABPN convened a task force to prepare training materials for faculty and programs to guide them in the CSV process. This article reviews the specific requirements for the CSV experience within general residency programs and briefly describes the task force's recommendations for faculty training and program implementation.
METHODS: Materials prepared by the ABPN Task Force include background information on the intent of the observed interview, a literature review on assessment methods, aids to train faculty in direct observation of clinical work, directions for effective feedback, notes regarding special issues for cross-cultural trainees, clarification of performance standards, and recommendations for structuring and conducting the assessments.
RESULTS: Recommendations of the task force include the use of a variety of clinical settings for CSV assessments, flexibility in the duration of CSV interviews, use of formative and summative feedback after each CSV assessment, and frequent use of the CSV across all years of training. Formal faculty training is recommended to help establish performance parameters, increase interrater reliability, and improve the quality of feedback.
CONCLUSIONS: The implementation of the CSV process provides psychiatry training programs with an excellent opportunity to assess how interviewing skills are taught and evaluated. In the process, psychiatry educators have an opportunity to establish performance parameters that will guide the training of residents in patient interaction and evaluation.

  • ABSTRACT: In the setting of clinical medical education, feedback refers to information describing students' or house officers' performance in a given activity that is intended to guide their future performance in that same or in a related activity. It is a key step in the acquisition of clinical skills, yet feedback is often omitted or handled improperly in clinical training. This can result in important untoward consequences, some of which may extend beyond the training period. Once the nature of the feedback process is appreciated, however, especially the distinction between feedback and evaluation and the importance of focusing on the trainees' observable behaviors rather than on the trainees themselves, the educational benefit of feedback can be realized. This article presents guidelines for offering feedback that have been set forth in the literature of business administration, psychology, and education, adapted here for use by teachers and students of clinical medicine.
    JAMA: The Journal of the American Medical Association (Impact Factor: 30.39). 09/1983; 250(6):777-81. DOI: 10.1001/jama.250.6.777
  • ABSTRACT: The objectives of this study were to (1) establish the utility of an assessment tool for participants in a laparoscopic colectomy course and (2) determine the accuracy of technical skill self-assessment in this group. Twenty-two surgeons enrolled in a 2-day course participated. During the animal laboratory, each participant's operative performance was videotaped. Participants completed a global rating scale (GRS) instrument to self-assess their performances. By using the same GRS, 2 trained raters independently assessed each performance by videotape review. For the trained raters, the GRS showed excellent interrater reliability (r = .76, P < .001). There was no correlation between trained rater scores and self-assessment scores. Furthermore, the trained rater scores (means, 2.62 and 2.99) were significantly lower than the self-assessment scores (4.05, P < .001). Surgeons consistently overestimated their performance during a laparoscopic colectomy course as measured by a reliable GRS. This finding highlights the issue of credentialing and the importance of preceptorship for surgeons completing such courses.
    The American Journal of Surgery (Impact Factor: 2.41). 06/2006; 191(5):677-81. DOI: 10.1016/j.amjsurg.2006.01.041
  • ABSTRACT: This study assessed the reliability of surgical resident self-assessment in comparison with faculty and standardized patient (SP) assessments during a structured educational module focused on perioperative management of a simulated adverse event. Seven general surgery residents participated in this module. Residents were assessed during videotaped preoperative and postoperative SP encounters and when dissecting a tumor off a standardized inanimate vena cava model in a simulated operating room. Preoperative and postoperative assessments by SPs correlated significantly (P < .05) with faculty assessments (r = .75 and r = .79, respectively), but not with resident self-assessments. Coefficient alpha was greater than .70 for all assessments except resident preoperative self-assessments. Faculty and SP assessments can provide reliable data useful for formative feedback. Although resident self-assessment may be useful for the formative assessment of technical skills, results suggest that in the absence of training, residents are not reliable self-assessors of preoperative and postoperative interactions with SPs.
    The American Journal of Surgery (Impact Factor: 2.41). 01/2008; 195(1):1-4. DOI: 10.1016/j.amjsurg.2007.08.048
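
The reliability statistics reported in the two surgical studies above are straightforward to reproduce. What follows is a minimal sketch, with entirely hypothetical data rather than the studies' own, of the two analyses in the 2006 colectomy-course report: Pearson correlation between two trained raters' global rating scale (GRS) scores, and a paired comparison of trained-rater scores against self-assessment scores.

    # A minimal sketch (hypothetical data, not the studies' code) of
    # interrater reliability and rater-vs-self comparison on GRS scores.
    from scipy import stats

    # Hypothetical 1-5 GRS scores for ten course participants.
    rater_a = [2.5, 3.0, 2.0, 3.5, 2.5, 3.0, 2.0, 2.5, 3.5, 3.0]
    rater_b = [2.0, 3.5, 2.5, 3.0, 2.0, 3.5, 2.5, 3.0, 3.0, 2.5]
    self_scores = [4.0, 4.5, 3.5, 4.0, 4.5, 4.0, 3.5, 4.5, 4.0, 4.0]

    # Interrater reliability: Pearson r between the two trained raters.
    r, p = stats.pearsonr(rater_a, rater_b)
    print(f"interrater r = {r:.2f}, p = {p:.3f}")

    # Mean trained-rater score per participant vs. self-assessment.
    mean_trained = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
    t, p = stats.ttest_rel(mean_trained, self_scores)
    print(f"trained vs. self: t = {t:.2f}, p = {p:.3f}")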
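
The internal-consistency criterion in the 2008 study, coefficient (Cronbach's) alpha greater than .70, can be sketched the same way; again, the ratings below are hypothetical and the item count is assumed, not taken from the study.

    # A minimal sketch of coefficient (Cronbach's) alpha for a
    # multi-item rating scale; data are hypothetical.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = residents, columns = scale items."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # per-item variance
        total_var = items.sum(axis=1).var(ddof=1)  # variance of totals
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 1-5 ratings: seven residents, four scale items.
    ratings = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 5, 4],
        [3, 3, 3, 4],
        [2, 3, 2, 2],
        [4, 5, 4, 4],
        [3, 3, 4, 3],
    ])
    print(f"alpha = {cronbach_alpha(ratings):.2f}")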