Development of a Tool for Global Rating of Endoscopic Surgical Skills (GRESS) for Assessment of Otolaryngology Residents

Department of Otolaryngology, Head & Neck Surgery, College of Medicine, Umm-al-Qura University, Makkah, Saudi Arabia.
B-ENT (Impact Factor: 0.08). 11/2012; 8(3):191-5.
Source: PubMed


To develop a valid and reliable assessment tool for endoscopic sinus surgery (ESS).
Data were collected prospectively in an observational study through evaluations at two tertiary academic institutions: St. Paul's Sinus Centre, St. Paul's Hospital, Vancouver, British Columbia, Canada, and King Fahd Medical City, Riyadh, Saudi Arabia, from December 2006 to December 2009. A two-page evaluation form was developed in conjunction with the Objective Structured Assessment of Technical Skills (OSATS) evaluation form developed by Reznick et al. in Toronto to assess residents' surgical skills. A Likert scale (1-5, where 5 = excellent) was used for evaluations. The Global Rating of Endoscopic Surgical Skills (GRESS) evaluation instrument was designed with input from academic otolaryngologists, fellowship-trained rhinologists, and experts in medical education. The experts' comments were incorporated, establishing face and content validity. Residents at various levels of training were assessed objectively using this instrument. Internal consistency was evaluated using Cronbach's alpha; test-retest and inter-rater reliability were measured using intra-class correlation coefficients.
A total of 31 assessments were completed for 15 residents. GRESS showed high internal consistency (Cronbach's alpha = 0.99), test-retest reliability (ICC = 0.95, CI 0.83-0.98), and inter-rater reliability (ICC = 0.86, CI 0.31-0.98).
This pilot study demonstrated that GRESS is a valid and reliable tool for assessing operating room performance in ESS.
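The reliability statistics reported above can be made concrete with a short sketch. The function below computes Cronbach's alpha from a matrix of Likert-scale ratings using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). This is an illustrative example only; the ratings data and function are hypothetical and not taken from the study.

```python
# Illustrative sketch (not from the paper): computing Cronbach's alpha
# for Likert-scale item ratings, using only the Python standard library.

def cronbach_alpha(scores):
    """scores: one row per assessment, each row a list of item ratings (1-5)."""
    k = len(scores[0])  # number of items on the evaluation form

    def variance(xs):   # population variance; the alpha ratio is
        m = sum(xs) / len(xs)  # unchanged if sample variance is used instead
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: 4 assessments x 3 items, on a 1-5 Likert scale
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.98
```

Values near 1, such as the 0.99 reported for GRESS, indicate that the form's items rate the same underlying construct very consistently.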


Available from: Ameen Alherabi
  • ABSTRACT: Objective Structured Assessments of Technical Skills have been developed to measure the skill of surgical trainees. Our aim was to develop an Objective Structured Assessment of Technical Skills specifically for trainees learning robotic surgery. This is a multi-institutional study conducted in eight academic training programs. We created an assessment form to evaluate robotic surgical skill through five inanimate exercises. Gynecology, general surgery, and urology residents, fellows, and faculty completed five robotic exercises on a standard training model. Study sessions were recorded and randomly assigned to three blinded judges, who scored performance using the assessment form. Construct validity was evaluated by comparing scores between participants with different levels of surgical experience; inter-rater and intra-rater reliability were also assessed. We evaluated 83 residents, nine fellows, and 13 faculty, totaling 105 participants; 88 (84%) were from gynecology. Our assessment form demonstrated construct validity, with faculty and fellows performing significantly better than residents (mean scores 89±8 faculty, 74±17 fellows, 59±22 residents; P<.01). In addition, participants with more robotic console experience scored significantly higher than those with fewer prior console surgeries (P<.01). The robotic Objective Structured Assessment of Technical Skills demonstrated good inter-rater reliability across all five drills (mean Cronbach's α 0.79±0.02). Intra-rater reliability was also high (mean Spearman's correlation 0.91±0.11). We developed a valid and reliable assessment form for robotic surgical skill. When paired with standardized robotic skill drills, this form may be useful to distinguish between levels of trainee performance. LEVEL OF EVIDENCE: II.
    Article · May 2014 · Obstetrics and Gynecology