ABSTRACT: BACKGROUND: Testing of knowledge is an important component of a successful skills curriculum. Nonetheless, structured testing of basic procedure-relevant knowledge in the surgical domains is not routine practice. A regional needs assessment showed insufficient knowledge of basic laparoscopy among first-year residents in obstetrics and gynecology. This study therefore aimed to develop and validate a framework for a theoretical knowledge test, in the form of a multiple-choice test, covering basic theory related to laparoscopy. METHODS: The content of the multiple-choice test was determined through informal conversational interviews with experts in laparoscopy. The relevance of the test questions was then evaluated using the Delphi method, involving regional chief physicians. Construct validity was tested by comparing test results from three groups with expected differences in clinical competence and knowledge: senior medical students, first-year residents, and chief physicians. RESULTS: The four conversational interviews resulted in 47 test questions, which were narrowed down to 37 after two Delphi rounds involving 12 chief physicians. Significant differences were found between the test scores of the senior medical students (n = 14) and the first-year residents (n = 52) (median test scores, 18 vs. 24, respectively; p = 0.001), and between the first-year residents and the chief physicians (n = 12) (median test scores, 24 vs. 33, respectively; p = 0.001). Internal consistency (Cronbach's alpha) was 0.82. There was no evidence of differential item functioning between the three groups tested. CONCLUSIONS: A newly developed knowledge test in basic laparoscopy proved to have content and construct validity. The framework for developing and validating a theoretical test could potentially be applied to any topic that requires structured testing of knowledge.
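As a hedged illustration of the internal-consistency figure reported above, the sketch below shows one way Cronbach's alpha could be computed for a dichotomously scored 37-item test. The item-response matrix is simulated dummy data, not the study's data, and the group size and response model are assumptions for illustration only.

```python
# Minimal sketch: Cronbach's alpha for a dichotomously scored item matrix.
# All data below are simulated for illustration; nothing is from the study.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: examinees x items matrix of scored responses (e.g., 0/1)."""
    k = items.shape[1]                         # number of test items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total test scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(78, 1))                # latent ability per examinee (dummy)
difficulty = rng.normal(size=(1, 37))             # item difficulty (dummy)
prob = 1 / (1 + np.exp(-(ability - difficulty)))  # simple one-parameter logistic model
responses = (rng.random((78, 37)) < prob).astype(int)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```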
ABSTRACT: Several studies have found that virtual reality training shortens the learning curve and improves basic psychomotor skills in the operating room. Despite this, most surgical and gynecological departments encounter hurdles when implementing this form of training, mainly owing to a lack of knowledge about the time and human resources needed to train novice surgeons to an adequate level. The purpose of this trial is to investigate the impact of instructor feedback on time, repetitions, and self-perception when training complex operational tasks on a virtual reality simulator.
The study population consists of medical students in their fourth to sixth year without prior laparoscopic experience. The study is conducted in a skills laboratory at a centralized university hospital. Based on a sample size estimation, 98 participants will be randomized to an intervention group or a control group. Both groups must reach a predefined proficiency level when performing a laparoscopic salpingectomy on a surgical virtual reality simulator. The intervention group receives standardized instructor feedback of 10 to 12 minutes, a maximum of three times; the control group receives no instructor feedback. Both groups receive the automated feedback generated by the virtual reality simulator. The study follows the CONSORT Statement for randomized trials. The main outcome measures are the time and the number of repetitions needed to reach the predefined proficiency level on the simulator. Potential sex differences, computer gaming experience, and self-perception are also examined.
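The protocol does not state the assumptions behind its sample size estimation; the sketch below shows only a generic two-sample calculation for a continuous outcome (such as time to proficiency). The effect size, alpha, and power used here are illustrative assumptions, not the trial's actual parameters.

```python
# Minimal sketch of a standard two-sample sample-size calculation for a
# continuous outcome. Effect size, alpha, and power are illustrative only.
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per arm for a two-sided comparison of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

print(n_per_group(effect_size=0.6))  # about 44 participants per arm under these assumptions
```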
The findings will contribute to a better understanding of optimal training methods in surgical education.
BMC Medical Education 02/2012; 12:7.
ABSTRACT: It is known that structured assessment of an operation can provide trainees with useful knowledge and potentially shorten their learning curve. However, methods for objective assessment have not been widely adopted in the clinical setting, possibly because of a lack of expertise in using an assessment tool. The aim of the present study was to investigate whether a validated laparoscopic procedure-specific assessment tool could be used by doctors with different levels of experience.
The study was conducted as an observer-blinded, prospective cohort study. Three video recordings of a right-sided laparoscopic salpingectomy were distributed to ten chief physicians, eight residents (fourth-year trainees), and two expert assessors (all in gynecology) to be assessed using a validated procedure-specific assessment tool. The three salpingectomies were selected because they clearly represented different levels of operative performance: novice, intermediate, and expert. The two expert assessors, i.e., our gold standard, were familiar with the OSA-LS assessment scale, whereas the chief physicians and the residents were not. All participants were blinded to the fact that surgeons with different levels of experience had performed the salpingectomies.
No significant differences between the residents and chief physicians were observed in any of the three assessed operations: novice, p = 0.63; intermediate, p = 0.93; and expert, p = 0.93. The chief physicians and residents matched our gold standard in assessing the intermediate operation (p = 0.177), but not the novice operation (p = 0.005) or the expert operation (p = 0.001).
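The abstract does not specify which statistical test produced the p-values above; as a hedged illustration, the sketch below shows a nonparametric two-group comparison of assessment scores using a Mann-Whitney U test. The score lists are made-up dummy values, not the study's data.

```python
# Minimal sketch: comparing two assessor groups' scores with a two-sided
# Mann-Whitney U test. The scores below are dummy values for illustration.
from scipy.stats import mannwhitneyu

resident_scores = [22, 25, 24, 27, 23, 26, 25, 24]        # 8 residents (dummy)
chief_scores = [24, 26, 23, 25, 27, 24, 26, 25, 23, 26]   # 10 chief physicians (dummy)

stat, p_value = mannwhitneyu(resident_scores, chief_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```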
Residents and chief physicians generated similar performance scores when assessing operations using a laparoscopic procedure-specific assessment scale, and they could distinguish between the surgeons' performance levels. They matched the assessment scores of our expert assessors on the intermediate operation. We conclude that a procedure-specific assessment scale can be used by both residents and chief physicians when giving formative feedback.