ABSTRACT: Preoperative handover of surgical trauma patients is a process that must be made as safe as possible. We sought to determine the vital clinical information to be transferred between patient care teams and to develop a standardized handover checklist.
We conducted standardized small-group interviews about trauma patient handover. Based on this information, we created a questionnaire to gather perspectives from all Canadian Orthopaedic Association (COA) members about which topics they felt would be most important on a handover checklist. We analyzed the responses to develop a standardized handover checklist.
Of the 1106 COA members, 247 responded to the questionnaire. The top 7 topics considered most important for achieving patient safety in the handover were comorbidities, diagnosis, readiness for the operating room, stability, associated injuries, history/mechanism of injury and outstanding issues. The expert recommendations were to complete handover the same way every day, to have all appropriate radiographs and laboratory work available, to allow adequate time, and to spend more time on patients with more severe illness.
Our main recommendations for safe handover are to use standardized checklists specific to the patient and site needs. We provide an example of a standardized checklist that should be used for preoperative handovers. To our knowledge, this is the first checklist for handover developed by a group of experts in orthopedic surgery, which is both manageable in length and simple to use.
Canadian journal of surgery. Journal canadien de chirurgie 02/2014; 57(1):8-14. · 1.63 Impact Factor
ABSTRACT: The use of multisource feedback (MSF) or 360-degree evaluation has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians.
The authors searched the EMBASE, PsycINFO, MEDLINE, PubMed, and CINAHL databases for peer-reviewed, English-language articles published from 1975 to January 2013. Studies were included if they met the following inclusion criteria: used one or more MSF instruments to assess physician performance in practice; reported psychometric evidence of the instrument(s) in the form of reliability, generalizability coefficients, and construct or criterion-related validity; and provided information regarding the administration or feasibility of the process in collecting the feedback data.
Of the 96 full-text articles assessed for eligibility, 43 articles were included. The use of MSF has been shown to be an effective method for providing feedback to physicians from a multitude of specialties about their clinical and nonclinical (i.e., professionalism, communication, interpersonal relationships, management) performance. In general, assessment of physician performance was based on the completion of the MSF instruments by 8 medical colleagues, 8 coworkers, and 25 patients to achieve adequate reliability and generalizability coefficients of α ≥ 0.90 and Ep ≥ 0.80, respectively.
The use of MSF employing medical colleagues, coworkers, and patients as a method to assess physicians in practice has been shown to have high reliability, validity, and feasibility.
Academic medicine: journal of the Association of American Medical Colleges 01/2014; · 2.34 Impact Factor
ABSTRACT: The purpose of this study was to conduct a meta-analysis on the construct and criterion validity of multi-source feedback (MSF) to assess physicians and surgeons in practice.
In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to the PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (e.g., effect size calculations) were reviewed by the other authors (KA, AD, CV).
Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranged from d=0.14 (95% confidence interval [CI] 0.40-0.69) to d=1.78 (95% CI 1.20-2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranged from d=0.23 (95% CI 0.13-0.33) to d=0.90 (95% CI 0.74-1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranged from d=0.50 (95% CI 0.47-0.52) to d=0.57 (95% CI 0.55-0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranged from d=1.28 (95% CI 1.16-1.41) to d=1.43 (95% CI 0.87-2.00).
The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures.
Advances in medical education and practice. 01/2014; 5:39-51.
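The random-effects pooling used in meta-analyses such as the one above can be sketched with a minimal DerSimonian-Laird estimator. This is an illustration only; the study effect sizes and sampling variances below are hypothetical, not values from the paper:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes
    (e.g. Cohen's d); returns the pooled estimate and its 95% CI."""
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect inverse-variance weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # random-effects weights incorporate tau^2
    w_star = [1.0 / (v + tau2) for v in variances]
    d_re = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

# three hypothetical studies: effect sizes and their sampling variances
d, ci = pool_random_effects([0.9, 1.3, 1.1], [0.04, 0.06, 0.05])
```

When the studies are homogeneous (Cochran's Q below its degrees of freedom), tau² truncates to zero and the estimate coincides with the fixed-effect result.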
ABSTRACT: Surgical simulators provide a safe environment in which to learn and practise psychomotor skills. A goal for these simulators is to achieve high levels of fidelity. The purpose of this study was to develop a reliable surgical simulator fidelity questionnaire and to assess whether a newly developed virtual haptic simulator for fixation of an ulna has levels of fidelity comparable to Sawbones.
We developed simulator fidelity questionnaires and performed a stratified randomized study with surgical trainees, who performed fixation of the ulna using both a virtual simulator and Sawbones and completed the fidelity questionnaires after each procedure.
Twenty-two trainees participated in the study. The reliability of the fidelity questionnaire for each separate domain (environment, equipment, psychological) was Cronbach α greater than 0.70, except for virtual environment. The Sawbones had significantly higher levels of fidelity than the virtual simulator (p < 0.001), with a large effect size difference (Cohen d > 1.3).
The newly developed fidelity questionnaire is a reliable tool that can potentially be used to determine the fidelity of other surgical simulators. Increasing the fidelity of this virtual simulator is required before its use as a training tool for surgical fixation. The virtual simulator brings with it the added benefits of repeated, independent safe use with immediate, objective feedback and the potential to alter the complexity of the skill.
Canadian journal of surgery. Journal canadien de chirurgie 08/2013; 56(4):E91-7. · 1.63 Impact Factor
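The Cronbach α reliability reported for the fidelity questionnaire can be computed from item-level responses. A minimal sketch follows; the five respondents' item scores are made up for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item columns
    (each column = one questionnaire item scored across respondents)."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    total_item_var = sum(var(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - total_item_var / var(totals))

# three 5-point items answered by five respondents (hypothetical data)
items = [[4, 5, 3, 5, 4],
         [4, 4, 3, 5, 4],
         [5, 5, 3, 4, 4]]
alpha = cronbach_alpha(items)  # about 0.84, above the 0.70 threshold
```

Alpha rises as the items covary (the total-score variance grows relative to the sum of item variances), which is why internally consistent subscales clear the 0.70 benchmark.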
ABSTRACT: The aim of the present study was to conduct a systematic literature review of multisource feedback (MSF) instruments and to summarize the evidence of feasibility, reliability, generalizability, validity and other psychometric characteristics of the instruments. Accordingly, we conducted a systematic literature review for English-language studies published from 1975 to 2012 using the following databases: MEDLINE, EMBASE, CINAHL, PubMed and PsycINFO. The following terms were used in the search: multisource feedback, 360-degree evaluation, and assessment of medical professionalism. Forty-eight studies conducted in Canada, the United States, the United Kingdom, the Netherlands, China and elsewhere met the inclusion criteria. The results indicate that MSF has adequate evidence of validity, reliability, and feasibility for providing health practitioners with quality improvement data (both formative and summative assessment) as part of an overall strategy of maintaining competence and certification. Professional psychology has not adopted MSF as a systematic competence-based method for evaluating, maintaining, and assuring competent practice of psychology and instead relies on self-assessment as the primary quality assurance approach for its public accountability. We make recommendations to adopt an MSF system of competence-based assessment of practicing psychologists by regulatory and licensing authorities in Canada and the United States.
Professional Psychology: Research and Practice. 08/2013; 44(4):193-207.
ABSTRACT: PURPOSE: Interprofessional simulation-based team training is strongly endorsed as a potential solution for improving teamwork in health care delivery. Unfortunately, there are few teamwork evaluation instruments. The present study developed and tested the psychometric characteristics of the newly developed KidSIM Team Performance Scale checklist. METHOD: A quasi-experimental research design engaging a convenience sample of 196 undergraduate medical, nursing, and respiratory therapy students was completed in the 2010-2011 academic year. Multidisciplinary student teams participated in a simulation-based curriculum that included the completion of two acute illness management scenarios, resulting in 282 independent reviews by evaluators from medicine, nursing, and respiratory therapy. The authors investigated the underlying factors of the performance checklist and examined the performance scores of an experimental and a control team-training-curriculum group. RESULTS: Participation in the supplemental team training curriculum was related to higher team performance scores (P < .001). All teams at Time 2 achieved higher scores than at Time 1 (P < .05). The reliability coefficient for the total performance scale was α = 0.90. Factor analysis supported a three-factor solution (accounting for 67.9% of the variance) with an emphasis on roles and responsibilities (five items) and communication (six items) subscale factors. CONCLUSIONS: When simulation is used in acute illness management training, the KidSIM Team Performance Scale provides reliable, valid score interpretation of undergraduates' team process based on communication effectiveness and identification of roles and responsibilities. Implementation of a supplementary team training curriculum significantly enhances students' performance in multidisciplinary simulation-based scenarios at the undergraduate level.
Academic medicine: journal of the Association of American Medical Colleges 05/2013; · 2.34 Impact Factor
ABSTRACT: Surgical trainees develop surgical skills using various techniques, with simulators providing a safe learning environment. Fracture fixation is the most common procedure in orthopaedic surgery, and residents may benefit from simulated fracture fixation. The performance of residents on a virtual simulator that allows them to practice the surgical fixation of fractures by providing a sense of touch (haptics) has not yet been compared with their performance using other methods of practicing fracture fixation, such as a Sawbones simulator model. The purpose of this study was to assess whether residents performed similarly on a newly developed virtual simulator compared with a Sawbones simulator fracture fixation model.
A stratified, randomized controlled study involving twenty-two orthopaedic surgery residents was performed. The residents were randomized to first perform surgical fixation of the ulna on either the virtual or the Sawbones simulator, after which they performed the same procedure on the other simulator. Their performance was evaluated by examiners experienced in fracture fixation who completed a task-specific checklist, global rating scale (GRS) form, and time-to-completion record for each participant on each simulator.
Both simulators distinguished between differing experience levels, demonstrating construct validity; for the Sawbones simulator, the Cohen d value (effect size) was >0.90, and for the virtual simulator, d was >1.10 (p < 0.05 for both). The participants achieved significantly better scores on the virtual simulator compared with the Sawbones simulator (p < 0.05) for all measures except time to completion. The GRS scores showed a high level of internal consistency (Cronbach α, >0.80). However, Pearson product-moment correlation analysis showed no significant correlations between the results on the two simulators; therefore, concurrent validity was not achieved.
The newly developed virtual ulnar surgical fixation simulator, which incorporates haptics, shows promise for helping surgical trainees learn and practice basic skills, but it did not attain the same standards as the current standard Sawbones simulator. The procedural measures used to assess resident performance demonstrated good reliability and validity, and both the Sawbones and the virtual simulator showed evidence of construct validity.
The Journal of Bone and Joint Surgery 05/2013; 95(9):e601-6. · 3.23 Impact Factor
ABSTRACT: PURPOSE: To conduct a meta-analysis of published studies to determine the construct and criterion validity of the mini-clinical evaluation exercise (mini-CEX) to measure clinical performance. METHOD: The authors included all peer-reviewed studies published from 1995 to 2012 that reported the relationship between participants' performance on the mini-CEX and on other standardized academic and clinical performance measures. Moderator variables and performance and standardized exam measures were extracted and reviewed independently using a standardized coding protocol. RESULTS: Performance measures from 11 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in (1) construct validity coefficients for the mini-CEX on the trainees' performance across different residency year levels ranging from d = 0.25 (95% confidence interval [CI]: 0.04-0.46) to d = 0.50 (95% CI: 0.31-0.70), and (2) concurrent validity coefficients for the mini-CEX based on personnel ratings ranging from d = 0.23 (95% CI: 0.04-0.50) to d = 0.50 (95% CI: 0.34-0.65). Also, a random-effects model of weighted correlation effect size differences (r) resulted in predictive validity coefficients for the mini-CEX on trainees' performance across different standardized measures ranging from r = 0.26 (95% CI: 0.16-0.35) to r = 0.85 (95% CI: 0.47-0.96). CONCLUSIONS: The construct and criterion validity of the mini-CEX was supported by small to large effect size differences based on measures between trainees' achievement and clinical skills performance, indicating that it is an important instrument for the direct observation of trainees' clinical performance.
Academic medicine: journal of the Association of American Medical Colleges 01/2013; · 2.34 Impact Factor
ABSTRACT: Background: There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. Objective: The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). Method: ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. Results: A combined, 24-item ITER yielded a five-factor solution measuring the CanMEDS roles as Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate and Manager subscales. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale and α = 0.97 overall. The combined, 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Conclusion: Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.
CJEM: Canadian journal of emergency medical care = JCMU: journal canadien de soins medicaux d'urgence 01/2013; 15:1-7. · 1.05 Impact Factor
ABSTRACT: INTRODUCTION: Existing attitude scales on interprofessional education (IPE) focus on students' attitudes toward the concepts of teamwork and opportunities for IPE but fail to examine student perceptions of the learning modality that also plays an important role in the teaching and learning process. The purpose of the present study was to test the psychometric characteristics of the KidSIM Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) questionnaire developed to measure student perceptions of and attitudes toward IPE, teamwork, and simulation as a learning modality. METHODS: A total of 196 medical, nursing, and respiratory therapy students received a 3-hour IPE curriculum module that focused on 2 simulation-based team training scenarios in emergency and intensive care unit settings. Each multiprofessional group of students completed the 30-item ATTITUDES questionnaire before participating in the IPE curriculum and the same questionnaire again as a posttest on completion of the high-fidelity simulation, team-based learning sessions. RESULTS: The internal reliability of the ATTITUDES questionnaire was α = 0.95. The factor analysis supports a 5-factor solution accounting for 61.6% of the variance: communication (8 items), relevance of IPE (7 items), relevance of simulation (5 items), roles and responsibilities (6 items), and situation awareness (4 items). Aggregated and profession-specific analysis of students' responses using paired sample t tests showed significant differences from the pretest to the posttest for all questionnaire items and subscale measures (P < 0.001). CONCLUSIONS: The KidSIM ATTITUDES questionnaire provides a reliable and construct-valid measure of student perceptions of and attitudes toward IPE, teamwork, and simulation as a learning modality.
Simulation in healthcare: journal of the Society for Simulation in Healthcare 08/2012; · 1.64 Impact Factor
ABSTRACT: There is increasing interest in using simulators for laparoscopic surgery training, and simulators have rapidly become an integral part of surgical education.
We searched MEDLINE, EMBASE, the Cochrane Library, and Google Scholar for randomized controlled studies that compared the use of different types of simulators. The inclusion criteria were peer-reviewed published randomized clinical trials that compared simulator-based versus standard apprenticeship surgical training of trainees with little or no prior laparoscopic experience. Of the 551 relevant studies found, 17 trials fulfilled all inclusion criteria. The effect sizes (ES) with 95% confidence intervals [CI] were calculated for multiple psychometric skill outcome measures.
Data were combined by means of both fixed- and random-effects models. Meta-analytic combined effect size estimates showed that novice students who trained on simulators were superior in their performance and skill scores (d = 1.98, 95% CI: 1.20-2.77; P < 0.01), were more careful in handling various body tissues (d = 1.08, 95% CI: 0.36-1.80; P < 0.01), and had higher accuracy scores in conducting laparoscopic tasks (d = 1.38, 95% CI: 0.30-2.47; P < 0.05).
Simulators have been shown to provide better laparoscopic surgery skills training for trainees than the traditional standard apprenticeship approach to skill development. Surgical residency programs are highly encouraged to adopt the use of simulators in teaching laparoscopic surgery skills to novice students.
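The d statistics underlying results like these are standardized mean differences. A minimal Cohen's d sketch, using hypothetical skill scores rather than any data from the trials above:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# hypothetical skill scores: simulator-trained vs apprenticeship group
d = cohens_d([78, 82, 85, 80, 84], [70, 74, 72, 76, 71])
```

By the usual convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, so the pooled estimates reported above all fall in the large range.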
ABSTRACT: Student evaluation of teaching is ubiquitous to teaching in colleges and universities around the world. Since the implementation of student evaluations in the 1970s in the US, considerable research has been devoted to their appropriate use as a means of judging the effectiveness of teaching. The present article aims to (1) examine the evidence for the reliability, validity, and utility of student ratings; (2) provide seven guidelines for ways to identify effective instruction, given that the purpose of student evaluation is to assess effective teaching; and (3) conclude with recommendations for the integration of student ratings into the continuous evaluation of veterinary medical education.
Journal of Veterinary Medical Education 01/2012; 39(1):71-8. · 0.65 Impact Factor
ABSTRACT: To identify and empirically investigate the dimensions of leadership in medical education and healthcare professions.
A population-based design with a focus group and a survey were used to identify the perceived competencies for effective leadership in medical education.
The focus group, consisting of five experts (all Masters of Medical Education) from three countries (Austria n=1; Germany n=2; Switzerland n=2), was conducted first; the survey was then sent to health professionals from medical schools and teaching hospitals in six countries (Austria, Canada, Germany, Switzerland, the UK and the USA).
The participants were educators, physicians, nurses and other health professionals who held academic positions in medical education. A total of 229 participants completed the survey: 135 (59.0%) women (mean age=50.3 years) and 94 (41.0%) men (mean age=51.0 years).
A 63-item survey measuring leadership competencies was developed and administered via electronic mail to participants.
Exploratory principal component analyses yielded five factors accounting for 51.2% of the variance: (1) social responsibility, (2) innovation, (3) self-management, (4) task management and (5) justice orientation. There were significant differences between physicians and other health professionals on some factors (Wilks' λ=0.93, p<0.01). Social responsibility was rated higher by other health professionals (M=71.09) than by physicians (M=67.12), as was innovation (health professionals M=80.83; physicians M=76.20) and justice orientation (health professionals M=21.27; physicians M=20.46).
The results of the principal component analyses support the theoretical meaningfulness of these factors, their coherence, internal consistency and parsimony in explaining the variance of the data. Although there are some between-group differences, the competencies appear to be stable and coherent.
BMJ Open 01/2012; 2(2):e000812. · 1.58 Impact Factor
ABSTRACT: Although there is no clear consensus about the process of screening for developmental dysplasia of the hip (DDH), there are six common risk factors associated with DDH in patients less than 6 months of age (breech presentation, sex, family history, first-born, side of hip, and mode of delivery).
A meta-analysis of published studies was conducted to identify the relative risk (RR) of the six commonly known risk factors. A total of 31 primary studies consisting of 20,196 DDH patients met the following inclusion criteria: (1) contained empirical data on at least one common risk factor, (2) were peer-reviewed from an English-language scientific journal, (3) included patients less than or equal to 6 months of age, and (4) identified method of diagnosis (e.g., ultrasound, radiographs or clinical examination).
Fixed-effect and random-effects models with 95% confidence intervals were calculated for each of the six risk factors. The reported relative risk (RR) for each factor in newborns was: breech presentation 3.75 (95% CI: 2.25-6.24), females 2.54 (95% CI: 2.11-3.05), left hip side 1.54 (95% CI: 1.25-1.90), first born 1.44 (95% CI: 1.12-1.86), and family history 1.39 (95% CI: 1.23-1.57). A non-significant RR value of 1.22 (95% CI: 0.46-3.23) was found for mode of delivery.
Results suggest that ultrasound and radiology screening methods be used to confirm DDH in newborns that present with one or a combination of the following common risk factors: breech presentation, female, left hip affected, first born and family history of DDH.
European journal of radiology 11/2011; 81(3):e344-51. · 2.65 Impact Factor
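A relative risk with its 95% confidence interval, of the kind pooled above, is computed from a 2x2 exposure-outcome table using the log method. A minimal sketch; the cell counts below are hypothetical, not drawn from the meta-analysis:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk from a 2x2 table with a 95% CI (log method).
    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# hypothetical 2x2 table: DDH in breech vs non-breech newborns
rr, ci = relative_risk(30, 970, 80, 8920)
```

A risk factor is significant at the 5% level when the CI excludes 1, which is why mode of delivery (CI 0.46-3.23) was reported as non-significant above.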
ABSTRACT: Breast cancer is the most common cancer diagnosed in women. The present study evaluated family physicians' (FPs) understanding of adjuvant hormonal therapies for early breast cancer. FPs were invited to attend teaching workshops on this topic, which utilized a pretest, didactic and interactive teaching, and posttest format. FPs (n = 23) showed an improvement (p < 0.001) from pretest to posttest score. It is clear that, with targeted teaching, FPs can quickly become more knowledgeable on the topic of hormonal therapies in breast cancer, with the potential of applying this information in their own practice.
Journal of Cancer Education 03/2010; 25(4):493-6. · 0.88 Impact Factor
ABSTRACT: In this study, the self-report Youth Resiliency: Assessing Developmental Strengths (YR:ADS) questionnaire is used with adolescents from seven junior and senior high schools (N = 2,991) to investigate the function of resiliency profiles as a model for understanding why adolescents engage in bullying and acts of aggression and how having these developmental strengths reduces victimization. In support of a protective-protective model of resiliency, the linear relationship with each behaviour indicator shows that the interactive risk and outcome relationship decreases with each strength or resiliency factor present. These results suggest that adolescents with positive situational and internal factors in their daily lives are inclined to lead prosocial or constructive lifestyles. Future research needs to shift from simply identifying protective factors to understanding how the development of resiliency processes allows some individuals to cope more effectively than others.
Canadian Journal of School Psychology 01/2010; 25(1):101-113.
ABSTRACT: Although the validity of students' ratings of instruction has been documented, several student and course characteristics may be related to the ratings students give their instructors.
The purpose of this study was to examine student ratings obtained from the Universal Student Ratings of Instruction (USRI) instrument. These responses were compared to various student characteristics. Also, teaching characteristics that were most closely associated with the ratings were determined.
A total of 1738 USRI forms were completed by graduate students enrolled in medical science courses from 1999 to 2006 in the Faculty of Medicine at a Canadian university.
Between-group comparisons showed that negative student perceptions about the course (i.e., not having the freedom to select it), perceiving the course workload as high, and holding low grade expectations were related to negative student ratings of overall quality of instruction. In terms of the student and teaching characteristics, organization of course material and perceptions of whether students felt they learned a lot in the course were most closely related to global ratings of instructional quality.
Implications for teaching focus on improving the organization and delivery of course content that meets the learning objectives of graduate students in medical sciences.
Medical Teacher 01/2010; 32(4):327-32. · 1.82 Impact Factor
ABSTRACT: Background: The Script Concordance (SC) approach was used as an alternative test format to measure the presence of knowledge organization reflected in one's clinical reasoning skills (i.e., diagnostic, investigation and treatment knowledge).
ABSTRACT: The assessment of ethical problem solving in medicine has been controversial and challenging. The purposes of this study were: (i) to create a new instrument to measure doctors' decisions on and reasoning approach towards resolving ethical problems; (ii) to evaluate the scores generated by the new instrument for their reliability and validity, and (iii) to compare doctors' ethical reasoning abilities between countries and among medical students, residents and experts.
This study used 15 clinical vignettes and the think-aloud method to identify the processes and components involved in ethical problem solving. Subjects included volunteer ethics experts, postgraduate Year 2 residents and pre-clerkship medical students. The interview data were coded using the instruments of the decision score and Ethical Reasoning Inventory (ERI). The ERI assessed the quality of ethical reasoning for a particular case (Part I) and for an individual globally across all the vignettes (Part II).
There were 17 Canadian and 32 Taiwanese subjects. Based on the Canadian standard, the decision scores of Taiwanese and Canadian subjects differed significantly, but did not discriminate among the three levels of expertise. Scores on the ERI Parts I and II, which reflect doctors' reasoning quality, differed between countries and among different levels of expertise in Taiwan, providing evidence of construct validity. In addition, experts had a more organised knowledge structure and considered more relevant variables in the process of arriving at ethical decisions than did residents or students. The reliability of ERI scores was 0.70-0.99 on Part I and 0.75-0.80 on Part II.
Expertise in solving ethical problems could not be differentiated by the decisions made, but could be differentiated according to the reasoning used to make those decisions. The difference between Taiwanese and Canadian experts suggests that cultural considerations come into play in the decisions that are made in the course of providing humane care to patients.
Medical Education 12/2009; 43(12):1188-97. · 3.55 Impact Factor