Article

Faculty Development for Simulation Programs: Five Issues for the Future of Debriefing Training

Authors:
  • Alberta Children's Hospital, University of Calgary

Abstract

Statement: Debriefing is widely recognized as a critically important element of simulation-based education. Simulation educators obtain and/or seek debriefing training from various sources, including workshops at conferences, simulation educator courses, formal fellowships in debriefing, or through advanced degrees. Although there are many options available for debriefing training, little is known about how faculty development opportunities should be structured to maintain and enhance the quality of debriefing within simulation programs. In this article, we discuss 5 key issues to help shape the future of debriefing training for simulation educators, specifically the following: (1) Are we teaching the appropriate debriefing methods? (2) Are we using the appropriate methods to teach debriefing skills? (3) How can we best assess debriefing effectiveness? (4) How can peer feedback of debriefing be used to improve debriefing quality within programs? (5) How can we individualize debriefing training opportunities to the learning needs of our educators?


... The use of simulation-based learning (SBL) for achieving various learning goals is spreading around the world. In countries with a shorter tradition for SBL, many instructor courses for facilitators and simulation faculty are conducted by groups from countries with longstanding practice of SBL, for example, many European countries, North America and Australia [1,2]. The essential role of facilitators in participants' learning in SBL, especially during debriefing, is recognised across settings [1][2][3][4][5][6][7]. ...
... In countries with a shorter tradition for SBL, many instructor courses for facilitators and simulation faculty are conducted by groups from countries with longstanding practice of SBL, for example, many European countries, North America and Australia [1,2]. The essential role of facilitators in participants' learning in SBL, especially during debriefing, is recognised across settings [1][2][3][4][5][6][7]. The Society of Simulation in Healthcare defines a facilitator as an "individual that helps to bring about an outcome (such as learning, productivity, or communication) by providing indirect or unobtrusive assistance, guidance, or supervision" [8]. ...
... The International Nursing Association for Clinical Simulation and Learning (INACSL) describes standards for the professional integrity of simulation facilitators and uses four criteria for the description: "foster and role model attributes of professional integrity at all times", "follow standards of practice, guidelines, principles, and ethics of one's profession", "create and maintain a safe learning environment", and "require confidentiality of the performances and scenario content based on institutional policy and procedures" [27]. Like these frameworks, our study emphasised the complex nature of the role as simulation facilitator [1,[26][27][28]. ...
Article
Full-text available
Background: Simulation-based learning (SBL), used for achieving various learning goals, is spreading around the world. However, it remains an open question to what extent SBL needs to be adapted to local cultures. This study aimed to explore how various stakeholder groups perceive what constitutes a competent simulation facilitator across three different countries. Methods: We conducted an interview study with learners, facilitators, and facilitator trainers. Semi-structured interviews with 75 participants underwent content analysis. Participants were recruited from Denmark, Korea, and Australia. Interviews focused on characteristics of simulation faculty, as well as educational behaviours. Interviews were audio-recorded, translated to English, transcribed, and content analysed by inductively developing codes using the Nvivo software. In the first coding round, each interview was treated separately. In the analysis round, the individual codes between countries and stakeholder groups were compared to identify similarities and differences. Results: Our study shows high demands for the simulation facilitator role. A competent simulation facilitator should possess the following characteristics: (1) subject matter expertise, (2) personal approach and traits, (3) self-awareness and reflection, and (4) communication skills. Educational behaviours comprised (1) supporting a safe learning environment, (2) working goal-oriented with the course, (3) engaging before the course with preparation, (4) leading scenarios, and (5) facilitating debriefings. Comparative analysis showed similar wishes towards simulation facilitators from the different stakeholders in different countries, though the same terms might mean different details in the various settings. Conclusions: These findings offer guidance for learning needs analysis and the establishment of faculty development programmes.
The study also shows that the personal characteristics are an important aspect of the facilitator role above and beyond displaying educational behaviours.
... Faculty development has been identified as a crucial component of any system to assist staff members in preparing for the challenges and everchanging responsibilities (Cheng, Grant, et al., 2015). From the literature, faculty development can be defined as a 'range of activities that institutions use to renew or assist faculty in their roles and can include workshops, seminar series, courses, fellowships, and individualized feedback' (Cheng, Grant, et al., 2015;Leslie et al., 2013, p.7). ...
... The literature surrounding faculty development for implementing simulation programmes has been very sparse (Cheng, Grant, et al., 2015; Shaddeau & Deering, 2016); however, from the limited literature on the topic, it is clear that successful implementation requires multiple factors and stakeholder involvement (Haji et al., 2014). ...
Thesis
Full-text available
The thesis reports on the role of medical simulation in developing undergraduate acute care clinical skills competencies in South Africa. The study aim was to explore the role, including the limits and possibilities, of medical simulation as a pedagogical method in an undergraduate acute care clinical skills curriculum within a South African tertiary education environment. The study consisted of three phases. During phase one, I conducted a modified Delphi study to identify the acute care clinical skills competencies undergraduate medical students need to acquire to prepare them, in the role as newly qualified clinicians, for managing acute care cases within a South African in-hospital environment. Phase two explored what acute care clinical competencies would lend themselves to a medical simulation modality within a South African tertiary education environment by conducting a qualitative focus group discussion with curriculum experts. Phase three explored the role, including the limits and possibilities, of medical simulation as an educational modality in developing an acute care clinical skills curriculum within a South African tertiary education environment through semi-structured interviews with simulation experts within South Africa. The findings of the phase one Delphi study contributed to developing a comprehensive list of undergraduate acute care clinical skills competencies required for the South African environment. The findings of the phase two focus group discussion challenged how medical simulation was being implemented in South Africa by proposing a framework within which medical simulation pedagogy can be used to develop competencies under the following themes, namely, foundational competencies, contextually standardised competencies, assessments, and teamwork.
The third phase of semi-structured interviews with South African simulation experts sought to explore the role, limits, and possibilities of simulation within South Africa as a resource-constrained environment. A major finding was the value of conceptualising medical simulation as a pedagogy rather than a technology. The social and cognitive constructivist theories, and key related concepts informing this study, supported this new way of framing medical simulation. A further key finding in this phase was the importance of staff development in simulation pedagogy and less emphasis on the acquisition of expensive equipment in resource-constrained environments. The implications of adopting this view have the potential for an earlier introduction and smoother transition of medical students into the clinical environment, which may lead to them being better prepared for clinical practice, with resultant reduced morbidity and improved patient safety.
... As a result, they are useful for illustrating concepts to novices and providing a shared mental model of what a good debriefing looks like. However, they are not easily integrated into debriefer feedback, mentoring or coaching [11]. While these tools appear to be widely adopted in the training of debriefers, validation studies were limited to analysis of delayed reviews of recorded debriefings [8][9][10]12]. ...
... Current faculty development programmes often use the tools listed above as an aid to achieve improved debriefings [11]. In many programmes, feedback to new debriefers follows direct observation (or video review) by more experienced colleagues. ...
... In the context of the current remote learning climate, a recent study recognised that current methods of faculty development lack a structured approach [30]. We agree that structure is clearly an important factor that faculty development programmes might lack [11]. The quantitative approaches described in our work may assist with providing this structure at the point of care by allocating our attention to observing debriefings in a focused manner. ...
Article
Full-text available
Background Debriefing is an essential skill for simulation educators and feedback for debriefers is recognised as important in progression to mastery. Existing assessment tools, such as the Debriefing Assessment for Simulation in Healthcare (DASH), may assist in rating performance but their utility is limited by subjectivity and complexity. Use of quantitative data measurements for feedback has been shown to improve performance of clinicians but has not been studied as a focus for debriefer feedback. Methods A multi-centre sample of interdisciplinary debriefings was observed. Total debriefing time, length of individual contributions and demographics were recorded. DASH scores from simulation participants, debriefers and supervising faculty were collected after each event. Conversational diagrams were drawn in real-time by supervising faculty using an approach described by Dieckmann. For each debriefing, the data points listed above were compiled on a single page and then used as a focus for feedback to the debriefer. Results Twelve debriefings were included (µ = 6.5 simulation participants per event). Debriefers receiving feedback from supervising faculty were physicians or nurses with a range of experience (n = 7). In 9/12 cases the ratio of debriefer to simulation participant contribution length was ≥ 1:1. The diagrams for these debriefings typically resembled a fan shape. Debriefings (n = 3) with a ratio < 1:1 received higher DASH ratings compared with the ≥ 1:1 group (p = 0.038). These debriefings generated star-shaped diagrams. Debriefer self-rated DASH scores (µ = 5.08/7.0) were lower than simulation participant scores (µ = 6.50/7.0). The differences reached statistical significance for all 6 DASH elements. Debriefers evaluated the ‘usefulness’ of feedback and rated it ‘highly’ (µ = 4.6/5). Conclusion Basic quantitative data measures collected during debriefings may represent a useful focus for immediate debriefer feedback in a healthcare simulation setting.
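The contribution-length ratio at the heart of these results is simple arithmetic over timed speaking turns. As an illustrative sketch only (the data shape, role labels, and function name are assumptions, not the study's actual instrument):

```python
# Hypothetical record of one debriefing: (speaker role, seconds spoken).
# The roles and durations below are invented for illustration.
turns = [
    ("debriefer", 40), ("participant", 25), ("debriefer", 30),
    ("participant", 20), ("participant", 15),
]

def contribution_ratio(turns):
    """Ratio of total debriefer talk time to total participant talk time."""
    debriefer = sum(sec for role, sec in turns if role == "debriefer")
    participant = sum(sec for role, sec in turns if role == "participant")
    return debriefer / participant

# A ratio >= 1.0 corresponds to the fan-shaped, debriefer-dominated pattern
# described in the study; a ratio < 1.0 to the star-shaped pattern.
print(contribution_ratio(turns))  # 70/60 ≈ 1.17
```

In the study, debriefings with a ratio below 1:1 (participants talking more than the debriefer) received the higher DASH ratings.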
... Immediately after the simulation activity, various aspects can be explored with learners during the debriefing or feedback session which is an important phase that should not be dissociated from any SBE activity [13,20,21]. Although under-reported in the simulation literature, the learners' potential feeling of deception is something they may indirectly report during the reaction phase of the debriefing. ...
... At the very latest, if the aspect of deception does not emerge during the debriefing reaction phase, it should be discussed during the analytical phase [13] in a very tactful manner to ensure their psychological safety [62,65], justifying the approach used so the educators remain trusted by the learners and so they maintain faith in the educational technique adopted. It is hence strongly advised that educators receive some formal training in relation to SBE and debriefing [20,62]. ...
Article
Full-text available
The level of performance of every clinician and of the overall multiprofessional team relies on the skills and expertise they have individually and collectively acquired through education, training, self-directed learning, and reflection. Simulation-based education (SBE) is playing an increasingly important role in that respect, and it is sometimes said that facilitating simulation is an art. Many explanations can justify this assertion. Although there is generally an emphasis on making everything as realistic or “high-fidelity” as possible, this pursuit is often futile, and this is where the art of simulation comes into play, with an element of modulation of realism linked to the intended learning objectives. The atmosphere created by the educators; how the learners are made to engage and interact; how physical, technical, and contextual elements are simulated or represented; and what type of technology is used need to be appropriately adapted to contribute to the immersiveness of any SBE activity. Although it inevitably carries a negative connotation, some form of “deception” is more commonly used than one may think for the benefit of learners during SBE. High levels of realism are sometimes achieved by making learners believe something works or reacts as would be expected in real life, whereas it is achieved in a totally different manner. Learners do not need to know, see, or understand these “tricks of the trade”, shortcuts, or artistic or technological aspects, and this can be considered a form of benevolent deception. Similarly, information may be withheld to recreate a realistic situation and push learners to demonstrate specific learning outcomes, but it needs to be practised with caution and be justifiable.
These forms of “positive” deception are part of most SBE activities and are used to help learners bridge the reality gap so they can suspend disbelief more easily, exercise critical thinking, and treat the simulation more realistically without damaging the trust they place in their educators. This article will discuss how aspects of SBE activities are often manipulated, modified, or hidden from learners to facilitate the learning experience and present a simulation fidelity model encompassing the environmental, patient, semantical, and phenomenal dimensions.
... However, it needs to be interpreted depending on the student's experience with debriefing. Instead, self-assessment can significantly help improve debriefing skills if accompanied by the feedback of an external person22,23 or a faculty debriefer.14 The perspectives of different experienced evaluators can better enhance the skills of the novice debriefer because of their greater capability to ensure the quality of a debriefing session. ...
... 14 The perspectives of different experienced evaluators can better enhance the skills of the novice debriefer because of their greater capability to ensure the quality of a debriefing session.22,23 Feedback by senior trainers is considered to create a shared mental model of the debriefing methodology. Using all 3 perspectives could provide a clearer understanding of the competence of debriefing for the instructor to improve specific abilities. ...
Introduction: Virtual debriefing is a cardinal element in achieving effective telesimulation. There are different instruments to assess face-to-face debriefing and determine its degree of effectiveness; the Debriefing Assessment for Simulation in Healthcare (DASH) instrument is one of them. This study aims to describe and compare the evaluations of raters, instructors, and students during a virtual debriefing using the DASH. Methods: A cross-sectional study was performed evaluating the virtual debriefing of 30 instructors after a heart failure telesimulation scenario. The evaluation was conducted by 30 instructors, 338 undergraduate students in the seventh semester, and 7 simulation raters. The 3 versions of the DASH instrument in Spanish were applied, respectively. Results: Two comparisons were made, student versus instructor and rater versus instructor. Cronbach α was 0.97 for each version. The averages of the results on the DASH instrument were: 6.61 (3.34-7.0), 5.95 (4.65-7.0), and 4.84 (2.68-6.02) for the student, rater, and instructor versions, respectively. The effect size between student and debriefer perspectives was 0.42. In contrast, the effect size between instructor and rater was 0.72. All differences were significant. Conclusions: Ratings differ among the different persons who use the DASH. In this study, the difference between the instructor and rater perspectives was 1 point with a wide range, in contrast with the difference between instructor and student, which was relatively minor. It is necessary to consider the perspectives of experts in the subject to achieve a high-quality virtual debriefing and to improve a debriefing by using the DASH.
... These elements include establishing rules during the debriefing, such as confidentiality, that encourages students to participate, and the use of open-ended questions. These types of questions help facilitate self-reflection and stimulate the participation of students [11]. Furthermore, during the debriefing, it is important to create an environment that encourages student participation and fosters motivation to learn, while maintaining their attention throughout the entire debriefing phase [12]. ...
Article
Full-text available
Background The audience response systems are being implemented to support active learning in nursing degree programs. The benefits of audience response systems have been studied in lecture-based classes and seminars, but their advantages or inconveniences when included in the debriefing phase of a high-fidelity clinical simulation have not been explored. The study aim was to discover students' experiences of using interactive questions during debriefing, and the self-perceived effects on attention, participation and motivation. Methods A mixed-methods study with an exploratory sequential design was conducted at a university. The participants were 4th-year students enrolled in the Nursing Degree at a university in Southern Spain. (1) Qualitative phase: a phenomenological approach was utilized, and focus groups were used for data collection. (2) Quantitative phase: a cross-sectional descriptive study using a questionnaire designed “ad hoc” on the experiences of using interactive questions in the debriefing phase, together with the Debriefing Experience Scale. Results (1) Qualitative phase: the students highlighted the facilitating role of the interactive questions during the reflection part of the debriefing, and mentioned that the interactive questions helped with stimulating attention, participation, and motivation during the analytical part of the debriefing; (2) Quantitative phase: it was observed that the best evaluated dimension was “Motivation”, with a mean of 4.7 (SD = 0.480), followed by the dimension “Participation”, with a mean of 4.66 (SD = 0.461), and lastly, the dimension “Attention”, with a mean of 4.64 (SD = 0.418). Conclusions The use of interactive questions contributed to the attention, participation, and motivation of the students during the debriefing, contributing towards a highly satisfactory experience of high-fidelity clinical simulation.
... 26 The critical requirement in simulation-based education is the presence of faculty who are trained and experienced in debriefing and providing effective feedback. [27][28][29] Simulation allows for the creation of a variety of case scenarios on demand and in a safe environment. 29,30 In trauma settings, simulation training is an invaluable tool to develop multidisciplinary preparedness for massive casualty incidents, which are rare and unpredictable. ...
Article
Full-text available
Training and education for trauma anesthesiology have been predicated on 2 primary pathways: learning through peripheral "complex, massive transfusion cases"-an assumption that is flawed due to the unique demands, skills, and knowledge of trauma anesthesiology-or learning through experiential education, which is also incomplete due to its unpredictable and variable exposure. Residents may receive training from senior physicians who may not maintain a trauma-focused continuing medical education. Further compounding the issue is the lack of fellowship-trained clinicians and standardized curricula. The American Board of Anesthesiology (ABA) provides a section for trauma education in its Initial Certification in Anesthesiology Content Outline. However, many trauma-related topics also fall under other subspecialties, and the outline excludes "nontechnical" skills. This article focuses on the training of anesthesiology residents and proposes a tier-based approach to teaching the ABA outline by including lectures, simulation, problem-based learning discussions, and case-based discussions that are proctored in conducive environments by knowledgeable facilitators.
... 25,26 The literature suggests that most of the learning during simulation came from either the debrief or the reflective learning. 27 Previous research notes that debriefing can lead to a closer relationship between facilitators and participants of the debrief, 28 and in the case of our participants, the peer-led debriefing created closer relationships to those peers. ...
Article
Context Previous research has found simulation with debriefing to be helpful in developing self-confidence, improving clinical competence, identifying knowledge deficits, and implementing knowledge into practice in the short term. However, the long-term implications of simulation curation and participation are unknown. Objective The purpose of this study was to evaluate the long-term effect of large-scale simulation curation and participation as part of an advanced-practice athletic training course. Design Qualitative phenomenological approach. Setting Video-conferencing software. Patients or Other Participants From among 60 potential participants, 11 individuals participated in a long-term, follow-up interview to explore their recollections, perceptions, and subsequent experiences from curating and participating in large-scale simulation. Main Outcome Measures Deidentified transcripts were checked for accuracy and sent for member checking. Subsequently, a 3-person data analysis team used several sequenced rounds of review, aligned with consensual qualitative research, to analyze the data. Trustworthiness was established with member checking, multianalyst triangulation, and auditing. Results Three domains emerged from the data: emotional reaction, improvements to practice, and the value of debriefing. When describing the emotional reaction domain, learners focused on the reality of the encounter, the sense of overload, and the need to maintain composure. Within their clinical practice, improvements were made primarily regarding mindset, teaching, collaboration, emergency planning, and triage. Finally, learners noted the value of debriefing as humbling, a time for realized deficiency, and a time of emotional decompression. Conclusions Simulation-based learning in advanced-practice clinicians leads to a perceived increase in skills such as intra- and interpersonal skills and emotional readiness.
Long-term effects of simulation demonstrated that learners could translate these skills into clinical practice even 2 to 3 years post experience. Finally, the use of debriefing is a critical component to both the learner's skill acquisition and translation of knowledge in all simulation-based experiences.
... Other researchers, including Kolbe et al in this issue, have called for the development of debriefing skills in faculty, trainers and clinical team leaders. 13 14 We strongly endorse that perspective. As Kolbe et al note, many clinicians and faculty find it difficult to lead debriefs. ...
... A recent study recognized that traditional methods of SBME faculty development lack a structured approach to achieve expertise and proposed the use of Debrief-LiveⓇ, a virtual teaching environment that allows faculty to review their debriefing performances by observing recorded videos and scoring themselves [26]. Direct observation of debriefers by experienced faculty, faculty mentoring to achieve debriefing expertise, and targeted coaching conversations using an agreed-upon approach may all have some role in assisting with the development of skill in debriefing [5,25,27]. Moreover, the use of quantitative scoring systems has the potential to provide conversational substrate for all of these approaches, and may help debriefers improve at all levels of experience. ...
Article
Full-text available
Background Various rating tools aim to assess simulation debriefing quality, but their use may be limited by complexity and subjectivity. The Debriefing Assessment in Real Time (DART) tool represents an alternative debriefing aid that uses quantitative measures to estimate quality and requires minimal training to use. The DART uses a cumulative tally of instructor questions (IQ), instructor statements (IS) and trainee responses (TR). Ratios for IQ:IS and TR:[IQ + IS] may estimate the level of debriefer inclusivity and participant engagement. Methods Experienced faculty from four geographically disparate university-affiliated simulation centers rated video-based debriefings and a transcript using the DART. The primary endpoint was an assessment of the estimated reliability of the tool. The small sample size confined analysis to descriptive statistics and coefficients of variation (CV%) as an estimate of reliability. Results Ratings for Video A (n = 7), Video B (n = 6), and Transcript A (n = 6) demonstrated mean CV% for IQ (27.8%), IS (39.5%), TR (34.8%), IQ:IS (40.8%), and TR:[IQ + IS] (28.0%). The higher CV% observed in IS and TR may be attributable to rater characterizations of longer contributions as either lumped or split. Lower variances in IQ and TR:[IQ + IS] suggest overall consistency regardless of scores being lumped or split. Conclusion The DART tool appears to be reliable for the recording of data which may be useful for informing feedback to debriefers. Future studies should assess reliability in a wider pool of debriefings and examine potential uses in faculty development.
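The DART metrics above reduce to tallies, two ratios, and a coefficient of variation across raters. A minimal sketch of those calculations, using hypothetical tallies rather than data from the study:

```python
from statistics import mean, stdev

def dart_ratios(iq: int, is_: int, tr: int) -> tuple[float, float]:
    """Compute the two DART ratios from cumulative tallies:
    IQ:IS (debriefer inclusivity) and TR:(IQ+IS) (participant engagement)."""
    return iq / is_, tr / (iq + is_)

def cv_percent(scores: list[float]) -> float:
    """Coefficient of variation across raters: sample SD / mean * 100."""
    return stdev(scores) / mean(scores) * 100

# Invented tallies for one debriefing: 20 questions, 10 statements, 45 responses.
inclusivity, engagement = dart_ratios(20, 10, 45)
print(inclusivity, engagement)  # 2.0 1.5

# Invented IQ tallies from three raters of the same video; a lower CV%
# indicates the raters counted more consistently.
print(cv_percent([27.0, 30.0, 33.0]))  # 10.0
```

This framing makes the paper's reliability claim concrete: a low CV% on a given tally means independent raters, watching the same debriefing, record similar counts.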
... 49 56 Addressing these gaps in our current empirical understanding of what makes debriefing conversations effective is important for developing and targeting debriefing faculty development efforts for clinical faculty. 46 47 57 Actionable knowledge on how to achieve in-depth reflection in debriefing may help mitigate workload during debriefing, 40 enhance debriefing skills and debriefing quality and thus contribute to safe patient care. ...
Article
Background Debriefings help teams learn quickly and treat patients safely. However, many clinicians and educators report struggling with leading debriefings. Little empirical knowledge on optimal debriefing processes is available. The aim of the study was to evaluate the potential of specific types of debriefer communication to trigger participants’ reflection in debriefings. Methods In this prospective observational, microanalytic interaction analysis study, we observed clinicians while they participated in healthcare team debriefings following three high-risk anaesthetic scenarios during simulation-based team training. Using the video-recorded debriefings and INTERACT coding software, we applied timed, event-based coding with DE-CODE, a coding scheme for assessing debriefing interactions. We used lag sequential analysis to explore the relationship between what debriefers and participants said. We hypothesised that combining advocacy (ie, stating an observation followed by an opinion) with an open-ended question would be associated with participants’ verbalisation of a mental model as a particular form of reflection. Results The 50 debriefings with overall 114 participants had a mean duration of 49.35 min (SD=8.89 min) and included 18 486 behavioural transitions. We detected significant behavioural linkages from debriefers’ observation to debriefers’ opinion (z=9.85, p<0.001), from opinion to debriefers’ open-ended question (z=9.52, p<0.001) and from open-ended question to participants’ mental model (z=7.41, p<0.001), supporting our hypothesis. Furthermore, participants shared mental models after debriefers paraphrased their statements and asked specific questions but not after debriefers appreciated their actions without asking any follow-up questions. Participants also triggered reflection among themselves, particularly by sharing personal anecdotes.
Conclusion When debriefers pair their observations and opinions with open-ended questions, paraphrase participants’ statements and ask specific questions, they help participants reflect during debriefings.
... 16 Expert simulation educators often do not ascribe to any specific debriefing method but rather use the method most relevant to the learning objectives, scenario, location, and overall situation. 17 With only 40% of survey respondents indicating that faculty receive professional development in simulation debriefing, it is not surprising that 60% do not use any structured debriefing method. Without training or mentorship, faculty may be unaware of structured debriefing methods, their proper use, or lack confidence in their debriefing facilitation skills. ...
Article
Full-text available
Background: Teaching and learning using simulation-based methods is increasing in health professions education; however, the prevalence of simulation use in respiratory care programs to date has not been explored. Methods: All 412 Commission on Accreditation for Respiratory Care (CoARC)-accredited entry-into-practice respiratory care programs were e-mailed a survey inquiring about simulation use as an educational tool in their programs. Results: Of the initial 412 programs contacted, 124 returned the survey, for a 30% response rate. More than three-quarters of programs reported using simulation including 87% of associate degree programs, 75% of bachelor's degree programs, and 100% of master's degree programs. Simulation modalities differed by course and program as did length of simulation activities and debriefings. Simulation hours may not be substituted for learner's clinical time under CoARC guidelines, and 69% of respondents agreed with this stance; however, 66% of responding programs have mandatory simulation learning activities, and 68% believe the amount of simulation should be increased. The survey also revealed respiratory care faculty have limited training in the use of simulation. Conclusions: Simulation-based teaching and learning is widespread and varied, but there is a lack of faculty development in its use among respiratory care programs.
... Indeed, the emotions felt during the situation, particularly negative ones, can generally signal a sense of failure or a performance deficit. Moreover, participants who do not verbalize their emotions will probably be more withdrawn during the debriefing (Cheng, Grant, Dieckmann, Arora, Robinson & Eppich, 2015). This examination of reactions is an intermediate step before the in-depth analysis, which makes it possible to understand what happened. ...
Thesis
The links between stress and health have been the subject of extensive research, particularly among students in health training programmes. University expectations can generate stress and have harmful consequences for students' physical and mental health (e.g., sleep problems, depression, eating disorders, risk behaviours, poor lifestyle habits, etc.). Recognised as particularly demanding and stressful, health programmes have the distinctive feature of confronting students with clinical sources of stress in addition to the academic stressors that any student may encounter. Yet little course content enables students to learn how to cope with these difficulties. It therefore seems essential to prepare students to adapt to the complex working environment in which they will practise by providing them with tools and knowledge, particularly on the psychological and emotional level. For this purpose, the concept of emotional intelligence seems relevant. The acquisition of intra- and interpersonal skills such as managing, identifying, and expressing emotions is essential and a vector of well-being at work. It therefore seems clear that the acquisition of these skills should find a place within health training programmes. Moreover, so-called active pedagogies, which engage students to reflect and adapt by facing real situations, appear more effective and could be relevant, notably for acquiring these emotional skills. In this context, activities such as simulation and physical activity seem to be relevant tools for drawing on students' emotions, and several studies highlight the psychological and emotional benefits of these activities.
However, research in this area remains underdeveloped, and intervention modalities vary considerably (i.e., short programmes based on lectures or interviews, versus long programmes combining theory and practice). This doctoral thesis therefore aims to demonstrate the influence of intervention programmes on the emotional intelligence of students enrolled in health training. The research compares different intervention programmes in order to determine the most effective modalities and enable their inclusion in the curriculum.
... Initial reports highlight the role of self-debriefing or within-team debriefings (including learner self-assessment) focusing on nontechnical skills such as teamwork and communication, [173][174][175] although the generalizability to specific resuscitation and associated BLS and advanced life support remains unclear. 161,176 Additional Considerations Although cognitive load theory informs all aspects of instructional design in simulation, it also has specific relevance for feedback and debriefing. 177 When exploring performance gaps and diagnosing learning needs, 136 educators should consider the cognitive load of learners; during debriefings, they should attend to emotional reactions and provide their feedback in manageable chunks so that learners can process and act on it. ...
Article
Aim: This study evaluated the impact of a single dose of training in Debriefing for Meaningful Learning (DML) on learner knowledge outcomes and time spent in debriefing. Background: Regulatory bodies recommend that faculty who debrief receive training and competence assessment to ensure positive student learning outcomes, yet there is little literature describing the training needed. There is also little understanding of the impact of a single training on the length of debriefing, debriefer skill, and learner outcomes. Method: Following training, debriefers submitted a recorded debriefing for assessment by experts; their learners completed knowledge assessment tests at three time points. Results: Longer debriefing time led to higher DML Evaluation Scale scores. Learner knowledge scores improved and later decayed. Conclusion: The results of this study contribute to the evidence about the importance of training to debrief well, the impact of training on the length of debriefing time, and subsequent learner outcomes.
Article
Between 2013 and 2017, the Association of SP Educators (ASPE), a global organization of educators dedicated to the work of human simulation, developed Standards of Best Practice (SOBP) for working with human role players in simulation. These individuals are known by diverse terms, including simulated or standardized patients or participants (SPs). This study had two aims: (1) to understand the ways in which the ASPE SOBP are relevant to the practices of SP educators around the world, and (2) to identify improvements to the ASPE SOBP from a global perspective. This qualitative study was undertaken between January 2020 and July 2022. Subjects consented to audio-recorded interviews. A collaborative, inductive coding approach was adopted, followed by thematic analysis, aligned with the methods described by Braun and Clarke. Themes were further updated following reflexive conversations amongst the investigators at meetings over the course of several months and were aligned with the study aims. Twelve SP educators from six continents participated. Four primary themes were identified (each with multiple subthemes): influencing SP educator practices; advancing professionalization; identifying challenges to implementation; and bridging gaps in the ASPE SOBP. A diverse group of SP educators from around the world identified the ASPE SOBP in general as relevant and applicable to their practice. The standards provided both guidance and flexibility for working with SPs in a safe, effective and quality-based way. At the same time there were challenges noted and recommendations made that can help to inform future iterations of the standards.
Article
Objectives: Debriefing is an integral component of simulation education, and effective debriefing education is required to maintain effective simulation programs. However, many educators report financial and logistical barriers to accessing formal debriefing training. Due to limited educator development opportunities, simulation program leaders are often compelled to utilize educators with insufficient debriefing training, which can limit the impact of simulation-based education. To address these concerns, the SAEM Simulation Academy Debriefing Workgroup authored the Workshop in Simulation Debriefing for Educators in Medicine (WiSDEM), a freely available, concise, and ready-to-deploy debriefing curriculum with a target audience of novice educators without formal debriefing training. In this study, we describe the development, initial implementation, and evaluation of the WiSDEM curriculum. Methods: The Debriefing Workgroup iteratively developed the WiSDEM curriculum by expert consensus. The targeted level of content expertise was introductory. The curriculum's educational impact was assessed by surveying participants on their impressions of the curriculum and their confidence and self-efficacy in mastery of the material. Additionally, facilitators of the WiSDEM curriculum were surveyed on its content, usefulness, and future applicability. Results: The WiSDEM curriculum was deployed during the SAEM 2022 Annual Meeting as a didactic presentation. Thirty-nine of 44 participants completed the participant survey, and four of four facilitators completed the facilitator survey. Participant and facilitator feedback on the curriculum content was positive. Additionally, participants agreed that the WiSDEM curriculum improved their confidence and self-efficacy in future debriefing. All surveyed facilitators agreed that they would recommend the curriculum to others. 
Conclusions: The WiSDEM curriculum was effective at introducing basic debriefing principles to novice educators without formal debriefing training. Facilitators felt that the educational materials would be useful for providing debriefing training at other institutions. Consensus-driven, ready-to-deploy debriefing training materials such as the WiSDEM curriculum can address common barriers to developing basic debriefing proficiency in educators.
Article
With the onset of the COVID-19 pandemic, telehealth became a widely used method to provide patient care. Providers had to quickly learn how to adapt traditional clinical care to the virtual environment. The existing literature focuses on the technological aspects of telehealth with only a few publications addressing optimization of communication, with even fewer looking at the use of simulation to fill the knowledge gap in this area. Simulation training is one such avenue that can be used to practice virtual encounters. This review outlines how to effectively use simulation as an educational method to teach clinical skills needed for effective telehealth communication. The experiential nature of simulation provides learners with an opportunity to adapt their clinical skills to a telehealth encounter, and an opportunity to practice challenges unique to a telehealth environment, such as patient privacy, patient safety, technology disruption, and performance of an examination virtually. The goal of this review is to discuss how simulation may be used to train providers for best practices in telehealth.
Article
Debriefing is challenging and daunting for the novice debriefer. The debriefing literature focuses on the debrief conversation and the process of debriefing, with little guidance on how to observe the scenario and prepare for the debrief. Research on the learner observer role in simulation reveals that engagement with the scenario through directed observation primes the learner to notice actions and events that might otherwise have been overlooked. Similarly, by using the scenario learning objectives to prime and focus their attention, novice debriefers can develop their own observation tool to support data gathering during the scenario. By engaging with the learning objectives to clarify what they might expect to see or hear if the learning objectives are achieved, debriefers are better positioned to notice and collect relevant observational data and frame a debriefing conversation anchored to the scenario objectives.
Article
Background: Simulation-based learning (SBL) is often used in healthcare education. Professional development has been identified as crucial to the success of SBL. Effective, high-quality SBL requires facilitators who are multiskilled and have a range of SBL-related knowledge, skills and attitudes, which require time and practice to acquire. However, investment in facilitators' competence is often limited, particularly at smaller institutions without an associated simulation centre. Objectives: The purpose of this study is to describe how a smaller university college with limited resources and limited facilitation experience has initiated continuing professional development and how this continuing professional development has contributed to maintaining and developing SBL facilitators' competence. Method: Participatory action research has been used to improve the practice of SBL facilitators at a university college in Norway. The evaluations and reflections of 10 facilitators engaged in professional development and 44 national simulation conference participants have been analysed by way of Vaismoradi's qualitative content analysis. Results: A culture of participation and engagement and a clear professional development structure are both of crucial importance in the implementation and maintenance of continuing professional development in SBL. When these are present, not only does facilitation become more transparent, but facilitators become more aware of their own strengths and weaknesses, manage to address these and perceive an improvement in their confidence and competence. Conclusions: Facilitators at smaller institutions without an associated simulation centre can improve their competence and confidence in SBL beyond the initial course, despite the absence of experienced mentors. The results indicate the importance of engaging in ongoing training and self-reflection based on peer feedback, the facilitators' own experience and up-to-date literature. 
Implementing and maintaining professional development at smaller institutions requires a clear structure, clear expectations and a culture of participation and development.
Article
This scoping review aims to examine and map the current state of faculty development for healthcare simulation educators. This review will include an exploration of the range and type of faculty development programs designed to enhance simulation-based education (SBE). Simulation has become a staple method for educating health professionals, but no standard approaches exist for training simulation instructors, both for initial training and ongoing professional development. As this education modality continues to expand, there is a need to better understand what interventions and approaches improve the knowledge, skills, abilities and other attributes (KSAOs) for those who are responsible for the design, delivery and evaluation of simulation-based educational sessions. This scoping review will consider empirical research and other relevant published works that address faculty development for simulation educators in health professions education. This will include faculty development interventions, conceptual and theoretical frameworks, recommendations for implementation and other discussions of issues related to faculty development for SBE. These may include experimental, quasi-experimental, observational, qualitative studies, commentaries and perspectives. The following electronic databases will be searched: Medline (Ovid); EMBASE (Ovid); CINAHL (EBSCO); ERIC (EBSCO); PsycInfo (Ovid); and Web of Science without time limits. Reference lists of eligible studies will be back-searched, and Google Scholar and Scopus will be used for forward citation tracking. The findings will be summarized in tabular form and a narrative synthesis, to inform recommendations and areas for future research and practice.
Chapter
Full-text available
In the use of simulation activity as a learning approach, the debriefing phase is considered as crucial to achieve learning. In debriefing, the participants reflect and discuss what happened in the scenario. Feedback is an important factor, and research shows that there must be certain conditions present to achieve learning from feedback. The facilitator and the structure used impact on these conditions. In this chapter, we will present a new structure for debriefing in simulation based on a study focusing on how the structure affects the facilitator role.
Chapter
Full-text available
This chapter provides information for teachers in higher education who are interested in collaborative learning combined with the use of immersive virtual reality (VR). It presents an introduction to VR and experiences from implementing and using VR in training midwifery students on the master’s level and radiography students in anatomy on the bachelor’s level.
Chapter
Full-text available
By seeking out new teaching possibilities, educational institutions can elevate students' perspective and activation so that playful learning takes shape. Didactic methods need to provide a safe environment where students can focus on interpersonal interactions with patients while being aware of how their own emotions can influence their situational awareness and decisions. The authors believe that relevant scenarios in a 360-degree video format will benefit nursing students, specifically in preparation for the clinical setting. The potential of 360-degree video in virtual reality (VR) gives the instructor flexibility to create systematic, experiential learning and shapes emotional learning in collaboration with students. 360-degree video can be seen as a playful way to learn in new situations, and playfulness of this kind can affect teachers' and students' motivation as well as promote learning. The field lacks studies exploring the use of 360-degree videos in psychiatric simulation settings. This chapter provides knowledge about the practical use of 360-degree video in VR, along with insight into its technical potential and its challenges. Background information on why this method is suitable for promoting nursing students' competence in mental health work is presented. The chapter also aims to introduce and inspire exploration of 360-degree video in VR in professional education, with particular focus on how it can be used as a tool for nursing students in psychiatric simulation settings, such as the VR-SIMI model discussed later in the chapter.
Chapter
Full-text available
Palliative care promotes quality of life for seriously ill and dying patients and their loved ones. An ageing population with more complex chronic and life-limiting conditions will increase the demand for competence in the field. Interprofessional cooperation will be a critical factor in achieving this. Such cooperation within the field of nursing is critical because of registered nurses’ (RNs) role and function in patient- and family-centred care. A project focusing on learning interdisciplinary teamwork using simulation as a learning approach was established. Two groups of students participated in the project: one group consisted of 17 nursing associates who were participating in a 2-year part-time study programme in cancer care and palliative care at a vocational college. The second group was made up of 28 RNs, a social worker and learning disability nurses, all postgraduate students taking part in a part-time interdisciplinary programme in palliative care at master’s degree level. Simulation activity is usually conducted with participants physically present, but because of the COVID-19 pandemic situation, this was not possible. A pilot project was conducted where simulation activity was tried out as online learning. RNs and nursing associates (NA) participated, and their cooperation was focused on palliative/end-of-life care. They were all trained clinicians in two different study programmes. In this chapter, we present how simulation activity with participants physically present was transformed into an online learning situation. A brief presentation of students’ and teachers’ reflections on the pedagogical advantages and disadvantages of such a transition is also included.
Chapter
Full-text available
Over several years, simulation has become an established teaching method in study programs of nursing and other health disciplines. Simulation exercises have a theoretical foundation in a number of perspectives on how adults acquire knowledge, through experience-based learning, reflection-on-action and reflection-in-action, and an emphasis on the sociocultural context. Within the learning process of simulation, the opportunity for feedback and feed-forward is crucial. The individual facilitator is particularly important in this understanding of learning: the facilitator's role is to guide the student toward learning with the help of didactic and pedagogical methods. Learning in itself is a process that could be defined as a transformation that is not based on biological maturation. In this chapter, we look more closely at the nature of train-the-trainer courses, what separates a facilitator from a lecturer, the significance of a common language and framework, and how the side effects and synergies of the facilitator's skills might benefit academic staff in nursing and other health education programs.
Chapter
Full-text available
In this chapter, results from three different studies about simulation as a learning method are presented and discussed alongside relevant pedagogical theory and other research. These studies were conducted at a university in Norway from 2018 to 2020. The studies used a qualitative approach, and reflection notes and focus group interviews were used to collect data. The participants (81 students in total) were bachelor of nursing students, 2nd-year public health nursing students, and students in teacher training (healthcare). The interviews were conducted shortly after the students had performed the simulation activity. The students expressed that the simulation provided a higher degree of realism and seriousness than skill training did. They felt that the simulation was an educational method in which they experienced realistic feelings and stress in a serious situation. The students learned that in an emergency, good communication is important to making the right decision. The simulation was perceived as one of the most effective ways to prepare themselves for the profession of nursing. They experienced learning through describing the course of events during the debriefing process and obtaining feedback from other students, which revealed gaps in their knowledge.
Chapter
Full-text available
This chapter reports from an experimental study carried out at University College Copenhagen. In the experiment, fifth-semester nursing students were subjected to an intensified simulation intervention, combined with supporting elements designed to increase collaborative and peer learning. One supporting element was a series of peer-to-peer sessions in which students in small groups trained technical nursing skills for mastery learning, e.g., duodenal tube placement. We describe the study and analyze how the students perceive strengths and weaknesses of the peer-to-peer format and how these perceptions seem to be linked to the students’ perceptions of learning and authority. We discuss how the method may be a positive addition to simulation training in nursing education and particularly how the role of the teacher should be conceived.
Article
Simulation is a technique in which a situation or a process is imitated, allowing learners to enact a scenario without real-world consequences. The aviation industry is one of the oldest industries to use simulation for training. The main aim of a simulation lab is to enable students to acquire psychomotor skills before they actually treat patients. At present, the National Medical Council of India has made it mandatory to have a skills lab, and desirable to have simulated mannequins, for the training of medical students before they treat patients. Simulation has recognized advantages and disadvantages. It may not be long before the Dental Council of India makes training on certain simulated modules in dentistry compulsory before treating patients. During a pandemic such as COVID-19, simulation labs helped clinicians undergo extensive training before treating patients, especially those serving COVID wards or intensive care units. Simulation in dentistry is not a very old technique, and certain techniques are already in practice, but this article highlights the necessity of simulation and the grey areas where it can be improved, for the benefit of students' learning and of patient safety.
Chapter
Full-text available
This chapter describes the planning, implementation, and evaluation of non-technical skills simulations in an operating room (OR) nursing program in Norway. Three scenarios of preoperative preparations in the OR were simulated, each of which was followed by facilitated debriefing sessions. These sessions consisted of three phases: description, analysis, and application. To achieve the highest standard of care and ensure patient safety in the OR, it is necessary for Norwegian OR nurses to be proficient in the responsibilities and functions of both circulating and scrub nurse roles, including teamwork and non-technical skills. With respect to the three domains of educational purpose—qualification, socialization, and subjectification—the simulation activities aimed to enhance knowledge, ability, and understanding of non-technical skills in the OR, thus socializing the students to their new profession while also encouraging their independence. The Norwegian adaptation of the Scrub Practitioners’ List of Intraoperative Non-Technical Skills (SPLINTS-no) behavioral rating tool was used by the students for reflection and learning throughout the teaching activities. The majority of participants agreed or fully agreed that the three phases of debriefing were helpful for their learning. Simulation of non-technical skills in the preoperative OR can therefore be a valuable learning experience for OR nursing students, through the use of both low- and high-fidelity simulations in partnership with nurse anesthetist students.
Chapter
Full-text available
The number of patients with cancer is increasing, resulting in complex disease patterns and multifaceted patient needs. Both the municipal health service and specialist health services face growing challenges. Oncology nursing students must be trained to see situations comprehensively, act when a patient’s situation worsens, and communicate effectively with the patient and the patient’s relatives. Much of this training will take place during the students’ practice studies, but these periods are short, and there are many aspects of nursing for students to focus on. Simulation offers one option for promoting learning in nursing education. Simulation is an effective supplement to traditional lectures in oncology nursing programs, giving students the opportunity to rehearse their skills and learn where they need more practice.
Thesis
Healthcare professionals are particularly affected by stress-related problems, and those working in anaesthesia and intensive care are at increased risk. The health sector and the health of individuals are at the heart of change (Rhéaume, 2002; Bedin, 2013; Marcel, 2014). Practitioner training has evolved towards a competency-based approach and incorporates simulation as a method for developing these competencies. In this context, we sought to demonstrate the impact of full-scale simulation of critical situations (Pastré, 2010) on stress, anxiety, and burnout, and ultimately on participants' quality of life at work. To this end, within a quasi-experimental design, we conducted an exploratory phase with 50 nursing students to test three scales relating to anxiety and burnout. In phase 2, we attended simulation sessions with anaesthesia and intensive care staff over one year and tested 209 professionals, who completed the scales before and after the simulation session. Finally, in phase 3, we sought to demonstrate the effect of simulation on the clinical stress of 215 nursing students. We show the positive effects that simulation appears to have on participants' stress, burnout, anxiety, and anxious personality traits. Furthermore, simulation seems to reduce nursing students' clinical stress, and this effect appears stable four months after the simulation session. Simulation would thus have a preventive effect on stress. These results support proposals for developing this teaching method in initial and continuing education.
Article
Full-text available
Background Debriefing is an important learning component of simulation-based education (SBE) for nursing students. The evidence-based, scripted, and structured debriefing model—Promoting Excellence and Reflective Learning in Simulation (PEARLS) is meeting the standard of best practice by using a blended approach in the debriefing process with appropriate integration of feedback, debriefing, and/or guided reflection. Evidence demonstrating that PEARLS promotes better outcomes than other debriefing strategies is lacking. Our study compared PEARLS to a standard debriefing on nursing students’ professional competence and clinical judgment abilities. Methods A quasi-experimental design was applied to compare differences in the effects of PEARLS (intervention group) and standard debriefing (control group) on nursing students’ self-reported professional competence and clinical judgment in SBE and clinical placement. Results No significant differences in nursing students’ self-reported professional competence or clinical judgment were found between the two groups. Professional competence and clinical judgment increased significantly within the intervention group, but not the control group. Conclusion The results provide some support for implementation of PEARLS debriefing in nursing education. Faculty should receive the training and resources necessary for implementation.
Chapter
Full-text available
I hear and I forget. I see and I remember. I do and I understand. —Confucius (551–479 BCE)
Simulation is increasingly used in nursing education to supplement clinical and didactic learning activities. Simulation is a technique for practice and learning that can be used in many different disciplines and with many kinds of trainees. Simulation is a technique (not a technology) aiming at replacing real experiences with guided ones; that is, it represents a context in which students can exercise and explore various aspects of a specific practical skill. Accordingly, simulation-based learning signifies a useful approach to develop health professionals' knowledge, skills, and attitudes while protecting patients from unnecessary risks. Simulation involves learning situations that take place under the supervision of an expert or lecturer and is commonly applied as an active learning method in different health disciplines like nursing, social education, radiography, and medicine. This chapter concentrates on historical and pedagogical perspectives of simulation as a learning method in nursing education. Simulation as a learning method builds on pedagogical adult learning theory, with an emphasis on David A. Kolb and Donald Schön's concepts experience-based learning, reflection-on-action, and reflection-in-action. Simulation-based learning is appropriate for topics such as patient safety, teamwork, and quality of health services. The literature states that simulation contributes positively to nursing students' situational awareness, their ability to formulate and predict possible consequences of actions implemented, decision-making, communication, and teamwork.
Article
Competition for nursing clinical sites has intensified as universities have increased enrollment to meet the growing demand for nurses. Hospital mandates have reduced opportunities for nursing students to practice clinical skills, leading nursing programs to use simulation to help learners gain the knowledge, skills, and attitudes needed for transition to clinical practice. Simulation offers a safe learning environment and a guarantee that every student will have experience with critical clinical encounters. To be effective, health-care simulation must be facilitated by trained educators knowledgeable in simulation pedagogy, including theory-based simulation frameworks and debriefing models. Training simulation facilitators is often challenged by financial and time constraints. The need for an easily accessible, asynchronous, cost-effective method of providing foundational simulation training led a group of experienced simulationists to develop a series of seven online educational modules grounded in simulation best practices.
Article
Background The ability to debrief is considered an essential clinical and simulation teaching skill because of the deep learning it cultivates. Regulatory bodies identify the need for debriefing training and professional development followed by formative and summative assessment. Method The Debriefing for Meaningful Learning Evaluation Scale (DMLES) is a behaviorally anchored rating scale developed to assess 20 behaviors aligned with Debriefing for Meaningful Learning (DML). Participants from five baccalaureate pre-licensure nursing programs were recruited to receive DML training, then facilitate and record a debriefing for subjective and objective assessment using the DMLES. Results A total of 52 debriefers submitted 81 recorded debriefings. DMLES subjective ratings at two time points were higher than those of expert raters of the same debriefings, demonstrating statistically significant differences between subjective and objective mean scores. Conclusions The difference between subjective and objective scores demonstrates the Dunning-Kruger effect (DKE), a subjective overestimation of skill performance relative to objective assessment. The potential for DKE is an important consideration when determining assessment methods.
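The subjective-versus-objective comparison described above is, at its core, a paired comparison of self-ratings against expert ratings of the same debriefings. As an illustrative sketch only (the scores, sample size, and function name below are invented, not taken from the study), the paired t-statistic for such data can be computed as:

```python
import math

def paired_t(self_scores, expert_scores):
    """Paired t-statistic for self- vs. expert-rated scores of the same debriefings.

    A positive t together with a positive mean difference indicates that
    debriefers rate themselves higher than experts do -- the overestimation
    pattern described as the Dunning-Kruger effect.
    """
    diffs = [s - e for s, e in zip(self_scores, expert_scores)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    t = mean_d / math.sqrt(var_d / n)
    return t, mean_d

# Hypothetical example: four debriefers each rate themselves above the expert rating.
t_stat, mean_diff = paired_t([4, 5, 4, 5], [3, 3, 3, 4])  # t = 5.0, mean difference = 1.25
```

The mean difference gives the direction (positive = self-overestimation); the t-statistic, compared against a t-distribution with n − 1 degrees of freedom, gives the significance test the abstract alludes to.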
Article
Debriefing for Meaningful Learning (DML) is a method of debriefing grounded in the theory of reflection used following a simulation or clinical learning experience to engage participants in an interactive dialogue aimed at examining and evaluating their thinking and decision-making processes. With increasing adoption of DML worldwide, a sustainable training program for nurse educators is needed. Attending conferences and workshops that provide training is challenging for many nurse educators because of time and cost constraints. One promising solution is the train-the-trainer (TTT) model. In this article, the development and implementation of a TTT model of DML debriefer training, adaptable to both academic and clinical nursing professional development, is described. [J Contin Educ Nurs. 2022;53(7):321-327.].
Article
Introduction To better support the development of domain expertise, there has been considerable interest in instructional strategies that afford opportunities for problem solving. Simulation-based learning includes case-based activities that support learners' transition into practice through experience and reflective debriefing. Review of the Literature Simulation-based learning serves as a means for learners to practice psychomotor, affective, and cognitive skills. Although the literature suggests this is an effective strategy for supporting problem-solving skills, much of it focuses on measuring cognitive outcomes within a single case. As such, very few studies have simultaneously investigated (a) how debriefing in simulation-based education supports problem solving and (b) the degree to which cognitive and affective learning outcomes vary throughout the problem-solving process. Methodology To address this gap, this repeated-measures study examined the role of debriefing strategies in simulation-based learning experiences designed to practice both cognitive and affective skills. In the control condition, participants were given 2 simulation-based laboratory instructional sessions and then assessed on cognitive (problem representation and solution generation) and affective learning (perceived self-efficacy) outcomes. In the comparison condition, participants engaged in debriefing sessions after each of the 2 simulation-based laboratory instructional sessions. For the cognitive outcomes, participants were assessed using the Think-Aloud Standardized Patient Examination instrument, which measured problem representation and solution generation. The Acute Care Confidence Survey was used to measure perceived self-efficacy. Results The study found statistically significant differences in early-stage measures of cognitive problem representation, but no difference was found in solution generation or perceived self-efficacy measures.
Conclusion This study reinforces the importance of debriefing in simulation-based education. Specifically, this study suggests that cognitive learning outcomes are variable during the problem-solving process, but affective learning outcomes are more fixed and may need more targeted debriefing.
Article
Background: Debriefing in healthcare is often viewed as a learner-centred reflective process necessary for knowledge construction, in which the trainer acts as a facilitator. However, several studies emphasise the importance of making debriefing explicit for novice learners. This article examines the effectiveness of different types of post-simulation debriefing for experienced learners working in emergency teams. Objective: This study aims to determine whether explicit debriefing is as effective for experienced professionals as it has been shown to be for novice learners. Methods: This prospective, randomised, single-centre study was conducted within a simulation programme based on emergency interventions during cardiac arrest. Each participant's declared knowledge, self-efficacy, and team effectiveness were measured before and after the simulation training. Results: The results show a beneficial effect on knowledge and self-efficacy for both types of debriefing. Patient care was also optimised with both types of debriefing. Conclusion: Both types of post-simulation debriefing are effective for experienced emergency teams.
Book
Full-text available
A review of ten years of activities at the Medical Simulation Centre Ljubljana.
Chapter
Developed by the leading experts in neonatal simulation, this innovative new resource delivers neonatology health care providers and educators essential guidance on designing, developing, and implementing simulation-based neonatal education programs.
Article
Statement The optimal strategy for training cardiopulmonary resuscitation (CPR) for healthcare professionals remains to be determined. This scoping review aimed to describe the emerging evidence for CPR training for healthcare professionals. We screened 7605 abstracts and included 110 studies in this scoping review on CPR training for healthcare professionals. We assessed the included articles for evidence for the following topics: training duration, retraining intervals, e-learning, virtual reality/augmented reality/gamified learning, instructor-learner ratio, equipment and manikins, other aspects of contextual learning content, feedback devices, and feedback/debriefing. We found emerging evidence supporting the use of low-dose, high-frequency training with e-learning to achieve knowledge, feedback devices to perform high-quality chest compressions, and in situ team simulations with debriefings to improve the performance of provider teams.
Article
This article shares one university’s experience in deploying the Community Action Poverty Simulation (CAPS) for interprofessional learning. Little guidance exists for implementing large-scale interprofessional simulations in higher education, especially with students from various healthcare professions. We will present our efforts implementing CAPS with students from a variety of healthcare majors. First, we provide background about our institution and CAPS in an effort to describe our implementation initiative including learners and facilitators. Second, we cover the adaptation of CAPS to include best practices in simulation. Specifically, we review prebriefing and debriefing, as well as logistics in implementing the poverty simulation as an interprofessional activity at a large university. Finally, we discuss learning and institutional outcomes related to CAPS.
Article
Debriefing, a facilitator-guided reflection of an educational experience or critical incident, is an important tool in improving the safety and quality of practice in the NICU. Unlike feedback, which is often a one-way discussion, debriefing is a purposeful, 2-way reflective discussion which is based on experiential learning theory. The purpose of this article is to review the theoretical basis of debriefing and describe styles and tools for debriefing that can be applied in the NICU.
Article
Background Debriefing after simulation facilitates reflection and the ability to transfer learning into clinical practice. Aim To evaluate faculty debriefing in order to inform and guide the development of ongoing resources that support adjunct faculty members and maximise learning potential for students. Method Faculty debriefing styles were evaluated following a high-fidelity simulation involving 720 first-year undergraduate nursing students. One audio recording of a debrief was randomly selected from each of 12 faculty members. If a pre-brief was completed, it was also selected, as the pre-brief is a component of the debrief and sets the tone for the simulation. Initial analysis was undertaken using the validated Objective Structured Assessment of Debriefing (OSAD) tool to highlight different debriefing styles. This facilitated thematic analysis of the data using Braun and Clarke's (2006) six-step process. Results Nursing faculty used open-ended questions, silence, and clinical feedback with varying levels of expertise and success. Conclusions Further research is needed into variation in debriefing styles and its impact on student learning outcomes.
Article
The value of simulation-based education can be lost without a structured, purposeful, guided debrief in which nursing students and health care professionals are able to think critically and reflect on the experiential learning. Debriefing enhances peer-to-peer learning and aids the nursing student in formulating best practice for the next time such an encounter occurs in the clinical setting. Debriefing should be led by a trained facilitator using evidence-based methods to ensure a safe learning environment for nursing students. Debriefing is an essential learning tool that should be considered for application in classroom, clinical, and laboratory settings.
Article
Increasingly, clinicians and clinical educators are using simulation-based education (SBE) as a valued educational modality within healthcare organizations. Typically, these facilitators have minimal time to dedicate to uninterrupted formal education and development. To enhance effective use of SBE, facilitators need learner-centered opportunities that promote the development and sustainment of expert SBE skills, knowledge, attitudes, and behaviors. Implementing a mentorship approach to facilitator development is a feasible option. This article proposes that the Cognitive Apprenticeship model (Collins et al., 1991) can be adapted and utilized for SBE facilitator mentorship.
Article
Full-text available
Background Simulation is an important educational tool to improve medical training and patient safety. Debriefing after simulation is crucial to maximise learning and to translate the lessons learnt to improve real clinical performance, and thus to reduce medical error. Currently there are few tools to improve performance debriefing and learning after simulations of serious paediatric situations. Purpose The purpose of this study was to develop a tool to guide and assess debriefings after simulations of serious paediatric situations, applying the current evidence base and user-based research. Study design A literature review and semistructured interviews (performed in 2010) to identify important features of a paediatric simulation debriefing. Emergent theme analysis was used to identify key components of an effective debriefing which could be used as a tool for assessing debriefing effectiveness. Results The literature review identified 34 relevant studies. Interviews were carried out with 16 paediatricians, both debriefing facilitators and learners. In total, 307 features of a debriefing were identified. These were grouped into eight dimensions representing the key components of a paediatric debriefing: the facilitator's approach, learning environment, engagement of learners, reaction, descriptive reflection, analysis, diagnosis and application. These eight dimensions were used to create a tool, the Objective Structured Assessment of Debriefing (OSAD). Each dimension can be scored on a five-point Likert scale containing descriptions for scores 1, 3 and 5 to serve as anchors and aid scoring. Conclusions The study identified the important features of a paediatric simulation debriefing, which were developed into the OSAD tool. OSAD offers a structured approach to paediatric simulation debriefing, and is based on evidence from published literature and views of simulation facilitators and learners. 
OSAD may be used as a guide or assessment tool to improve the quality of debriefing after paediatric simulation.
Article
Full-text available
Over the past two decades, there has been an exponential and enthusiastic adoption of simulation in healthcare education internationally. Medicine has learned much from professions that have established programs in simulation for training, such as aviation, the military and space exploration. Increased demands on training hours, limited patient encounters, and a focus on patient safety have led to a new paradigm of education in healthcare that increasingly involves technology and innovative ways to provide a standardized curriculum. A robust body of literature is growing, seeking to answer the question of how best to use simulation in healthcare education. Building on the groundwork of the Best Evidence in Medical Education (BEME) Guide on the features of simulators that lead to effective learning, this current Guide provides practical guidance to aid educators in effectively using simulation for training. It is a selective review to describe best practices and illustrative case studies. This Guide is the second part of a two-part AMEE Guide on simulation in healthcare education. The first Guide focuses on building a simulation program, and discusses more operational topics such as types of simulators, simulation center structure and set-up, fidelity management, and scenario engineering, as well as faculty preparation. This Guide will focus on the educational principles that lead to effective learning, and include topics such as feedback and debriefing, deliberate practice, and curriculum integration - all central to simulation efficacy. The important subjects of mastery learning, range of difficulty, capturing clinical variation, and individualized learning are also examined. Finally, we discuss approaches to team training and suggest future directions. Each section follows a framework of background and definition, its importance to effective use of simulation, practical points with examples, and challenges generally encountered. 
Simulation-based healthcare education has great potential for use throughout the healthcare education continuum, from undergraduate to continuing education. It can also be used to train a variety of healthcare providers in different disciplines from novices to experts. This Guide aims to equip healthcare educators with the tools to use this learning modality to its full capability.
Article
Full-text available
Background: This pilot study evaluated the effect of videotape-facilitated human patient simulator (HPS) practice and guidance on clinical performance indicators. Method: Nursing and nurse anesthetist students in the treatment group (n = 20) participated in HPS practice and guidance using videotape-facilitated debriefing, and the control group (n = 20) participated in HPS practice and guidance using oral debriefing alone. Results: Students in the intervention group were significantly more likely to demonstrate desirable behaviors concerning patient identification, team communication, and vital signs. The role students played in the simulation significantly impacted their performance. When scores of both the intervention and control groups were combined, team leaders, airway managers, and nurse anesthetists had higher mean total performance scores than crash cart managers, recorders, or medication nurses. Conclusion: Video-facilitated simulation feedback is potentially a useful tool in increasing desirable clinical behaviors in a simulated environment. © 2010 International Nursing Association for Clinical Simulation and Learning.
Article
Full-text available
Although feedback has been identified as a key instructional feature in simulation based medical education (SBME), we remain uncertain as to the magnitude of its effectiveness and the mechanisms by which it may be effective. We employed a meta-analysis and critical narrative synthesis to examine the effectiveness of feedback for SBME procedural skills training and to examine how it works in this context. Our results demonstrate that feedback is moderately effective during procedural skills training in SBME, with a pooled effect size favoring feedback for skill outcomes of 0.74 (95 % CI 0.38-1.09; p < .001). Terminal feedback appears more effective than concurrent feedback for novice learners' skill retention. Multiple sources of feedback, including instructor feedback, lead to short-term performance gains although data on long-term effects is lacking. The mechanism by which feedback may be operating is consistent with the guidance hypothesis, with more research needed to examine other mechanisms such as cognitive load theory and social development theory.
Article
Full-text available
Background: Among nonmedical educators, longitudinal faculty development programs most effectively increase teachers' abilities. Despite most medical settings providing little reinforcement for new instructional skills, teaching improvement programs infrequently have explicit ongoing activities. We carried out a longitudinal program for medical educators designed to reinforce and firmly establish new teaching skills. Description: We conducted a longitudinal (18 months of biweekly 1-hr meetings) faculty development program. Its activities followed an initial structured seminar series, and the ongoing meetings involved reviewing videotapes of participants' teaching, enacting and debriefing role-plays, and a modified Balint group for medical educators. Evaluation: We assessed the program's process and outcomes using attendance, self-reported teaching behaviors, perceived program usefulness, educational administrative responsibilities, and qualitative analysis of audiotapes and session notes. Conclusions: Participants maintained high attendance during 18 months of meetings. Ratings of usefulness were high, comparable to other faculty development activities with established utility, and qualitative data support unique benefits of the ongoing meetings. The longitudinal component built on the initial seminar series; it seemed to enhance collegial support, to allow observation of instructional outcomes to reinforce new instructional skills, and to foster greater involvement in the institution's teaching activities. Teaching improvement programs for physician educators have taken several forms, from workshops and expert consultations to month-long mini-sabbaticals. However, most are single or sequenced interventions, without an explicit long-term component. We present the structure of an 18-month program for clinician teachers and report observations supporting the program's utility.
We hope that this descriptive study promotes greater attention to and subsequent prospective research of longitudinal faculty development.
Article
Full-text available
Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I( 2) > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and -0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.
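The pooled effect sizes reported above come from random-effects meta-analysis. As a rough illustration of the mechanics only (a common DerSimonian-Laird sketch with invented study numbers, not the review's actual data):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                         # inverse-variance (fixed) weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)            # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical check: three identical study effects collapse to that effect with tau^2 = 0.
pooled, se, tau2 = dersimonian_laird([0.65, 0.65, 0.65], [0.04, 0.04, 0.04])
```

When the studies disagree, Q exceeds its degrees of freedom, tau² grows, and the weights flatten toward equality; this is the behaviour behind the large I² values the abstract notes.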
Article
Full-text available
INTRODUCTION: This study examined the reliability of the scores of an assessment instrument, the Debriefing Assessment for Simulation in Healthcare (DASH), in evaluating the quality of health care simulation debriefings. The secondary objective was to evaluate whether the instrument's scores demonstrate evidence of validity. METHODS: Two aspects of reliability were examined, interrater reliability and internal consistency. To assess interrater reliability, intraclass correlations were calculated for 114 simulation instructors enrolled in webinar training courses in the use of the DASH. The instructors reviewed a series of 3 standardized debriefing sessions. To assess internal consistency, Cronbach α was calculated for this cohort. Finally, 1 measure of validity was examined by comparing the scores across 3 debriefings of different quality. RESULTS: Intraclass correlation coefficients for the individual elements were predominantly greater than 0.6. The overall intraclass correlation coefficient for the combined elements was 0.74. Cronbach α was 0.89 across the webinar raters. There were statistically significant differences among the ratings for the 3 standardized debriefings (P < 0.001). CONCLUSIONS: The DASH scores showed evidence of good reliability and preliminary evidence of validity. Additional work will be needed to assess the generalizability of the DASH based on the psychometrics of DASH data from other settings.
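The internal-consistency figure reported above (Cronbach α = 0.89) summarises how consistently the DASH elements score the same debriefings. A minimal sketch of the computation, using invented ratings rather than the study's data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a (debriefings x rating-elements) score matrix."""
    k = len(scores[0])  # number of scale elements

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_var([row[j] for row in scores]) for j in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: three perfectly consistent elements yield alpha = 1.
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```

Real element scores are never perfectly consistent, so observed alpha falls below 1; values around 0.9, as reported for the DASH, indicate that the elements largely measure one underlying construct.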
Article
Full-text available
Peer observation of teaching (PoT) involves observers providing descriptive feedback to their peers on learning and teaching practice as a means of improving teaching quality. This study employed and assessed peer observation as a constructive, developmental process for members of a pediatric teaching faculty. It describes how peer observation was implemented as part of a teaching faculty development program and how it was perceived by teachers. The PoT process was divided into 4 stages: pre-observation meeting, observation, post-observation feedback, and reflection. Particular care was taken to ensure that teachers understood that the observation and feedback constituted a developmental, not an evaluative, process. Twenty teachers had their teaching peer observed by trained faculty members and gave an e-mail 'sound-bite' of their perceptions of the process. Teaching activities included lectures, problem-based learning, small group teaching, case-based teaching, and ward-based teaching sessions. Teachers were given detailed verbal and written feedback based on the observer's and students' observations. Teachers perceived PoT as useful and relevant to their teaching practice. They valued receiving feedback and viewed PoT as an opportunity for insight and reflection. The process was viewed as non-threatening, and teachers thought that PoT enhanced the quality of their teaching, promoted professional development, and was critical for faculty development. This study demonstrated that PoT can be used in a constructive way to improve course content and delivery, to support and encourage medical teachers, and to reinforce good teaching.
Article
Full-text available
Feedback is an important feature of simulation-based education. This study investigated the optimal timing of feedback for technical skills learning in novices. Thirty novice endoscopists were pretested on a colonoscopy simulator task. Participants then received feedback either during (concurrent) or after (terminal) each of their 12 practice trials. Effectiveness of training was assessed using an immediate posttest and one week later on retention and transfer tests. Measures included execution time and blinded expert assessments. Both groups performed similarly on the pre-, post-, and retention tests. At transfer, the terminal feedback group performed significantly better as measured by execution time, checklist, and global rating scores. The concurrent feedback group's performance decreased significantly on the transfer test as compared with the posttest and retention test. Not all feedback conditions seem equally effective. The use of terminal feedback resulted in better learning as demonstrated by superior performance during transfer.
Article
Full-text available
Peer observation of teaching (PoT) is most commonly done as a way of evaluating educators in lecture or small group teaching. Teaching in the clinical environment is a complex and hectic endeavor that requires nimble and innovative teaching on a daily basis. Most junior faculty start their careers with little formal training in education and with limited opportunity to be observed or to observe more experienced faculty. Formal PoT would potentially ameliorate these challenges. This article describes a collaborative peer observation process that a group of 11 clinician educators is using as a longitudinal faculty development program. Results: The process described in this article provides detailed and specific teaching feedback for the observed teaching attending while prompting the observing faculty to reflect on their own teaching style and to borrow effective teaching techniques from the observation. This article provides detailed examples from written feedback obtained during collaborative peer observation to emphasize the richness of this combined experience.
Article
Full-text available
Review period: 1969 to 2003 (34 years). Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent, and varies widely in methodological rigor and substantive focus. Review and synthesize existing evidence in educational science that addresses the question, 'What are the features and uses of high-fidelity medical simulations that lead to most effective learning?'. The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the 'grey literature' were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality. Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention. Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol. Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis. HEADLINE RESULTS: Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak.
The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following: providing feedback--51 (47%) journal articles reported that educational feedback is the most important feature of simulation-based medical education; repetitive practice--43 (39%) journal articles identified repetitive practice as a key feature involving the use of high-fidelity simulations in medical education; curriculum integration--27 (25%) journal articles cited integration of simulation-based exercises into the standard medical school or postgraduate educational curriculum as an essential feature of their effective use; range of difficulty level--15 (14%) journal articles address the importance of the range of task difficulty level as an important variable in simulation-based medical education; multiple learning strategies--11 (10%) journal articles identified the adaptability of high-fidelity simulations to multiple learning strategies as an important factor in their educational effectiveness; capture clinical variation--11 (10%) journal articles cited simulators that capture a wide variety of clinical conditions as more useful than those with a narrow range; controlled environment--10 (9%) journal articles emphasized the importance of using high-fidelity simulations in a controlled environment where learners can make, detect and correct errors without adverse consequences; individualized learning--10 (9%) journal articles highlighted the importance of having reproducible, standardized educational experiences where learners are active participants, not passive bystanders; defined outcomes--seven (6%) journal articles cited the importance of having clearly stated goals with tangible outcome measures that will more likely lead to learners mastering skills; simulator validity--four (3%) journal articles provided evidence for the direct correlation of simulation validity with effective learning. 
While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.
Article
Full-text available
Recent literature describes "cognitive dispositions to respond" (CDRs) that may lead physicians to err in their clinical reasoning. To assess learner perception of high-fidelity mannequin-based simulation and debriefing to improve understanding of CDRs. Emergency medicine (EM) residents were exposed to two simulations designed to bring out the CDR concept known as "vertical line failure." Residents were then block-randomized to a technical/knowledge debriefing covering the medical subject matter or a CDR debriefing covering vertical line failure. They then completed a written survey and were interviewed by an ethnographer. Four investigators blinded to group assignment reviewed the interview transcripts and coded the comments. The comments were qualitatively analyzed and those upon which three out of four raters agreed were quantified. A random sample of 84 comments was assessed for interrater reliability using a kappa statistic. Sixty-two residents from two EM residencies participated. Survey results were compared by technical (group A, n = 32) or cognitive (group B, n = 30) debriefing. There were 255 group A and 176 group B comments quantified. The kappa statistic for coding the interview comments was 0.42. The CDR debriefing group made more, and qualitatively richer, comments regarding CDR concepts. The technical debriefing group made more comments on the medical subjects of cases. Both groups showed an appreciation for the risk of diagnostic error. Survey data indicate that technical debriefing was better received than cognitive debriefing. The authors theorize that an understanding of CDRs can be facilitated through simulation training based on the analysis of interview comments.
Article
In medical education programmes which rely on clinical teachers spread across diverse sites, the application of peer observation of teaching offers the potential of both supporting teachers and maintaining quality. This paper reports on a questionnaire survey carried out with general practitioner (GP) teachers of medical undergraduate students from King's College London School of Medicine at Guy's, King's College and St Thomas' Hospitals. The aim of the study was to determine GP teachers' views on a proposed programme of peer observation of their teaching. The majority of GP teachers identified benefits of the proposed scheme, with 69% saying it would help improve the education of future doctors. However, despite seeing the benefits, less than half wished to take part in the programme. Two thirds cited time and paperwork as major disincentives to taking part, and 62% said that they felt it would make them feel under scrutiny. No associations were found between measures of workload and willingness to take part. This suggests that a fundamental fear of scrutiny and criticism may be the main hurdle to be overcome in implementing the scheme. Imposing peer observation on GP teachers in the form proposed could create suspicion and distance between the university department and practice-based GP teachers and may even result in a loss of teachers. The introduction of peer observation is more likely to be successful if GPs' apprehensions are addressed. Using peer observation to strengthen the process of quality assurance may undermine its role in the support and development of clinical teachers.
Article
Preparing healthcare professionals for teaching is regarded as essential to enhancing teaching effectiveness. Although many reports describe various faculty development interventions, there is a paucity of research demonstrating their effectiveness. To synthesize the existing evidence that addresses the question: "What are the effects of faculty development interventions on the knowledge, attitudes and skills of teachers in medical education, and on the institutions in which they work?" The search, covering the period 1980-2002, included three databases (Medline, ERIC and EMBASE) and used the keywords: staff development; in-service training; medical faculty; faculty training/development; continuing medical education. Manual searches were also conducted. Articles with a focus on faculty development to improve teaching effectiveness, targeting basic and clinical scientists, were reviewed. All study designs that included outcome data beyond participant satisfaction were accepted. From an initial 2777 abstracts, 53 papers met the review criteria. Data were extracted by six coders, using the standardized BEME coding sheet, adapted for our use. Two reviewers coded each study and coding differences were resolved through discussion. Data were synthesized using Kirkpatrick's four levels of educational outcomes. Findings were grouped by type of intervention and described according to levels of outcome. In addition, 8 high-quality studies were analysed in a 'focused picture'. The majority of the interventions targeted practicing clinicians. All of the reports focused on teaching improvement and the interventions included workshops, seminar series, short courses, longitudinal programs and 'other interventions'. The study designs included 6 randomized controlled trials and 47 quasi-experimental studies, of which 31 used a pre-test-post-test design. 
Despite methodological limitations, the faculty development literature tends to support the following outcomes: Overall satisfaction with faculty development programs was high. Participants consistently found programs acceptable, useful and relevant to their objectives. Participants reported positive changes in attitudes toward faculty development and teaching. Participants reported increased knowledge of educational principles and gains in teaching skills. Where formal tests of knowledge were used, significant gains were shown. Changes in teaching behavior were consistently reported by participants and were also detected by students. Changes in organizational practice and student learning were not frequently investigated. However, reported changes included greater educational involvement and establishment of collegiate networks. Key features of effective faculty development contributing to effectiveness included the use of experiential learning, provision of feedback, effective peer and colleague relationships, well-designed interventions following principles of teaching and learning, and the use of a diversity of educational methods within single interventions. Methodological issues: More rigorous designs and a greater use of qualitative and mixed methods are needed to capture the complexity of the interventions. Newer methods of performance-based assessment, utilizing diverse data sources, should be explored, and reliable and valid outcome measures should be developed. The maintenance of change over time should also be considered, as should process-oriented studies comparing different faculty development strategies. Faculty development activities appear highly valued by participants, who also report changes in learning and behavior. Notwithstanding the methodological limitations in the literature, certain program characteristics appear to be consistently associated with effectiveness. 
Further research to explore these associations and document outcomes, at the individual and organizational level, is required.
Article
Four groups of faculty development practices are identified based on their use among 756 colleges and universities: traditional practices, instructional assistance practices, practices that emphasize the assessment of faculty, and practices that involve many faculty in improvement activities. Practices judged effective, as well as the organization and funding of programs, are also discussed.
Article
Statement: We describe an integrated conceptual framework for a blended approach to debriefing called PEARLS [Promoting Excellence And Reflective Learning in Simulation]. We provide a rationale for scripted debriefing and introduce a PEARLS debriefing tool designed to facilitate implementation of the new framework. The PEARLS framework integrates 3 common educational strategies used during debriefing, namely, (1) learner self-assessment, (2) facilitating focused discussion, and (3) providing information in the form of directive feedback and/or teaching. The PEARLS debriefing tool incorporates scripted language to guide the debriefing, depending on the strategy chosen. The PEARLS framework and debriefing script fill a need for many health care educators learning to facilitate debriefings in simulation-based education. PEARLS offers a structured framework adaptable for debriefing simulations with a variety of goals, including clinical decision making, improving technical skills, teamwork training, and interprofessional collaboration.
Book
The fully updated Crisis Management in Anesthesiology continues to provide updated insights on the latest theories, principles, and practices in anesthesiology. From anesthesiologists and nurse anesthetists to emergency physicians and residents, this medical reference book will effectively prepare you to handle any critical incident during anesthesia. Identify and respond to a broad range of life-threatening situations with the updated Catalog of Critical Incidents, which outlines what may happen during surgery and details the steps necessary to respond to and resolve the crisis. React quickly to a range of potential threats with an added emphasis on simulation of managing critical incidents. Focus on the core knowledge that serves as a useful review. Explore new topics in the ever-expanding anesthesia practice environment with a detailed chapter on debriefing.
Article
Objectives: Debriefing is a common feature of technology-enhanced simulation (TES) education. However, evidence for its effectiveness remains unclear. We sought to characterise how debriefing is reported in the TES literature, identify debriefing features that are associated with improved outcomes, and evaluate the effectiveness of debriefing when combined with TES. Methods: We systematically searched databases, including MEDLINE, EMBASE and Scopus, and reviewed previous bibliographies for original comparative studies investigating the use of TES with debriefing in training health care providers. Reviewers, in duplicate, evaluated study quality and abstracted information on instructional design, debriefing and outcomes. Effect sizes (ES) were pooled using random-effects meta-analysis. Results: From 10 903 potentially eligible studies, we identified 177 studies (11 511 learners) that employed debriefing as part of TES. Key characteristics of debriefing (e.g. duration, educator presence and characteristics, content, structure/method, timing, use of video) were usually incompletely reported. A meta-analysis of four studies demonstrated that video-assisted debriefing has negligible and non-significant effects for time skills (ES = 0.10) compared with non-video-assisted debriefing. Meta-analysis demonstrated non-significant effects in favour of expert modelling with short debriefing in comparison with long debriefing (ES range = 0.21-0.74). Among studies comparing terminal with concurrent debriefing, results were variable depending on outcome measures and the context of training (e.g. medical resuscitation versus technical skills). Eight additional studies revealed insight into the roles of other debriefing-related factors (e.g. multimedia debriefing, learner-led debriefing, debriefing duration, content of debriefing). Among studies that compared simulation plus debriefing with no intervention, pooled ESs were favourable for all outcomes (ES range = 0.28-2.16). 
Conclusions: Limited evidence suggests that video-assisted debriefing yields outcomes similar to those of non-video-assisted debriefing. Other debriefing design features show mixed or non-significant results. As debriefing characteristics are usually incompletely reported, future debriefing research should describe all the key debriefing characteristics along with their associated descriptors.
Article
Importance: Resuscitation training programs use simulation and debriefing as an educational modality with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings. Objective: To determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests. Design: Prospective, randomized, factorial study design. Setting: The study was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing. Participants: We randomized 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups. Intervention: Participants were randomized to 1 of 4 arms: permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators. Main Outcomes and Measures: Percentage difference (0%-100%) in multiple choice question (MCQ) test (individual scores), Behavioral Assessment Tool (BAT) (team leader performance), and Clinical Performance Tool (CPT) (team performance) scores, postintervention vs preintervention comparison (PPC). Results: There was no significant difference at baseline in nonscripted vs scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03).
Their improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]; P = .18). Level of physical realism of the simulator had no independent effect on these outcomes. Conclusions and Relevance: The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help to improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.
Summary statement: Postsimulation debriefing is a critical component of effective learning in simulation-based health care education. Numerous formats in which to conduct the debriefing have been proposed. In this report, we describe the adaptation of the US Army's After-Action Review (AAR) debriefing format for postsimulation debriefing in health care. The Army's AAR format is based on sound educational theory and has been used with great success in the US Army and civilian organizations for decades. Debriefing using the health care simulation AAR process requires planning, preparation, and follow-up. Conducting a postsimulation debriefing using the health care simulation AAR debriefing format includes 7 sequential steps as follows: (1) define the rules of the debriefing, (2) explain the learning objectives of the simulation, (3) benchmark performance, (4) review what was supposed to happen during the simulation, (5) identify what actually happened, (6) examine why events occurred the way they did, and (7) formalize learning by reviewing with the group what went well, what did not go well, and what they would do differently if faced with a similar situation in real life. We feel that the use of the health care simulation AAR debriefing format provides a structured and supported method to conduct an effective postsimulation debriefing, with a focus on the learning objectives and reliance on preidentified performance standards.
Simulation instructors often feel caught in a task-versus-relationship dilemma. They must offer clear feedback on learners' task performance without damaging their relationship with those learners, especially in formative simulation settings. Mastering the skills to resolve this dilemma is crucial for simulation faculty development. We conducted a case study of a debriefer stuck in this task-versus-relationship dilemma. Data: The "2-column case" captures debriefing dialogue and the instructor's thoughts and feelings, or the "subjective experience." Analysis: The "learning pathways grid" guides a peer group of faculty in a step-by-step, retrospective analysis of the debriefing. The method uses vivid language to highlight the debriefer's dilemmas and how to surmount them. The instructor's initial approach to managing the task-versus-relationship dilemma included (1) assuming that honest critiques will damage learners, (2) using vague descriptions of learner actions paired with guess-what-I-am-thinking questions, and (3) creating a context she worried would leave learners feeling neither safe nor clear about how they could improve. This case study analysis identified things the instructor could do to be more effective, including (1) making generous inferences about the learners' qualities, (2) normalizing the challenges posed by the simulation, and (3) assuming there are different understandings of what it means to be a team. There are key assumptions and ways of interacting that help instructors resolve the task-versus-relationship dilemma. The instructor can then provide honest feedback in a rigorous yet empathic way to help sustain good performance or improve suboptimal performance in the future.
Article
The literature on faculty development activities for family medicine educators is reviewed and presented in three sections: (a) overview of faculty development in family medicine, (b) fellowships, and (c) workshops and other short‐term programs. Although a wide variety of formats have been utilized for faculty development purposes in family medicine, most faculty development programs have been evaluated primarily on participant satisfaction rather than on observed changes in participant behavior when in the faculty role. As funds for family medicine faculty development programs disappear or decline, new strategies for maintaining the quantity and quality of family medicine educators must be developed. We suggest an agenda for future faculty development activities in family medicine. These activities include (a) initiation of faculty development training, including teaching and research skills, during residency training programs, (b) development of fourth‐year clinical fellow positions supported by clinical revenue, (c) development of 2‐year research fellowships supported by clinical revenue and research grants, and (d) more rigorous evaluation of faculty development fellowships and workshops based on outcome measures rather than strictly on participant satisfaction.
Article
Peer observation of teaching is currently receiving widespread interest, and is being implemented in numerous institutions in all sectors of education. There appears to be, however, some uncertainty or controversy over the rationale for this and the role of the observer. In the light of this, we should be very clear about exactly what our objectives are for the implementation of peer observation, and the best way to achieve these, before espousing a potentially divisive and detrimental procedure. The purpose of this paper, therefore, is to examine some popular models of peer observation and their potential drawbacks, and to put the case for the implementation of a more active and reflective model, where the focus is less on the observed and more on the active self‐development of the observer.
Article
Purpose: To (1) provide a detailed account of the nature and scope of faculty development (FD) programs in medical education, (2) assess the quality of FD studies, and (3) identify in what areas and through what means future research can purposefully build on existing knowledge. Method: The authors searched MEDLINE, CINAHL, and ERIC for articles reporting evaluations of FD initiatives published between 1989 and 2010. They applied standard systematic review procedures for sifting abstracts, scrutinizing full texts, and abstracting data, including program characteristics, evaluation methods, and outcomes. They used a modified Kirkpatrick model to guide their data abstraction. Results: The authors included 22 articles reporting on 21 studies in their review. The most common program characteristics included a series/longitudinal format, intended for individuals, and offered to physicians only. Although the most common aim was to improve teaching effectiveness, several programs had multiple aims, including scholarship and leadership. Program evaluation focused on quantitative approaches. A number of studies employed longitudinal designs and included some follow-up component. Surveys were the most popular data collection method, participants the most common data source, and self-reported behavior changes the most commonly reported outcome. Conclusions: Although the authors' findings showed some recent expansion in the scope of the FD literature, they also highlighted areas that require further focus and growth. Future research should employ more rigorous evaluation methods, explore the role of interprofessional teams and communities of practice in the workplace, and address how different organizational and contextual factors shape the success of FD programs.
Article
BACKGROUND: Improving patient safety by training teams to successfully manage emergencies is a major concern in healthcare. Most current trainings use simulation of emergency situations to practice and reflect on relevant clinical and behavioural skills. We developed TeamGAINS, a hybrid, structured debriefing tool for simulation-based team trainings in healthcare that integrates three different debriefing approaches: guided team self-correction, advocacy-inquiry and systemic-constructivist techniques. METHODS: TeamGAINS was administered during simulation-based trainings for clinical and behavioural skills for anaesthesia staff. One of the four daily scenarios involved all trainees, whereas the remaining three scenarios each involved only two trainees with the others observing them. Training instructors were senior anaesthesiologists and psychologists. To determine debriefing quality, we used a post-test-only (debriefing quality) and a pre-post-test (psychological safety, leader inclusiveness), no-control-group design. After each debriefing all trainees completed a self-report debriefing quality scale which we developed based on the Debriefing Assessment for Simulation in Healthcare and the Observational Structured Assessment of Debriefing. Perceived psychological safety and leader inclusiveness were measured before trainees' first (premeasure) and after their last debriefing (postmeasure) at which time trainees' reactions to the overall training were measured as well. RESULTS: Four senior anaesthetists, 29 residents and 28 nurses participated in a total of 40 debriefings resulting in 235 evaluations. Utility of debriefings was evaluated as highly positive. Pre-post comparisons revealed that psychological safety and leader inclusiveness significantly increased after the debriefings. CONCLUSIONS: The results indicate that TeamGAINS could provide a useful debriefing tool for training anaesthesia staff on all levels of work experience. 
By combining state-of-the-art debriefing methods and integrating systemic-constructivist techniques, TeamGAINS has the potential to allow for a surfacing, reflecting on and changing of the dynamics of team interactions. Further research is necessary to systematically compare the effects of TeamGAINS' components on the debriefing itself and on trainees' changes in attitudes and behaviours.
Article
Objectives: To explore the current status of performance feedback (debriefing) in the operating room and to develop and evaluate an evidence-based, user-informed intervention termed "SHARP" to improve debriefing in surgery. Background: Effective debriefing is a key educational technique for optimizing learning in surgical settings. However, there is a lack of a debriefing culture within surgery. Few studies have prospectively evaluated educational interventions to improve the quality and quantity of performance feedback in surgery. Methods: This was a prospective pre- and post-study of 100 cases involving 22 trainers (attendings) and 30 surgical residents (postgraduate years 3-8). A trained researcher assessed the quality of debriefings provided to the trainee using the validated Objective Structured Assessment of Debriefing (OSAD) tool alongside ethnographic observation. Following the first 50 cases, an educational intervention termed "SHARP" was introduced and measures repeated for a further 50 cases. User satisfaction with SHARP was assessed via questionnaire. Twenty percent of the cases were observed independently by a second researcher to test interrater reliability. Results: Interrater reliability for OSAD was excellent (ICC = 0.994). Objective scores of debriefing (OSAD) improved significantly after the SHARP intervention: median pre = 19 (range, 8-31); median post = 33 (range, 26-40), P < 0.001. Strong correlations between observer (OSAD) and trainee rating of debriefing were obtained (median ρ = 0.566, P < 0.01). Ethnographic observations also supported a significant improvement in both quality and style of debriefings. Users reported high levels of satisfaction in terms of usefulness, feasibility, and comprehensiveness of the SHARP tool. Conclusions: SHARP is an effective and efficient means of improving performance feedback in the operating room. 
Its routine use should be promoted to optimize workplace-based learning and foster a positive culture of debriefing and performance improvement within surgery.
Article
Objective: To identify the features of effective debriefing and to use this to develop and validate a tool for assessing such debriefings. Introduction: Simulation-based training has become an accepted means of surgical skill acquisition. A key component of this is debriefing, yet there is a paucity of research to guide best practice. Methods: Phase 1: Identification of best practice and tool development. A search of the Medline, Embase, PsycINFO, and ERIC databases identified current evidence on debriefing. End-user input was obtained through 33 semistructured interviews conducted with surgeons (n = 18) and other operating room personnel (n = 15) from 3 continents (UK, USA, Australia) using standardized qualitative methodology. An expert panel (n = 7) combined the data to create the Objective Structured Assessment of Debriefing (OSAD) tool. Phase 2: Psychometric testing. OSAD was tested for feasibility, reliability, and validity by 2 independent assessors who rated 20 debriefings following high-fidelity simulations. Results: Phase 1: 28 reports on debriefing were retrieved from the literature. Key components of an effective debriefing identified from these reports and the 33 interviews included: approach to debriefing, learning environment, learner engagement, reaction, reflection, analysis, diagnosis of strengths and areas for improvement, and application to clinical practice. Phase 2: OSAD was feasible, reliable [inter-rater ICC (intraclass correlation coefficient) = 0.88, test-retest ICC = 0.90], and face and content valid (content validity index = 0.94). Conclusions: OSAD provides an evidence-based, end-user informed approach to debriefing in surgery. By quantifying the quality of a debriefing, OSAD has the potential to identify areas for improving practice and to optimize learning during simulation-based training.
Debriefing is a critical component of effective simulation-based medical education. The optimal format in which to conduct debriefing is unknown. The use of video review has been promoted as a means of enhancing debriefing, and video-assisted debriefing is widely used in simulation training. Few empirical studies have evaluated the impact of video-assisted debriefing, and the results of those studies have been mixed. The objective of this study was to compare the effectiveness of video-assisted debriefing to oral debriefing alone at improving performance in neonatal resuscitation. Thirty residents, divided into 15 teams of 2 members each, participated in the study. Each team completed a series of 3 neonatal resuscitation simulations. Each simulation was followed by a facilitated debriefing. Teams were randomly assigned to receive either oral debriefing alone or video-assisted debriefing after each simulation. Objective measures of performance and times to complete critical tasks in resuscitation were evaluated by blinded video review on the first (pretest) and the third (posttest) simulations using a previously validated tool. Overall neonatal resuscitation performance scores improved in both groups [mean (SD), 83% (14%) for oral pretest vs. 91% (7%) for oral posttest (P = 0.005); 81% (16%) for video pretest vs. 93% (10%) for video posttest (P < 0.001)]. There was no difference in performance scores between the 2 groups on either the pretest or posttest [overall posttest scores, 91.3% for oral vs. 93.4% for video (P = 0.59)]. Times to complete the critical tasks of resuscitation also did not differ significantly between the 2 study groups. The educational effect of the video-assisted debriefing versus oral debriefing alone was small (d = 0.08). Using this study design, we failed to show a significant educational benefit of video-assisted debriefing. 
Although our results suggest that the use of video-assisted debriefing may not offer significant advantage over oral debriefing alone, exactly why this is the case remains obscure. Further research is needed to define the optimal role of video review during simulation debriefing in neonatal resuscitation.
Article
OBJECTIVE: To describe the history of the Pediatric Advanced Life Support course and outline the new developments in instructor training that will impact the way debriefing is conducted during Pediatric Advanced Life Support courses. OUTLINE: The Pediatric Advanced Life Support course, first released by the American Heart Association in 1988, has seen substantial growth and change over the past few decades. Over that time, Pediatric Advanced Life Support has become the standard for resuscitation training for pediatric healthcare providers in North America. The incorporation of high-fidelity simulation-based learning into the most recent version of Pediatric Advanced Life Support has helped to enhance the realism of scenarios and cases, but has also placed more emphasis on the importance of postscenario debriefing. We developed two new resources: an online debriefing module designed to introduce a new model of debriefing and a debriefing tool for real-time use during Pediatric Advanced Life Support courses, to enhance and standardize the quality of debriefing by Pediatric Advanced Life Support instructors. In this article, we review the history of Pediatric Advanced Life Support and Pediatric Advanced Life Support instructor training and discuss the development and implementation of the new debriefing module and debriefing tool for Pediatric Advanced Life Support instructors. CONCLUSION: The incorporation of the debriefing module and debriefing tool into the 2011 Pediatric Advanced Life Support instructor materials will help both new and existing Pediatric Advanced Life Support instructors develop and enhance their debriefing skills with the intention of improving the acquisition of knowledge and skills for Pediatric Advanced Life Support students.
Article
Debriefing is critical to learning from simulation experiences, yet the literature reports little research describing best practices within nursing. Debriefing for Meaningful Learning (DML) is a systematic process for debriefing in which teachers and students explicate different aspects of reflection and generate new meanings from simulation experiences. The purpose of this exploratory, quasi-experimental, pretest-posttest study was to test the relationship of DML on the development of clinical reasoning skills in prelicensure nursing students when compared with customary debriefing strategies and on students' perception of quality of the debriefing experience. Analysis of data demonstrated a greater change in clinical reasoning skills and identification of higher-quality debriefing and a positive correlation between clinical reasoning and perception of quality. Findings demonstrate that DML is an effective debriefing method. It contributes to the body of knowledge supporting the use of debriefing in simulation learning and supports the development of best teaching practices.
Debriefing is a process involving the active participation of learners, guided by a facilitator or instructor whose primary goal is to identify and close gaps in knowledge and skills. A review of existing research and a process for identifying future opportunities was undertaken. A selective critical review of the literature on debriefing in simulation-based education was done. An iterative process of analysis, gathering input from audience participants, and consensus-based synthesis was conducted. Research is sparse and limited in presentation for all important topic areas where debriefing is a primary variable. The importance of a format for reporting data on debriefing in a research context was recognized, and a "who, when, where, what, why" approach was proposed. Also, a graphical representation of the characteristics of debriefing studies was developed (Sim-PICO) to help guide simulation researchers in appropriate experimental design and reporting. A few areas of debriefing practice with obvious gaps that deserve study were identified, such as comparing debriefing techniques, comparing trained versus untrained debriefers, and comparing the effect of different debriefing venues and times. A model for publication of research data was developed and presented which should help researchers clarify methodology in future work.
A key portion of medical simulation is self-reflection and instruction during a debriefing session; however, there have been surprisingly few direct comparisons of the various approaches. The objective of this study was to compare two styles of managing a simulation session: postsimulation debriefing versus in-simulation debriefing. One hundred sixty-one students were randomly assigned to receive either postsimulation debriefing or in-simulation debriefing. Retrospective pre-post assessment was made through a survey using Likert-scale questions assessing students' self-reported confidence and knowledge as they relate to medical resuscitation, along with statements related to the simulation itself. There were statistically significant differences in self-reported results between the two groups for effectiveness of the debriefing style, debriefing leading to effective learning, and the debriefing helping them to understand correct and incorrect actions, with the postsimulation debriefing group ranking all of these measures higher. Both groups showed significantly higher posttest scores compared with their pretest scores for individual and overall measures. Students felt that a simulation experience followed by a debriefing session helped them learn more effectively, better understand correct and incorrect actions, and was overall more effective compared with debriefing that occurred in-simulation. Students did not feel that interruptions during a simulation significantly altered its realism.
Background: Effective feedback may be defined as feedback in which information about previous performance is used to promote positive and desirable development. This can be challenging, as educators must acknowledge the psychosocial needs of the recipient while ensuring that feedback is both honest and accurate. Current feedback models remain reductionist in their approach. They are embedded in the hierarchical, diagnostic endeavours of the health professions. Even when it acknowledges the importance of two-way interactions, feedback often remains an educator-driven, one-way process. Lessons from the literature: An understanding of the various types of feedback and an ability to actively seek an appropriate approach may support feedback effectiveness. Facilitative rather than directive feedback enhances learning for high achievers. High-achieving recipients undertaking complex tasks may benefit from delayed feedback; it is hypothesised that such learners are supported by reducing interruptions during the task. If we accept that medical students and doctors are high achievers, we can draw on some guiding principles from a complex and rarely conclusive literature. Feedback should focus on the task rather than the individual and should be specific. It should be directly linked to personal goals. Self-assessment as a means to identify personal learning requirements has no theoretical basis. Motivated recipients benefit from challenging facilitated feedback from external sources. A new model: To achieve truly effective feedback, the health professions must nurture recipient reflection-in-action, which builds on self-monitoring informed by external feedback. An integrated approach must be developed to support a feedback culture. Early training and experience, such as peer feedback, may over time support the required cultural change. Opportunities to provide feedback must not be missed, including those to impart potentially powerful feedback from high-stakes assessments.
Feedback must be conceptualised as a supported sequential process rather than a series of unrelated events. Only this sustained approach will maximise any effect.
Few studies compare instructor-modeled learning with modified debriefing to self-directed learning with facilitated debriefing during team-simulated clinical scenarios. The objective was to determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios (group A) has better outcomes than instructor-modeled learning with modified debriefing (group B). This study used a convenience sample of students. Four tools assessed pre/post knowledge, satisfaction, technical skills, and team behaviors. Thirteen interdisciplinary student teams participated: seven in group A and six in group B. Each student team consisted of one nurse practitioner student, one registered nurse student, one social work student, and one respiratory therapy student. The Knowledge Assessment Tool was analyzed by student profession; there were no statistically significant differences within any profession group. Group B was significantly more satisfied than group A (P = 0.01). Group B registered nurse and social work students were significantly more satisfied than their group A counterparts (30.0 +/- 0.50 vs. 26.2 +/- 3.0, P = 0.03 and 28.0 +/- 2.0 vs. 24.0 +/- 3.3, P = 0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool, and group B intervened more quickly. Group B also had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and on overall team scores. The data suggest that instructor-modeled learning with modified debriefing is more effective than self-directed learning with facilitated debriefing during team-simulated clinical scenarios.