To compare the psychometric performance of two rating instruments used to assess trainee performance in three clinical scenarios.
This study was part of a two-phase, randomized trial with a wait-list control condition assessing the effectiveness of a pediatric emergency medicine curriculum targeting general emergency medicine residents. Residents received 6 hours of instruction either before or after the first assessment. Separate pairs of raters completed either a dichotomous checklist for each of three cases or the Global Performance Assessment Tool (GPAT), an anchored multidimensional scale. A fully crossed person×rater×case generalizability study was conducted. The effect of training year on performance was assessed using multivariate analysis of variance.
The person and person×case components accounted for most of the score variance for both instruments. In the multivariate analysis of variance, scores on both instruments showed a small but statistically significant increase with training level. The inter-rater reliability coefficient exceeded 0.9 for both instruments.
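As an illustrative aside, an inter-rater reliability coefficient of the kind reported above can be estimated with a two-way random-effects intraclass correlation, ICC(2,1), computed from an n-subjects × k-raters score matrix. This is a minimal sketch with hypothetical data; the abstract does not specify which estimator the study actually used, and the function name below is an assumption.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects ICC(2,1) for an n-subjects x k-raters matrix.

    Illustrative sketch only; ratings is a 2-D array where each row is a
    subject and each column is a rater.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Partition the total sum of squares into subject, rater, and error parts.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares for subjects (rows), raters (columns), and residual error.
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout-Fleiss ICC(2,1): single-rater, absolute-agreement form.
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two raters who agree up to a constant offset
# still yield a high coefficient, since subject variance dominates.
scores = [[9, 10], [5, 6], [7, 8], [3, 4]]
print(round(icc_2_1(scores), 3))
```

Treating raters as random effects, as ICC(2,1) does, matches the fully crossed generalizability design described above, where raters are sampled rather than fixed.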
We demonstrate that our checklist and anchored global rating instrument performed in a psychometrically similar fashion with high reliability. As long as proper attention is given to instrument design, testing, and rater training, checklists and anchored assessment scales can produce reproducible data for a given population of subjects. The validity of the data arising from either instrument type must be assessed rigorously and with a focus, when practicable, on patient care outcomes.
ABSTRACT: The objective was to critically appraise and highlight medical education research studies published in 2010 that were methodologically superior and whose outcomes were pertinent to teaching and education in emergency medicine (EM).
A search of the English language literature in 2010 querying PubMed, Scopus, Education Resources Information Center (ERIC), and PsycINFO identified 41 EM studies that used hypothesis-testing or observational investigations of educational interventions. Five reviewers independently ranked all publications based on 10 criteria chosen a priori, including four related to methodology, to standardize evaluation across reviewers. This method was used previously to appraise medical education research published in 2008 and 2009.
Five medical education research studies met the a priori criteria for inclusion and are reviewed and summarized here. Comparing the literature of 2010 with that of 2008 and 2009, the number of published educational research papers increased from 30 (2008) to 36 (2009) and then to 41 (2010). The number of funded studies remained fairly stable over the past 3 years at 13 (2008), 16 (2009), and 9 (2010). As in past years, research involving the use of technology accounted for a substantial share of publications (34%), including three of the five highlighted studies.
Forty-one EM educational studies published in 2010 were identified. This critical appraisal reviews and highlights five studies that met a priori quality indicators. Current trends and common methodologic pitfalls in the 2010 papers are noted.
Academic Emergency Medicine 10/2011; 18(10):1081-9. DOI:10.1111/j.1553-2712.2011.01191.x · 2.01 Impact Factor
ABSTRACT: High-fidelity simulation (HFS) is increasingly utilized in emergency medicine education. As simulation technology matures, institutions will be investing significant resources to develop simulation centers at their own hospitals and universities. A well-defined philosophy and approach are essential in the initial set-up of a simulation center. We examined multiple factors that we felt were responsible for our simulation program's success at NorthShore University HealthSystem: the physical plant, the simulation equipment, the curriculum, and our teaching methodology. The academic exercises and the technological features of the machines were only meaningful when the most basic requirement was met: making people want to participate again. Engaging the learner to want to return has truly been the success of our program. This is the critical launching point for any group interested in developing a simulation lab, as it will drive volume, generate financial support, and foster academic production. We provide a brief review of high-fidelity simulation and the key elements that led to the growth and operational success of our center.
ABSTRACT: Training by means of advanced simulation can improve paediatricians' abilities in the management of paediatric trauma patients, as well as decreasing errors and increasing patient safety. The initial management is an essential factor in the outcome of an injured child.
A trauma patient scenario was included in a national simulation training program. The performances of 156 paediatric primary care providers, divided into 39 teams, who participated in the courses carried out from May 2008 until February 2010 were retrospectively analysed. The evaluation of the scenario was based both on the primary survey suggested by the Working Group on Trauma of the SECIP and on the eight main targets of a simulation evaluation tool from the Cincinnati Children's Hospital trauma care program.
A pulse oximeter was placed, intravenous/intraosseous access was indicated, blood pressure was checked, and oxygen was applied in 100% of the scenarios. An intravenous fluid bolus was indicated in 87% of the scenarios. The Glasgow Coma Scale was assessed in 5.1%, and appropriate warming measures were taken in 25.6%. Bilateral cervical immobilisation was performed incorrectly in 35 of the 39 scenarios (89.7%). The primary survey (ABCDE) was checked correctly in only one scenario. On the Cincinnati Children's Hospital tool, with a maximum score of 16, the teams' mean score was 5.3 ± 1.8.
Primary care paediatricians have difficulty applying the primary ABCDE trauma care sequence and cervical spine precautions in a trauma simulation scenario. Educational programs for paediatricians should strengthen practical training in the initial approach to trauma management.
Anales de Pediatría 03/2012; 77(3):203-7. · 0.83 Impact Factor