Article

Predictors of a Top Performer During Emergency Medicine Residency


Abstract

Background: Emergency Medicine (EM) residency program directors and faculty spend significant time and effort creating a residency rank list. To date, however, there have been few studies to assist program directors in determining which pre-residency variables best predict performance during EM residency. Objective: To evaluate which pre-residency variables best correlated with an applicant's performance during residency. Methods: This was a retrospective multicenter sample of all residents in the three most recent graduating classes from nine participating EM residency programs. The outcome measure of top residency performance was defined as placement in the top third of a resident's graduating class based on performance on the final semi-annual evaluation. Results: A total of 277 residents from nine institutions were evaluated. Eight of the predictors analyzed had a significant correlation with the outcome of resident performance. Applicants' grades during home and away EM rotations, designation as Alpha Omega Alpha (AOA), U.S. Medical Licensing Examination (USMLE) Step 1 score, interview scores, "global rating" and "competitiveness" on the nonprogram leadership standardized letter of recommendation (SLOR), and having five or more publications or presentations showed a significant association with residency performance. Conclusions: We identified several predictors of top performers in EM residency: an honors grade for an EM rotation, USMLE Step 1 score, AOA designation, interview score, high SLOR rankings from nonprogram leadership, and completion of five or more presentations and publications. EM program directors may consider utilizing these variables during the match process to choose applicants who have the highest chance of top performance during residency. Copyright © 2015 Elsevier Inc. All rights reserved.


... It is difficult to gauge applicants' aptitude, attitude and work ethos without direct observation of their work experience. Medical school test results are not always reliable indicators of future success in EM. (3) By denying medical student applicants and restricting resident recruitment to older medical officers who have rotated to the EM department, they may risk losing top performing graduating students to other training specialties. For graduating medical students, choosing and applying for a residency can often result in considerable anxiety, as application for a residency programme is competitive and unpredictable, and requires a significant investment of time and commitment. ...
... As such, careful selection of EM applicants is important to the future development of EM in Singapore. (3,5) Our study team aimed to determine the interest levels and motivating factors for pursuing EM as a career among medical students in Singapore. ...
... EM is a specialty that comes with unique challenges and rewards for practitioners in the field. (3,6,7,15) EM specialists are expected to be quick at formulating accurate diagnoses with little information at hand, possess excellent dexterity for performing a wide range of procedures, and be comfortable with dealing with trauma and deaths on a daily basis. Hence, it comes as no surprise that internal factors such as 'personality fit' turned out to be more important than external factors, such as salary and working hours, in influencing one's decision to pursue EM. ...
Article
Full-text available
Introduction: The introduction of the residency programme in Singapore allows medical students to apply for residency in their graduating year. Our study aimed to determine the interest levels and motivating factors for pursuing emergency medicine (EM) as a career among medical students in Singapore. Methods: A self-administered questionnaire was distributed to Year 1-5 medical students in 2012. Participants indicated their interest in pursuing EM as a career and the degree to which a series of variables influenced their choices. Influencing factors were analysed using multinomial logistic regression. Results: A total of 800 completed questionnaires were collected. 21.0% of the participants expressed interest in pursuing EM. Perceived personality fit and having done an elective in EM were strongly positive influencing factors. Junior medical students were more likely to cite the wide diversity of medical conditions and the lack of a long-term doctor-patient relationship to be negative factors, while senior medical students were more likely to cite personality fit and perceived prestige of EM as negative factors. Conclusion: Careful selection of EM applicants is important to the future development of EM in Singapore. Our study showed that personality fit might be the most important influencing factor in choosing EM as a career. Therefore, greater effort should be made to help medical students explore their interest in and suitability for a particular specialty. These include giving medical students earlier exposure to EM, encouraging participation in student interest groups and using appropriate personality tests for career guidance.
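
For readers interested in how an analysis like the one above is typically set up, the following is a minimal, illustrative sketch of a multinomial logistic regression on survey responses using Python's statsmodels. The file name and column names (em_survey.csv, interest_in_em, personality_fit, and so on) are hypothetical placeholders, not the study's actual data or code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("em_survey.csv")

# Outcome: interest in EM coded as 0 = not interested, 1 = unsure, 2 = interested.
y = df["interest_in_em"]

# Predictors: Likert-scaled influencing factors plus a seniority indicator.
X = df[["personality_fit", "did_em_elective", "salary", "working_hours", "senior_student"]]
X = sm.add_constant(X)

# Multinomial logistic regression of career interest on influencing factors.
model = sm.MNLogit(y, X).fit(disp=False)
print(model.summary())        # coefficients for each non-reference outcome category
print(np.exp(model.params))   # relative-risk ratios for easier interpretation
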
... A total of 434 papers and abstracts satisfied the search criteria, and 61 papers met the inclusion criteria. The authors scored these 61 manuscripts, and the 10 highest scoring quantitative 20,21,23,41,46,50-53,70 and two qualitative articles 18,60 are reviewed below, in alphabetical order by first author's last name. The range of reviewers' scores for the top articles was from 15 to 24, with a range of mean scores from 15.8 to 21.8. ...
... 18,60 There were 10 papers (16%) that employed survey methodology, but none met the criteria for a highlighted article. 36 17,19-29,31-35,38,40,43-45,48,49,51,53,55,58,59,62,66,68,70-73 and accounted for 50% of the highlighted articles. 20,21,23,51,53,70 It is interesting to note that each study design had representation in meeting the validated criteria to be a highlighted article, with the exception, this year, of survey methodology. ...
... 36 17,19-29,31-35,38,40,43-45,48,49,51,53,55,58,59,62,66,68,70-73 and accounted for 50% of the highlighted articles. 20,21,23,51,53,70 It is interesting to note that each study design had representation in meeting the validated criteria to be a highlighted article, with the exception, this year, of survey methodology. In our quest as a specialty to perform high-quality research in medical education, we believe that it is important to have rigorous methodology and high-level outcomes. ...
Article
Objective: The objectives were to critically appraise the medical education research literature of 2015 and review the highest-quality quantitative and qualitative examples. Methods: A total of 434 emergency medicine (EM)-related articles were discovered upon a search of ERIC, PsychINFO, PubMED, and SCOPUS. These were both quantitative and qualitative in nature. All were screened by two of the authors using previously published exclusion criteria, and the remaining were appraised by all authors using a previously published scoring system. The highest scoring articles were then reviewed. Results: Sixty-one manuscripts were scored, and 10 quantitative and two qualitative papers were the highest scoring and are reviewed and summarized in this article. Conclusions: This installment in this critical appraisal series reviews 12 of the highest-quality EM-related medical education research manuscripts published in 2015.
... Data were originally collected via a multi-institutional collaborative effort, details of which have been described previously. 23 Briefly, nine programs volunteered to provide performance data on all their residents graduating in the previous three years, along with pre-residency predictor data available at the time of resident matriculation into the program. Institutional review board approval was obtained at each participating program as required. ...
... The nine EM programs were mostly urban, academic centers with eight to 14 residents in each class and emergency department censuses ranging from 60,000 to 115,000 patients per year. 23 Data were available for 286 residents; eight did not have a USMLE Step 1 score, 60 did not have a USMLE Step 2 score, 21 did not have a written board status available, and 26 did not have oral board results. We included a total of 197 residents with complete data in the analysis. ...
... Election into the Alpha Omega Alpha honor society, grade assigned for EM rotation, USMLE Step 1 score, interview score, letters of recommendation, scholarly activity, quality of medical school, and distinctive talents have all demonstrated association with successful performance during residency or high placement on rank lists. 23-26 Many of these factors are used by EM program directors while selecting and ranking applicants. 6 To identify USMLE scores that would identify candidates with a high likelihood of passing the ABEM examinations on first attempt, the 2014 pass rates of 90% on Harmouche et al. ...
Article
Full-text available
Introduction: There are no existing data on whether performance on the USMLE predicts success in ABEM certification. The aim of this study was to determine the presence of any association between USMLE scores and first-time success on the ABEM Qualifying and Oral Certification Examinations. Methods: USMLE Step 1 and Step 2 CK scores and pass/fail results from the first attempt at the ABEM qualifying and oral examinations for residents graduating between 2009 and 2011 from 9 emergency medicine programs were retrospectively collected. A composite score was defined as the sum of the USMLE Step 1 and Step 2 CK scores. Results: The sample was composed of 197 residents. Median Step 1, Step 2 CK, and composite scores were 218 (interquartile range [IQR] 207-232), 228 (IQR 217-239), and 444 (IQR 427-468), respectively. First-time pass rates were 95% for the qualifying examination and 93% for both parts of the examination. Step 2 CK and composite scores were better predictors of achieving ABEM initial certification than the Step 1 score (area under the curve 0.800, 0.759, and 0.656, respectively). A Step 1 score of 227, a Step 2 CK score of 225, and a composite score of 444 each predicted a 95% chance of passing both boards. Conclusion: Higher USMLE Step 1, Step 2 CK, and composite scores are associated with better performance on the ABEM examinations, with Step 2 CK being the strongest predictor. Cutoff scores for USMLE Step 1, Step 2 CK, and the composite score were established to predict first-time success on ABEM initial certification.
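
The core of the analysis described above, comparing areas under the curve and deriving score cutoffs that predict a 95% chance of passing, can be sketched as follows. This is a hedged reconstruction under assumed data and column names (abem_cohort.csv, step1, step2ck, passed_both); it is not the authors' actual code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: USMLE scores and first-attempt ABEM pass/fail (0/1).
df = pd.read_csv("abem_cohort.csv")
df["composite"] = df["step1"] + df["step2ck"]

def auc_and_cutoff(score_col, target_prob=0.95):
    """Fit a logistic model of passing on one score; return its AUC and the
    score at which the model predicts `target_prob` chance of passing."""
    X = sm.add_constant(df[[score_col]])
    fit = sm.Logit(df["passed_both"], X).fit(disp=False)
    auc = roc_auc_score(df["passed_both"], fit.predict(X))
    b0, b1 = fit.params["const"], fit.params[score_col]
    cutoff = (np.log(target_prob / (1 - target_prob)) - b0) / b1  # invert the logit
    return auc, cutoff

for col in ["step1", "step2ck", "composite"]:
    auc, cutoff = auc_and_cutoff(col)
    print(f"{col}: AUC = {auc:.3f}, score for a 95% predicted pass chance = {cutoff:.0f}")
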
... 5 An Internal Medicine evaluation of applicant reported research also showed no correlation to clinical performance, 6 while an investigation into the selection process for surgical residents found that prior research experience and research publications had a negative correlation with clinical performance ratings during residency. 7 Research remains a central focus in residency applications, interviews and ranking conversations for many specialties despite specialty literature questioning its value. It is worth noting the distinction between self-reported applicant research and scholarship that occurs within the context of residency training; there is a positive correlation between participation in research during residency and clinical performance. ...
... The literature contains many studies demonstrating research as non-predictive for resident performance, and at least one study demonstrating a negative correlation between research experience and resident performance in General Surgery. 7 The cause or causes for the observed differences may include: applicants for both medical school and residency who are unsuccessful often obtain research experience to increase competitiveness; personal characteristics or personality types attracted to research differ from those that foster clinical excellence in Family Medicine; applicants with research history begin residency with less clinical exposure; or a number of other possibilities. Although reflecting on potential relationships between personal characteristics and subsequent readiness for residency, it is important to acknowledge we were unable to demonstrate causation through this study. ...
Article
Full-text available
BACKGROUND Program directors for Family Medicine residencies must navigate an increasingly complex recruitment landscape. With increasing United States allopathic and osteopathic graduates and continued high volumes of international graduates, the ability to identify application characteristics that predict quality residents both for filtering applications for interview offers and ranking is vital. Our study concentrates on the predictive value of reported life experiences including volunteerism, work experiences, prior career, research experience, and participation in medical student organizations including student leadership. METHODS Through a retrospective cohort study, we extracted the described life experiences from resident application materials. We then obtained initial clinical performance data on the Family Medicine inpatient service during the first six months of residency to determine readiness for residency. This analysis occurred in 2020 and included all matriculants in the graduating classes of 2013 through 2020 for a single residency. Of 110 matriculating residents, data were available for 97(88%). RESULTS Applicants with a history of a prior career demonstrated improved overall readiness for residency with competency domain-specific advantages in Interpersonal and Communication Skills and Systems-Based Practice. In contrast, applicants reporting participation in research performed below peers in all competency domains. Applicant reports on volunteerism, work experience, academic productivity and student involvement did not correlate with initial clinical performance. CONCLUSIONS Residency directors should recognize applicants with prior careers as likely having strong communications and systems-based practice skills. All other examined experiences should be evaluated within the context of broader applicant assessments including research experience which overall has a potential negative correlation to clinical readiness.
... This, they will have to do, amidst the time constraints and competing demands and priorities in the ED. Residents will try to find their own ways of coping and going through this. [14-16] Some do this very well, whilst others may struggle and may need more time. ...
... It can be difficult to provide negative feedback, but there are techniques to deliver this in which faculty can be trained. At times, the resident may not have insight into the problem, habit or working model that he/she has. [14-16] Here is where the faculty need to assist with very concise, [17]. ...
... Four studies have been published regarding the SLOE's relation to other variables. 2,24-26 The first study compared rankings on the SLOR (this study was undertaken before the instrument's name was changed to SLOE) to a ranking of residents' "final success" upon graduation, with "final success" defined by having the faculty rank each graduating resident against all previous residents at one institution. 24 The SLOR was not strongly correlated with this measure of success in residency. ...
... The authors found that the residents' "final ability" was correlated with the SLOE's global assessment as well as the SLOE's ranking of competitiveness. 26 In summary, there has been minimal study of relations to other variables, making it hard to draw conclusions in either direction. While the results from the 3 studies are mixed, the 2 most recent studies are trending in the correct direction for validity. ...
Article
Background The standardized letter of evaluation (SLOE) is the application component that program directors value most when evaluating candidates to interview and rank for emergency medicine (EM) residency. Given its successful implementation, other specialties, including otolaryngology, dermatology, and orthopedics, have adopted similar SLOEs of their own, and more specialties are considering creating one. Unfortunately, for such a significant assessment tool, no study to date has comprehensively examined the validity evidence for the EM SLOE. Objective We summarized the published evidence for validity for the EM SLOE using Messick's framework for validity evidence. Methods A scoping review of the validity evidence of the EM SLOE was performed in 2020. A scoping review was chosen to identify gaps and future directions, and because the heterogeneity of the literature makes a systematic review difficult. Included articles were assigned to an aspect of Messick's framework and determined to provide evidence for or against validity. Results There have been 22 articles published relating to validity evidence for the EM SLOE. There is evidence for content validity; however, there is a lack of evidence for internal structure, relation to other variables, and consequences. Additionally, the literature regarding response process demonstrates evidence against validity. Conclusions Overall, there is little published evidence in support of validity for the EM SLOE. Stakeholders need to consider changing the ranking system, improving standardization of clerkships, and further studying relation to other variables to improve validity. This will be important across GME as more specialties adopt a standardized letter.
... A higher rank list position among general surgery residents may determine scholarly productivity and pursuit of an academic career [7]. Accomplishments during early training, such as AOA membership, scholarly output, and class rank, have been shown to predict performance at later stages of training [8][9][10][11][12][13]. However, limited evidence specifically links internal medicine resident performance, rotation experience, and career intentions with subspecialty fellowship choice. ...
... These findings were surprising, given the competitive nature of CV fellowships. Literature also suggests that accomplishments during earlier periods of training predict subsequent performance at higher levels of training [7,[9][10][11][12][13][28][29][30]. However, prior studies have focused on performance during or after subsequent training rather than matriculation into specific fields or training programs. ...
Article
Full-text available
Background: The unique traits of residents who matriculate into subspecialty fellowships are poorly understood. We sought to identify characteristics of internal medicine (IM) residents who match into cardiovascular (CV) fellowships. Methods: We conducted a retrospective cohort study of 8 classes of IM residents who matriculated into residency from 2007 to 2014. The primary outcome was successful match to a CV fellowship within 1 year of completing IM residency. Independent variables included residents' licensing exam scores, research publications, medical school reputation, Alpha Omega Alpha (AOA) membership, declaration of intent to pursue CV in the residency application personal statement, clinical evaluation scores, mini-clinical evaluation exercise scores, in-training examination (ITE) performance, and exposure to CV during residency. Results: Of the 339 included residents (59% male; mean age 27) from 120 medical schools, 73 (22%) matched to CV fellowship. At the time of residency application, 104 (31%) had ≥1 publication, 38 (11%) declared intention to pursue CV in their residency application personal statement, and 104 (31%) were members of AOA. Prior to fellowship application, 111 (33%) completed a CV elective rotation. At the completion of residency training, 108 (32%) had ≥3 publications. In an adjusted logistic regression analysis, declaration of intention to pursue CV (OR 6.4, 99% CI 1.7-23.4; p < 0.001), completion of a CV elective (OR 7.3, 99% CI 2.8-19.0; p < 0.001), score on the CV portion of the PGY-2 ITE (OR 1.05, 99% CI 1.02-1.08; p < 0.001), and publication of ≥3 manuscripts (OR 4.7, 99% CI 1.1-20.5; p = 0.007) were positively associated with matching to a CV fellowship. Overall PGY-2 ITE score was negatively associated (OR 0.93, 99% CI 0.90-0.97; p < 0.001) with matching to a CV fellowship. Conclusions: Residents' matriculation into CV fellowships was associated with declaration of CV career intent, completion of a CV elective rotation, CV medical knowledge, and research publications during residency. These findings may be useful when advising residents about pursuing careers in CV. They may also help residents understand factors associated with a successful match to a CV fellowship. The negative association between matching into CV fellowship and overall ITE score may indicate excessive subspecialty focus during IM residency.
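
As an illustration of the adjusted logistic regression reported above (odds ratios with 99% confidence intervals), the sketch below shows one way such a model might be fit. The dataset im_residents.csv and its column names are assumptions made for the example, not the study's materials.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical resident-level data with a binary outcome: matched to CV fellowship.
df = pd.read_csv("im_residents.csv")

predictors = ["declared_cv_intent", "cv_elective", "pgy2_ite_cv_score",
              "pgy2_ite_overall", "three_plus_publications"]
X = sm.add_constant(df[predictors])
fit = sm.Logit(df["matched_cv_fellowship"], X).fit(disp=False)

# Exponentiate coefficients to get odds ratios and 99% confidence intervals.
odds_ratios = np.exp(fit.params)
ci_99 = np.exp(fit.conf_int(alpha=0.01))
table = pd.DataFrame({"OR": odds_ratios,
                      "99% CI low": ci_99[0],
                      "99% CI high": ci_99[1],
                      "p": fit.pvalues})
print(table.round(3))
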
... 11 Most studies show USMLE Step 1 largely predicts future test scores, such as in-training examinations (ITEs) and specialty board examinations, but not competency domains such as communication, teamwork, and professionalism. 12-18 Studies that have shown a connection between USMLE Step 1 and global performance generally have weak associations, 19,20 limited scope of comparisons (ie, just faculty assessment), 21 or were in fields other than IM. 19-21 Despite the heavy reliance on USMLE Step 1 scores, recent studies suggested USMLE Step 2 Clinical Knowledge (CK) actually may be a better predictor of ITE scores and resident performance overall. ...
... 12-18 Studies that have shown a connection between USMLE Step 1 and global performance generally have weak associations, 19,20 limited scope of comparisons (ie, just faculty assessment), 21 or were in fields other than IM. 19-21 Despite the heavy reliance on USMLE Step 1 scores, recent studies suggested USMLE Step 2 Clinical Knowledge (CK) actually may be a better predictor of ITE scores and resident performance overall. 22-25 Many of these studies compare application materials to only 1 or 2 other assessment metrics, usually standardized test scores and work-based observational faculty assessments. ...
Article
Background: Internal medicine (IM) residency programs receive information about applicants via academic transcripts, but studies demonstrate wide variability in satisfaction with and usefulness of this information. In addition, many studies compare application materials to only 1 or 2 assessment metrics, usually standardized test scores and work-based observational faculty assessments. Objective: We sought to determine which application materials best predict performance across a broad array of residency assessment outcomes generated by standardized testing and a yearlong IM residency ambulatory long block. Methods: In 2019, we analyzed available Electronic Residency Application Service data for 167 categorical IM residents, including advanced degree status, research experience, failures during medical school, undergraduate medical education award status, and United States Medical Licensing Examination (USMLE) scores. We compared these with post-match residency multimodal performance, including standardized test scores and faculty member, peer, allied health professional, and patient-level assessment measures. Results: In multivariate analyses, USMLE Step 2 Clinical Knowledge (CK) scores were most predictive of performance across all residency performance domains measured. Having an advanced degree was associated with higher patient-level assessments (eg, physician listens, physician explains, etc). USMLE Step 1 scores were associated with in-training examination scores only. None of the other measured application materials predicted performance. Conclusions: USMLE Step 2 CK scores were the highest predictors of residency performance across a broad array of performance measurements generated by standardized testing and an IM residency ambulatory long block.
... 13,14 Although the eSLOE does not directly measure an applicant's interpersonal and communication skills and professionalism, there are some limited data demonstrating the tool's value for predicting learner success during EM residency. 15 Behavioral and situational interviewing may be useful methods to assess applicants' aptitude in interpersonal and communication skills and professionalism. 16 In 2016, the Association of American Medical Colleges (AAMC) developed the AAMC Standardized Video Interview (SVI), which consists of 6 brief behavior-based questions designed to measure applicants' interpersonal and communications skills and knowledge of professionalism. ...
... The literature suggests that neither Gold Humanism Honor Society nor Alpha Omega Alpha membership is a good predictor of residency performance. 4,15,21,22 The absence of large correlations between academic variables and either the SVI score or eSLOE ratings suggests that these 2 selection tools may provide unique information about applicants' personal competencies. ...
Article
Full-text available
Purpose: To compare the performance characteristics of the Electronic Standardized Letter of Evaluation (eSLOE), a widely used structured assessment of emergency medicine (EM) residency applicants, and the AAMC Standardized Video Interview (SVI), a new tool designed by the Association of American Medical Colleges to assess interpersonal and communication skills and professionalism knowledge. Method: The authors matched EM residency applicants with valid SVI total scores and completed eSLOEs in the 2018 Match application cycle. They examined correlations and group differences for both tools, United States Medical Licensing Examination (USMLE) Step exam scores, and honor society memberships. Results: The matched sample included 2,884 applicants. SVI score and eSLOE global assessment ratings demonstrated small positive correlations approaching r = 0.20. eSLOE ratings had higher correlations with measures of academic ability (USMLE scores, academic honor society membership) than did SVI scores. Group differences were minimal for the SVI, with scores slightly favoring women (d = -.21) and U.S.-MD applicants (d = .23-.42). Group differences in eSLOE ratings were small, favoring women over men (approaching d = -0.20) and white applicants over black applicants (approaching d = 0.40). Conclusions: Small positive correlations between SVI total score and eSLOE global assessment ratings, alongside varying correlations with academic ability indicators, suggest these are complementary tools. Findings suggest the eSLOE is subject to similar sources and degrees of bias as other common assessments; these group differences were not observed with the SVI. Further examination of both tools is necessary to understand their ability to predict clinical performance.
... 11 A prior study assessed factors associated with top performance in nine institutions and found that several preresidency factors predicted high rankings at the final semiannual evaluation including grades, test scores, interview performance during the application process, and scholarship. 12 To our knowledge, no studies have examined how personal factors, behaviors, and background differentiate top clinical performance during EM residency or which factors used to measure resident performance are most influential for EM educators. ...
... For components of residency information associated with top clinical performance, the most common words were evaluations (16%), performance (10%), and scores (8%). After factors identified in the ideation survey were combined, 89 factors in the categories of attributes (6), personal traits (25), general skill set (4), ED-specific skills and behaviors (31), preresidency information (6), and "measures of top clinical performance" (12) were used as the content for the Delphi panel. ...
Article
Study objective We explore attributes, traits, background, skills, and behavioral factors important to top clinical performance in emergency medicine residency. Methods We used a two-step process – an ideation survey with the Council of Emergency Medicine Residency Directors and a modified Delphi technique – to identify: 1) factors important to top performance, 2) pre-residency factors that predict it, and 3) the best ways to measure it. In the Delphi, six expert educators in emergency care assessed the presence of the factors from the ideation survey results in their top clinical performers. Factors judged important and exemplified in >60% of top performers were retained across three Delphi rounds, as were predictors and measures of top performance. Results The ideation survey generated 81 responses with ideas for each factor. These were combined into 89 separate factors in seven categories: attributes, personal traits, ED-specific skills and behaviors, general skill set, background, pre-residency predictors, and ways to measure top performance. After three Delphi rounds, the panel achieved consensus on 20 factors important to top clinical performance: two attributes, seven traits, one general skill set, and ten ED-specific skills and behaviors. Interview performance was considered the sole important pre-residency predictor, and clinical competency committee results the sole important measure of top performance. Conclusion Our expert panel identified 20 factors important to top clinical performance in emergency medicine residency. Future work is needed to further explore how individuals learn and develop these factors. This article is protected by copyright. All rights reserved.
... Ultimately, 13 articles fulfilled these criteria, the majority of which were retrospective cohort studies. 2-14 All available ERAS data were reviewed dating back to otolaryngology's initial participation in 2006. In addition, 2007-2016 National Residency Matching Program otolaryngology data were analyzed for the years available. ...
... In support, Bhat et al identified that a minimum of 5 publications/presentations correlated with a resident's presence in the top one-third of her or his emergency medicine class. 11 Alternatively, several studies have challenged this finding. While Calhoun et al concluded that research experiences during medical school played a significant role in matching into otolaryngology, a follow-up study revealed that the number of publications did not correlate with resident performance. ...
Article
Objective This State of the Art Review aims (1) to define recent qualifications of otolaryngology resident applicants by focusing on United States Medical Licensing Examination (USMLE) scores, Alpha Omega Alpha (AOA) status, and research/publications and (2) to summarize the current literature regarding the relationship between these measures and performance in residency. Data Sources Electronic Residency Application Service, National Residency Matching Program, PubMed, Ovid, and GoogleScholar. Review Methods Electronic Residency Application Service and National Residency Matching Program data were analyzed to evaluate trends in applicant numbers and qualifications. Additionally, a literature search was performed with the aforementioned databases to identify relevant articles published in the past 5 years that examined USMLE Step 1 scores, AOA status, and research/publications. Conclusions Compared with other highly competitive fields over the past 3 years, the only specialty with decreasing applicant numbers is otolaryngology, with the rest remaining relatively stable or slightly increased. Additionally, USMLE Step 1 scores, AOA status, and research/publications do not reliably correlate with performance in residency. Implications for Practice The consistent decline in applications for otolaryngology residency is concerning and reflects a need for change in the current stereotype of the “ideal” otolaryngology applicant. This includes consideration of additional selection measures focusing on noncognitive and holistic qualities. Furthermore, otolaryngology faculty should counsel medical students that applying in otolaryngology is not “impossible” but rather a feasible and worthwhile endeavor.
... 12 Nevertheless, other studies have shown that LOR can predict success during residency. 16-19 An analysis of problematic behavior among residents in a single psychiatry program over a period of 20 years revealed that a larger number of negative comments in the dean's letter can predict major future problems. 16 In addition, obstetrics and gynecology residents at Johns Hopkins University whose LOR had more comments about patient care, medical knowledge, and interpersonal and communication skills were more successful during their training compared to their colleagues. ...
... 20-22 Furthermore, this type of letter can successfully predict residents' performance during their training. 18,22 The use of standardized LOR should be studied across residency programs in Canada to assess their value in the selection process. ...
Article
Full-text available
Objective: Letters of recommendation (LOR) provide valuable information that help in selecting new residents. In this study, we aim to investigate the perceptions of surgical residency program directors (PDs) in Canada on the elements that can affect the strength and value of LOR. Design: Cross-sectional; survey. Setting: A national survey was conducted using an online questionnaire consisting of 2 main sections to collect data from PDs from all surgical subspecialties. The first section included basic background questions about the participant, such as the specialty and experience in selecting resident candidates, whereas the second section was about the elements and characteristics of LOR. Participants were asked to rate the importance of 34 different variables using a Likert scale. Participants: Surgical PDs in Canada. Results: Of 122 PDs, 65 (53.3%) participated in the survey. Work ethic (57; 87.7%), interpersonal skills (52; 80.0%), and teamwork (49; 75.4%) were considered very important parts of the LOR by more than three-quarters of the PDs. Thirty-three (50.8%) PDs reported that a familiar author of LOR would always affect their impression regarding the letter. Additionally, 57 (87.7%) and 35 (53.8%) directors thought that LOR are important in evaluating the candidates and can help in predicting the residents' performance during their residency training. Conclusions: LOR are important for the selection of new surgical residents in Canada. Information about the candidate's work ethic, interpersonal skills, and teamwork is essential for a good LOR. Familiarity of PDs with authors of LOR could increase the value of the letter.
... Internationally, there is limited evidence regarding the impact of variation, with the most significant literature sourced in the USA. For over three decades in the USA, honours in clerkships and Alpha Omega Alpha society membership have been associated with future career success [35,36], including residency application success [21,22], performance during residency [37], long-term academic employment, and professorship [38]. Despite increased attention to the impact of degree awards within the USA, there also exists a lack of standardisation. ...
Article
Full-text available
Background: Inequity in assessment can lead to differential attainment. Degree classifications, such as 'Honours', are an assessment outcome used to differentiate students after graduation. However, there are no standardised criteria used to determine what constitutes these awards. Methods: We contacted all medical schools in the United Kingdom (UK) and collected data relating to classifications awarded, criteria used, and percentage of students receiving classifications across the five-year period prior to the 2019/2020 academic year. Results: All 42 UK medical schools responded, and 36 universities provided usable data. Of these 36 universities, 30 (83%) awarded classifications above a 'Pass'. We identified four classifications above a "Pass", and these were "Commendation", "Merit", "Distinction", and "Honours". 16 (44%) universities awarded a single additional classification and 14 (39%) universities awarded two or more. There was considerable variation in the criteria used by each university to award classifications. For example, 30 (67%) out of 45 classifications were dependent on all examined years, 9 (20%) for a combination of years, and 6 (13%) for final year alone. 25 of 30 universities that awarded classifications provided data on the percentage of students awarded a classification, and a median of 15% of students received any type of classification from their university (range 5.3% to 38%). There was a wide range in the percentage of students awarded each classification type across the universities (e.g., Honours, range = 3.1-24%). Conclusions: We demonstrate considerable variation in the way UK medical degree classifications are awarded - regarding terminology, criteria, and percentage of students awarded classifications. We highlight that classifications are another form of inequity in medical education. There is a need to fully evaluate the value of hierarchical degree awards internationally as the consequential validity of these awards is understudied.
... These results support prior studies that identify academic achievement and interview impression as potential markers for successful performance in residency and applicant rank list position. 17,18 Additionally, in the 2021 National Resident Matching Program Program Director Survey, interpersonal skills were considered to be one of the top factors in deciding whom to rank by program directors. 19 Understanding the elements of the interview that make the most impact on applicant selection and interview performance can provide valuable guidance to prospective residency applicants. ...
Article
Background: Residency recruitment requires significant resources for both applicants and residency programs. Virtual interviews offer a way to reduce the time and costs required during the residency interview process. This prospective study investigated how virtual interviews affected scoring of anesthesiology residency applicants and whether this effect differed from in-person interview historical controls. Methods: Between November 2020 and January 2021, recruitment members at the University of Chicago scored applicants before their interview based upon written application materials alone (preinterview score). Applicants received a second score after their virtual interview (postinterview score). Recruitment members were queried regarding the most important factor affecting the preinterview score as well as the effect of certain specified applicant interview characteristics on the postinterview score. Previously published historical controls were used for comparison to in-person recruitment the year prior from the same institution. Results: Eight hundred and sixteen virtual interviews involving 272 applicants and 19 faculty members were conducted. The postinterview score was higher than the preinterview score (4.06 versus 3.98, P value of <.0001). The change in scores after virtual interviews did not differ from that after in-person interviews conducted the previous year (P = .378). The effect of each characteristic on score change due to the interview did not differ between in-person and virtual interviews (all P values >.05). The factor identified by faculty as the most important in the preinterview score was academic achievements (64%), and faculty identified the most important interview characteristic to be personality (72%). Conclusions: Virtual interviews led to a significant change in scoring of residency applicants, and the magnitude of this change was similar compared with in-person interviews. Further studies should elaborate on the effect of virtual recruitment on residency programs and applicants.
... Bhat et al found that "global rating" and "competitiveness" on the non-program leadership standardized letter of recommendation (SLOR) were factors in determining whether a resident will be successful. 19 However, this conclusion conflicts with Kimple et al, which notes that while the SLOR saves time and normalizes how recommendations are written, the individuality of how people evaluate and complete the letter leaves it open to inconsistency. 20 This discrepancy leads to a convoluted conclusion about SLORs in predicting resident success. ...
Article
Full-text available
INTRODUCTION The announcement of Step 1 shifting to a Pass/Fail metric has prompted resident selection committees (RSCs) to pursue objective methods of evaluating prospective residents. Regardless of the program's specialty or affiliated hospital/school, RSCs universally aim to recognize and choose applicants who are an "optimal fit" for their programs. ¹ An optimal fit can be defined as a candidate who thrives in the clinical and academic setting, both contributing to and benefiting from their respective training environments. OBJECTIVE The objective of this scoping review is to evaluate alternative, innovative methods by which RSCs can evaluate applicants and predict success during residency. Objective methods include: Step 2 scores, traditionally used metrics (core clerkship scores), interview performance, musical talent, sports involvement, AOA membership, research publications, unprofessional behavior, Dean's letters, rank list, judgement testing, and specialty-specific shelf exams. 13–15 METHODS A scoping review was performed in compliance with the guidelines indicated by the PRISMA Protocol for scoping reviews. ¹⁸ 9308 results were identified in the original PubMed search for articles with the key words "Resident Success". Abstract screening and application of inclusion and exclusion criteria yielded 97 articles that were critically appraised via review of the full manuscript. RESULTS Of the articles that focused on personality traits, situational judgement testing, and specialty-specific pre-assessment, all demonstrated some level of predictability for resident success. Standardized Letters of Recommendation, traditionally used metrics, and Step 2 did not show a unanimous consensus in demonstrating predictability of a resident's success, because some articles suggested predictability and others disputed it. CONCLUSION The authors found personality traits, situational judgement testing, and specialty-specific assessments to be predictive in selecting successful residents. Further research should aim to analyze exactly how RSCs utilize these assessment tools to aid in screening their large and competitive applicant pools to find residents that will be successful in their program.
... 6 It has been shown to decrease writing time for referees and reviewing time for application reviewers, facilitate interpretation with high interrater reliability, and, most importantly, predict resident performance in core residency competencies. 7-9 SRLs can also reduce gender bias in trainee selection, as shown in a study of letters submitted to an otolaryngology head and neck surgery residency in the U.S. 10
... Knowing that personal characteristics related to medical student success can be identified early in the admissions process can help medical school admission committees select whom to interview and build a strong and talented medical school class. Detecting and recruiting medical student applicants with these characteristics is especially important to residency programs that attempt to recruit the best medical students (Boyse et al., 2002; Alterman et al., 2011; Bhat et al., 2015; Bowe, Laury and Gray, 2017; Thompson et al., 2017; Agarwal et al., 2018). A focus on such personal characteristics not only supports a holistic approach to selection but also has the potential to shape a physician workforce that provides society the best that medicine has to offer. ...
Article
Full-text available
Introduction: Medical school admissions committees are tasked with selecting the best students for their institution and historically rely on grade point averages and Medical College Admission Test scores as measures for academic success. Yet research and expertise theory suggest that personal characteristics play a critical role in exceptional performance. Understanding the characteristics of exceptional performing medical students upon application to medical school could contribute to the holistic review process and selection decisions of medical universities. Methods: The purpose of this study was to identify themes in the American Medical College Application Service (AMCAS) application that reflect the characteristics of exceptional performing medical students when they applied to medical school. The authors completed an inductive thematic analysis of the primary AMCAS application of exceptional performing medical students. Selection to both Alpha Omega Alpha Honor Medical Society and the Gold Humanism Honor Society defined exceptional performance. Results/Analysis: 22 (4.5%) of 485 medical school graduates between 2017 and 2019 met criteria for exceptional performance. The authors identified seven themes from the AMCAS applications: success in a practiced activity, altruism, entrepreneurship, passion, perseverance, teamwork, and wisdom. Discussion: The seven identified themes were consistent with the personal characteristics associated with both expertise theory and the AAMC’s core personal competencies for medical student success. By constructing an understanding of the personal characteristics exceptional student performers display in their applications to medical school, these themes offer an additional lens for medical school admission committees to assess a student’s potential to be successful in medical school.
... We also found that scholarly activity, either research presentations or publications, positively predicted performance on the examination. Studies suggest that the experience of publishing research did not disturb trainees' academic activities during their residency but instead predicted better performance on the certification examination [23,24]. Another study reported that publication experience among internal medicine residents significantly correlated with their clinical performance test scores [16]. ...
Article
Full-text available
Background Examining the predictors of summative assessment performance is important for improving educational programs and structuring appropriate learning environments for trainees. However, predictors of certification examination performance in pediatric postgraduate education have not been comprehensively investigated in Japan. Methods The Pediatric Board Examination database in Japan, which includes 1578 postgraduate trainees from 2015 to 2016, was analyzed. The examinations included multiple-choice questions (MCQs), case summary reports, and an interview, and the predictors for each of these components were investigated by multiple regression analysis. Results The number of examination attempts and the training duration were significant negative predictors of the scores for the MCQ, case summary, and interview. Employment at a community hospital and at a private university hospital were negative predictors of the MCQ and case summary scores, respectively. Female sex and the number of academic presentations positively predicted the case summary and interview scores. The number of research publications was a positive predictor of the MCQ score, and employment at a community hospital was a positive predictor of the case summary score. Conclusion This study found that delayed and repeated examination taking were negative predictors of pediatric board certification examination performance, while the scholarly activity of trainees was a positive predictor.
... For example, one contends higher USMLE Step 1 scores correlate with completion of a general surgery residency [14]. Another supports USMLE Step 1 as a positive predictor of resident performance in Emergency Medicine [15]. ...
Article
Full-text available
Background Family Medicine residencies are navigating recruitment in a changing environment. The consolidation of accreditation for allopathic and osteopathic programs, the high volume of applicants, and the forthcoming transition of the United States Medical Licensing Exam (USMLE) Step 1 to pass/fail reporting all contribute. This retrospective cohort study evaluated which components of a student’s academic history best predict readiness for residency. Methods In 2020, we analyzed applicant data and initial residency data for program graduates at a single residency program between 2013 and 2020. This included undergraduate education characteristics, medical school academic performance, medical school academic problems (including professionalism), STEP exams, location of medical school, and assessments during the first 6 months of residency. Of 110 matriculating residents, assessment data was available for 97 (88%). Results Pre-matriculation USMLE data had a positive correlation with initial American Board of Family Medicine (ABFM) in-training exams. Pre-matriculation exam data did not have a positive correlation with resident assessment across any of the six Accreditation Council for Graduate Medical Education (ACGME) competency domains. A defined cohort of residents with a history of academic struggles during medical school or failure on a USMLE exam performed statistically similarly to residents with no such history on assessments across the six ACGME competency domains. Conclusions Applicants with a history of academic problems perform similarly in the clinical environment to those without. While a positive correlation between pre-matriculation exams and the ABFM in-training exam was found, this did not extend to clinical assessments across the ACGME competency domains.
... 3 A greater number of applications requires a concurrent increase in time and effort by programs to review applicants and make decisions about interview selection. 4 When coupled with the lack of robust outcomes data on which aspects of an applicant's portfolio predict future residency success, program directors and coordinators must spend substantial resources attempting to analyze these applications in order to find those who may be a "best fit" for their program. Additionally, the increase in applications to EM residency puts additional financial strains on the applicants themselves. ...
Article
Full-text available
Introduction: The average number of applications per allopathic applicant to emergency medicine (EM) residency programs in the United States (US) has increased significantly since 2014. This increase in applications has caused a significant burden on both programs and applicants. Our goal in this study was to investigate the drivers of this application increase so as to inform strategies to mitigate the surge. Methods: An expert panel designed an anonymous, web-based survey, which was distributed to US allopathic senior applicants in the 2017-2018 EM match cycle via the Council of Residency Directors in Emergency Medicine and the Emergency Medicine Residents Association listservs for completion between the rank list certification deadline and release of match results. The survey collected descriptive statistics and factors affecting application decisions. Results: A total of 532 of 1748 (30.4%) US allopathic seniors responded to the survey. Of these respondents, 47.3% felt they had applied to too many programs, 11.8% felt they had applied to too few, and 57.7% felt that their perception of their own competitiveness increased their number of applications. Application behavior of peers going into EM was identified as the largest external factor driving an increase in applications (61.1%), followed by US Medical Licensing Exam scores (46.9%) - the latter was most pronounced in applicants who self-perceived as "less competitive." The most significant limiter of application numbers was the cost of using the Electronic Residency Application Service (34.3%). Conclusion: A substantial group of EM applicants identified that they were over-applying to residencies. The largest driver of this process was individual applicant response to the behavior of their peers who were also going into EM. Understanding these motivations may help inform solutions to overapplication.
... We reviewed factors commonly utilized in selection decisions as well as factors previously identified as predictive of success or remediation. 19,29,30,37-39 As the milestone assessments were designed to provide a standard, generalizable outcome for residency performance across graduate medical education programs in the same specialty, we used them as our outcomes in this study. ...
Article
Full-text available
Background Emergency medicine (EM) residency programs want to employ a selection process that will rank best possible applicants for admission into the specialty. Objective We tested if application data are associated with resident performance using EM milestone assessments. We hypothesized that a weak correlation would exist between some selection factors and milestone outcomes. Methods Utilizing data from 5 collaborating residency programs, a secondary analysis was performed on residents trained from 2013 to 2018. Factors in the model were gender, underrepresented in medicine status, United States Medical Licensing Examination Step 1 and 2 Clinical Knowledge (CK), Alpha Omega Alpha (AOA), grades (EM, medicine, surgery, pediatrics), advanced degree, Standardized Letter of Evaluation global assessment, rank list position, and controls for year assessed and program. The primary outcomes were milestone level achieved in the core competencies. Multivariate linear regression models were fitted for each of the 23 competencies with comparisons made between each model's results. Results For the most part, academic performance in medical school (Step 1, 2 CK, grades, AOA) was not associated with residency clinical performance on milestones. Isolated correlations were found between specific milestones (eg, higher surgical grade increased wound care score), but most had no correlation with residency performance. Conclusions Our study did not find consistent, meaningful correlations between the most common selection factors and milestones at any point in training. This may indicate our current selection process cannot consistently identify the medical students who are most likely to be high performers as residents.
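
The modeling strategy described above, one multivariate linear regression per milestone competency with program and assessment-year controls, could be organized roughly as in the sketch below. The data frame milestones.csv, the milestone_ column prefix, and the predictor names are illustrative assumptions rather than the collaborating programs' actual data.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per resident assessment, milestone levels in
# columns named milestone_<competency>, plus selection factors and controls.
df = pd.read_csv("milestones.csv")

predictors = ("usmle_step1 + usmle_step2ck + aoa + em_grade + sloe_global "
              "+ rank_position + C(program) + C(assessment_year)")
competencies = [c for c in df.columns if c.startswith("milestone_")]

# Fit one linear model per competency and collect the coefficients.
results = {}
for comp in competencies:
    fit = smf.ols(f"{comp} ~ {predictors}", data=df).fit()
    results[comp] = fit.params

coef_table = pd.DataFrame(results).T
print(coef_table.filter(regex="usmle|aoa|em_grade|sloe|rank").round(2))
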
... In the same study, the more clinically oriented Step 2 score was not associated with residency success. 7 In one study that evaluated success with a standardized patient encounter, USMLE scores had no correlation. 8 I think we need to broaden our assessments. ...
... 13 Internal structure evidence has also been shown in the improved interrater reliability and discrimination of the Standardized Letter of Evaluation (SLOE) as compared to traditional narrative letters of recommendation that it has replaced. 12,14 Finally, validity evidence of relations with other variables stems from a single study showing that the SLOE is one of the best predictors of clinical performance as a resident. 15 Although the SLOE is primarily an assessment of clinical performance, it has not previously been held to the standard of workplace-based assessments (WBA). 16 Valid WBAs are based on a number of underlying tenets that reflect a global perspective on complex, multifaceted performance through "pixilation." ...
Article
Full-text available
Introduction: Interest is growing in specialty-specific assessments of student candidates based on clinical clerkship performance to assist in the selection process for postgraduate training. The most established and extensively used is the emergency medicine (EM) Standardized Letter of Evaluation (SLOE), serving as a substitute for the letter of recommendation. Typically developed by a program's leadership, the group SLOE strives to provide a unified institutional perspective on performance. The group SLOE lacks guidelines to direct its development, raising questions regarding the assessments, processes, and standardization that programs employ. This study surveys EM programs to gather validity evidence regarding the inputs and processes involved in developing group SLOEs. Methods: A structured telephone interview was administered to assess the input data and processes employed by United States EM programs when generating group SLOEs. Results: With 156/178 (87.6%) of Accreditation Council for Graduate Medical Education-approved programs responding, 146 (93.6%) reported developing group SLOEs. Issues identified in development include the following: (1) 84.9% (124/146) of programs limit the consensus process by not employing rigorous methodology; (2) several stakeholder groups (nurses, patients) do not participate in candidate assessment, placing final decisions at risk for construct under-representation; and (3) clinical shift assessments do not reflect the task-specific expertise of each stakeholder group, nor has the validity of each been assessed. Conclusion: Success of the group SLOE in its role as a summative workplace-based assessment is dependent upon valid input data and appropriate processes. This study of current program practices provides specific recommendations that would strengthen the validity arguments for the group SLOE.
... This will help others to develop, and will protect the leader's own time. As a bonus, members of Alpha Omega Alpha tend to demonstrate leadership qualities, making them strong candidates, and students of accelerated programs were also found to assume more leadership roles during residency compared with their colleagues (6). ...
Article
Emergency medicine is a profession that requires good leadership skills. Emergency physicians must be able to instill confidence in both the staff and patients, inspire the best in others, have the enthusiasm to take on a surplus of responsibilities, and maintain calmness during unexpected circumstances. Accordingly, residency program directors look carefully for leadership qualities and potential among their applicants. Although some people do have a predisposition to lead, leadership can be both learned and taught. In this article, we provide medical students with the tools that will help them acquire those qualities and thus make them more desirable to program directors.
... However, it generally focuses on evaluating specific questions. 13-20 One previous Canadian study provided a detailed description and analysis of its interview process. 21 To our knowledge, this is the first comprehensive description of an emergency medicine program's entire resident selection process in the literature. ...
Article
Objectives The Canadian Resident Matching Service (CaRMS) selection process has come under scrutiny due to the increasing number of unmatched medical graduates. In response, we outline our residency program's selection process including how we have incorporated best practices and novel techniques. Methods We selected file reviewers and interviewers to mitigate gender bias and increase diversity. Four residents and two attending physicians rated each file using a standardized, cloud-based file review template to allow simultaneous rating. We interviewed applicants using four standardized stations with two or three interviewers per station. We used heat maps to review rating discrepancies and eliminated rating variance using Z-scores. The number of person-hours that we required to conduct our selection process was quantified and the process outcomes were described statistically and graphically. Results We received between 75 and 90 CaRMS applications during each application cycle between 2017 and 2019. Our overall process required 320 person-hours annually, excluding attendance at the social events and administrative assistant duties. Our preliminary interview and rank lists were developed using weighted Z-scores and modified through an organized discussion informed by heat mapped data. The difference between the Z-scores of applicants surrounding the interview invitation threshold was 0.18-0.3 standard deviations. Interview performance significantly impacted the final rank list. Conclusions We describe a rigorous resident selection process for our emergency medicine training program which incorporated simultaneous cloud-based rating, Z-scores, and heat maps. This standardized approach could inform other programs looking to adopt a rigorous selection process while providing applicants guidance and reassurance of a fair assessment.
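As an illustration of the Z-score step described in the abstract above, the following minimal sketch standardizes each reviewer's file-review scores before combining them with interview scores. The scores, reviewer names, and 60/40 weighting are invented for the example and are not taken from the program's actual process.

```python
# Minimal sketch, not the authors' code: convert each reviewer's raw file-review
# scores to Z-scores (removing differences in reviewer leniency and spread),
# then combine file and interview components with assumed weights.
import pandas as pd

scores = pd.DataFrame({
    "applicant":  ["A", "B", "C", "A", "B", "C"],
    "reviewer":   ["R1", "R1", "R1", "R2", "R2", "R2"],
    "file_score": [7.0, 8.5, 6.0, 5.0, 6.5, 4.5],
})

# Z-score within each reviewer so a "hawk" and a "dove" contribute comparably.
scores["z"] = scores.groupby("reviewer")["file_score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)
file_z = scores.groupby("applicant")["z"].mean()

interview_z = pd.Series({"A": 0.3, "B": -0.1, "C": 0.8})  # assumed, precomputed
weighted = 0.6 * file_z + 0.4 * interview_z               # assumed weights
print(weighted.sort_values(ascending=False))              # provisional rank list
```

Standardizing within reviewer is what lets ratings from stricter and more lenient raters be pooled on a common scale before any weighting or discussion.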
... 43 Additionally, separate studies, including specialty-specific studies, have affirmed that many of these factors are important to residency program directors. 44,45 Current literature on this issue, including the NRMP survey, is limited to some extent by the heterogeneity of selection processes, heterogeneity of the literature itself, and difficulty in defining outcomes such as "success in residency." With varying outcomes used to define success in residency, it is difficult to determine the predictive value of individual application factors. ...
Article
Background: Residency applicants feel increasing pressure to maximize their chances of successfully matching into the program of their choice, and are applying to more programs than ever before. Objective: In this narrative review, we examined the most common and highly rated factors used to select applicants for interviews. We also examined the literature surrounding those factors to illuminate the advantages and disadvantages of using them as differentiating elements in interviewee selection. Methods: Using the 2018 NRMP Program Director Survey as a framework, we examined the last 10 years of literature to ascertain how residency directors are using these common factors to grant residency interviews, and whether these factors are predictive of success in residency. Results: Residency program directors identified 12 factors that contribute substantially to the decision to invite applicants for interviews. Although United States Medical Licensing Examination (USMLE) Step 1 is often used as a comparative factor, most studies do not demonstrate its predictive value for resident performance, except in the case of test failure. We also found that structured letters of recommendation from within a specialty carry increased benefit when compared with generic letters. Failing USMLE Step 1 or 2 and unprofessional behavior predicted lower performance in residency. Conclusions: We found that the evidence basis for the factors most commonly used by residency directors is decidedly mixed in terms of predicting success in residency and beyond. Given these limitations, program directors should be skeptical of making summative decisions based on any one factor.
... In fact, LORs from authors in that specialty were the second most commonly cited factor by programs in selecting applicants to interview (4). Additionally, "global rating" and "competitiveness" on the SLOE were among the highest predictors of top performers in EM residency (6). In a questionnaire sent to EM PDs in 2013, 93% ranked SLOEs as the most important factor when selecting applicants for interviews (7). ...
Article
Letters of recommendation (LORs) are a central element of an applicant’s portfolio for the National Resident Matching Program (NRMP, known as the “Match”). This is especially true when applying to competitive specialties like emergency medicine (EM). LORs convey an applicant’s potential for success, and also highlight an applicant’s qualities that cannot always be recognized from a curriculum vitae, test scores, or grades. Traditional LORs, also called narrative LORs (NLORs), are written in prose and are therefore highly subjective. This led to the establishment of a task force by the Council of EM Residency Directors (CORD) in 1995 to develop a standardized letter of recommendation (SLOR). Revisions of this form are now referred to as a standardized letter of evaluation (SLOE). Evaluations in this format have been shown to increase interrater reliability, decrease interpretation time, and standardize the process used by EM faculty to prepare evaluations for EM applicants. In this article, we will further discuss LORs, address applicants’ concerns including from whom to request LORs (EM faculty vs. non-EM faculty vs. non-clinical faculty), discuss the number of LORs an applicant should try to include in his or her application materials, the preferred manner and timing of requesting a LOR, and the philosophy behind waiving the right to see the letter.
... Studies of residency applications show mixed results for how well elements such as medical school grades, United States Medical Licensing Examination (USMLE) performance, and letters of recommendation predict future performance. 1-3 Success in residency training and beyond likely requires a mixture of cognitive and nontechnical skills. As defined by the Accreditation Council for Graduate Medical Education (ACGME), professionalism (PROF) requires a commitment to carrying out professional responsibilities and an adherence to ethical principles; interpersonal communication skills (ICS) require the effective exchange of information and collaboration with patients, their families, and health professionals. ...
Article
Full-text available
Objectives The AAMC Standardized Video Interview (SVI) was recently added as a component of Emergency Medicine (EM) residency applications to provide additional information about Interpersonal Communication Skills (ICS) and knowledge of Professionalism (PROF) behaviors. Our objective was to ascertain the correlation between the SVI and residency interviewer assessments of PROF and ICS. Secondary objectives included examination of (a) inter‐ and intra‐institutional assessments of ICS and PROF; (b) correlation of SVI scores with Rank Order List (ROL) positions; and (c) the potential influence of gender on interview day assessments. Methods We conducted an observational study using prospectively‐collected data from seven EM residency programs during 2017‐2018 using a standardized instrument. Correlations between interview day PROF / ICS scores and the SVI were tested. A one‐way ANOVA was used to analyze the association of SVI and ROL position. Gender differences were assessed with independent‐groups t‐tests. Results A total of 1,264 interview‐day encounters from 773 unique applicants resulted in 4,854 interviews conducted by 151 interviewers. Both PROF and ICS demonstrated a small positive correlation with the SVI score (rs = .16 and .17, respectively). ROL position was associated with SVI score (p < .001), with mean SVI scores for top‐, middle‐, and bottom‐third applicants being 20.9, 20.5, and 19.8, respectively. No group differences with gender were identified on assessments of PROF or ICS. Conclusions Interview assessments of PROF and ICS have a small, positive correlation with SVI scores. These residency selection tools may be measuring related, but not redundant, applicant characteristics. We did not identify gender differences in interview assessments. This article is protected by copyright. All rights reserved.
... 1 However, the decision on the likelihood to invite (LTI) an applicant for a residency interview is multifactorial and varies among programs. 2,3 Although the intent was to widen the applicant pool, the SVI could, in theory, limit an applicant as well. Therefore, in real-world use, it is unclear in which direction the SVI affects faculty application reviewers' decisions on the LTI. ...
Article
Full-text available
Background The Association of American Medical Colleges (AAMC) instituted a Standardized Video Interview (SVI) for all applicants to emergency medicine (EM). It is unclear how the SVI affects a faculty reviewer's decision on likelihood to invite an applicant (LTI) for an interview. Objectives To determine whether the SVI affects the LTI. Methods Nine ACGME‐accredited EM residency programs participated in this prospective, observational study. LTI was defined on a 5‐point Likert scale as follows: 1= definitely not invite, 2=likely not invite, 3=might invite, 4=probably invite, 5=definitely invite. LTI was recorded at three instances during each review: (1) after typical screening (blinded to the SVI), (2) after unblinding to the SVI score and (3) after viewing the SVI video. Results Seventeen reviewers at nine ACGME‐accredited residency programs participated. We reviewed 2219 applications representing 1424 unique applicants. After unblinding the SVI score, LTI did not change in 2065 (93.1%), increased in 85 (3.8%) and decreased in 69 (3.1%; p=0.22). In subgroup analyses, the effect of the SVI on LTI was unchanged by USMLE score. However, when examining subgroups of SVI scores, the percentage of applicants in whom the SVI score changed the LTI was significantly different in those that scored in the lower and upper subgroups (p<0.0001). The SVI video was viewed in 816 (36.8%) applications. Watching the video did not change the LTI in 631 (77.3%), LTI increased in 106 (13.0%) and decreased in 79 (9.7%) applications [p=0.04]. Conclusions The SVI score changed the LTI in 7% of applications. In this group, the score was equally likely to increase or decrease the LTI. Lower SVI scores were more likely to decrease the LTI than higher scores were to increase the LTI. Watching the SVI video was more likely to increase the LTI than to decrease it. This article is protected by copyright. All rights reserved.
... Data suggest varied correlations between elements of the application and prediction of success in residency, including the USMLE and induction into honor societies such as Alpha Omega Alpha. 13,14 The USMLE provides PDs with a standardized metric as a result of a uniform grading system across all test-takers. In Step 1 and Step 2 CK, examinees answer multiple choice questions related to the basic sciences and then clinical medicine. ...
Article
Full-text available
Introduction: In 2017, the Standardized Video Interview (SVI) was required for applicants to emergency medicine (EM). The SVI contains six questions highlighting professionalism and interpersonal communication skills. The responses were scored (6-30). As it is a new metric, no information is available on the correlation between SVI scores and other application data. This study aimed to determine whether a correlation exists between applicants' United States Medical Licensing Examination (USMLE) and SVI scores. We hypothesized that numeric USMLE Step 1 and Step 2 Clinical Knowledge (CK) scores would not correlate with the SVI score, but that performance on the Step 2 Clinical Skills (CS) portion may correlate with the SVI since both test communication skills. Methods: Nine EM residency sites participated in the study with data exported from an Electronic Residency Application Service (ERAS®) report. All applicants with both SVI and USMLE scores were included. We studied the correlation between SVI scores and USMLE scores. Predetermined subgroup analysis was performed based on applicants' USMLE Step 1 and Step 2 CK scores as follows: (≤200, 201-220, 221-240, 241-260, >260). We used linear regression, the Kruskal-Wallis test, and the Mann-Whitney U test for statistical analyses. Results: 1,325 applicants had both Step 1 and SVI scores available, with no correlation between the overall scores (p=0.58) and no correlation between the scores across all Step 1 score ranges (p=0.29). Both Step 2 CK and SVI scores were available for 1,275 applicants, with no correlation between the overall scores (p=0.56) and no correlation across all ranges (p=0.10). The USMLE Step 2 CS and SVI scores were available for 1,000 applicants. Four applicants failed the CS test without any correlation to the SVI score (p=0.08). Conclusion: We found no correlation between the scores on any portion of the USMLE and the SVI; therefore, the SVI provides new information to application screeners.
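The sketch below illustrates, on simulated data, the kinds of tests named in the methods above (a simple linear regression plus a Kruskal-Wallis comparison across Step 1 bands). The score distributions are made up and the code is not from the study.

```python
# Illustrative sketch on hypothetical data: is there any association between
# USMLE Step 1 scores and SVI scores?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
step1 = np.round(rng.normal(230, 15, 500))   # hypothetical Step 1 scores
svi = rng.integers(6, 31, 500)               # hypothetical SVI scores (6-30)

# Simple linear regression of SVI on Step 1.
slope, intercept, r, p, se = stats.linregress(step1, svi)
print(f"linear regression: r={r:.3f}, p={p:.3f}")

# Kruskal-Wallis test of SVI across predefined Step 1 bands.
bands = [(0, 200), (201, 220), (221, 240), (241, 260), (261, 300)]
groups = [svi[(step1 >= lo) & (step1 <= hi)] for lo, hi in bands]
groups = [g for g in groups if len(g) > 0]   # drop any empty band
H, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H={H:.2f}, p={p_kw:.3f}")
```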
... 15 Although it remains unclear whether AOA and GHHS induction predicts performance in residency, AOA induction is associated with increased odds of becoming a medical school professor. 16-20 Despite the importance of honor societies, only one study has evaluated potential disparities in the selection process. 21 Assessing applicants to Yale University's residency programs in 2014-2015, that study found that Asian and black students had lower odds than white students of being inducted into AOA. ...
Article
Purpose: A large body of literature has demonstrated racial and gender disparities in the physician workforce, but limited data are available regarding the potential origins of these disparities. To that end, the authors evaluated the effects of race and gender on Alpha Omega Alpha Honor Medical Society (AOA) and Gold Humanism Honor Society (GHHS) induction. Method: In this retrospective cohort study, the authors examined data from 11,781 Electronic Residency Application Service applications from 133 U.S. MD-granting medical schools to 12 residency programs in the 2014-2015 application cycle and to all 15 residency programs in the 2015-2016 cycle at Yale-New Haven Hospital. They estimated the odds of induction into AOA and GHHS using logistic regression models, adjusting for Step 1 score, research publications, citizenship status, training interruptions, and year of application. They used gender- and race-matched samples to account for differences in clerkship grades and to test for bias. Results: Women were more likely than men to be inducted into GHHS (odds ratio 1.84, P < .001) but did not differ in their likelihood of being inducted into AOA. Black medical students were less likely to be inducted into AOA (odds ratio 0.37, P < .05) but not into GHHS. Conclusions: These findings demonstrate significant differences between groups in AOA and GHHS induction. Given the importance of honor society induction in residency applications and beyond, these differences must be explored further.
... Attempts at predicting success in an EM program through the analysis of applicant characteristics have been made in a number of previous studies, 1-6 as well as in obstetrics/gynecology 7 and orthopedic surgical residencies. 8 Surgery programs have found a weak correlation between USMLE scores and certain tests of gross manual dexterity. 9 However, the reverse question has not been as well studied, and we were unable to identify studies that specifically target applicant characteristics related to poor performance in EM residency. ...
Article
Full-text available
Introduction Negative outcomes in emergency medicine (EM) programs use a disproportionate amount of educational resources to the detriment of other residents. We sought to determine if any applicant characteristics identifiable during the selection process are associated with negative outcomes during residency. Methods The primary analysis examined the association of each descriptor, including resident characteristics and events during residency, with a composite measure of negative outcomes. Components of the negative outcome composite were any formal remediation, failure to complete residency, or extension of residency. Results From a dataset of 260 residents who completed their residency over a 19-year period, 26 (10%) were osteopaths and 33 (13%) were international medical school graduates. A leave of absence during medical school (p <.001), failure to send a thank-you note (p=.008), a failing score on United States Medical Licensing Examination Step I (p=.002), and a prior career in health (p=.034) were factors associated with greater likelihood of a negative outcome. All four residents with a “red flag” during their medicine clerkships experienced a negative outcome (p <.001). Conclusion “Red flags” during EM clerkships, a leave of absence during medical school for any reason, and failure to send post-interview thank-you notes may be associated with negative outcomes during an EM residency.
... Further, correlating many of these elements with success in residency has proven challenging. 15-17 Although educators may attempt to glean insight into an applicant's personality through qualitative comments and interview performance, little is understood about how to assess and interpret personality traits in this context. ...
Article
Objectives This study aimed to understand the personality characteristics of emergency medicine (EM) residents and assess consistency and variations among residency programs. Methods In this cross‐sectional study, a convenience sample of residents (N = 140) at five EM residency programs in the United States completed three personality assessments: the Hogan Personality Inventory (HPI)—describing usual tendencies; the Hogan Development Survey (HDS)—describing tendencies under stress or fatigue; and the Motives, Values, and Preferences Inventory (MVPI)—describing motivators. Differences between EM residents and a normative population of U.S. physicians were examined with one‐sample t‐tests. Differences between EM residents by program were analyzed using one‐way analysis of variance tests. Results One‐hundred forty (100%), 124 (88.6%), and 121 (86.4%) residents completed the HPI, HDS, and MVPI, respectively. For the HPI, residents scored lower than the norms on the adjustment, ambition, learning approach, inquisitive, and prudence scales. For the HDS, residents scored higher than the norms on the cautious, excitable, reserved, and leisurely scales, but lower on bold, diligent, and imaginative scales. For the MVPI, residents scored higher than the physician population norms on altruistic, hedonistic, and aesthetics scales, although lower on the security and tradition scales. Residents at the five programs were similar on 22 of 28 scales, differing on one of 11 scales of the HPI (interpersonal sensitivity), two of 11 scales of the HDS (leisurely, bold), and three of 10 scales of the MVPI (aesthetics, commerce, and recognition). Conclusions Our findings suggest that the personality characteristics of EM residents differ considerably from the norm for physicians, which may have implications for medical students’ choice of specialty. Additionally, results indicated that EM residents at different programs are comparable in many areas, but moderate variation in personality characteristics exists. These results may help to inform future research incorporating personality assessment into the resident selection process and the training environment.
... 12 These factors may predict success as well as the factors that have been quantified and examined here. 13 Test scores and memberships in an honorary society should not be taken as evidence of an entirely unchanging applicant pool. However, in light of previously reported difficulties that EM faculty members have in accurately assessing applicants for letters of evaluation and in predicting position on rank lists, it is important that the relative meaning of these scores and designations be understood. ...
Article
Full-text available
Since 1978, the NRMP has published data demonstrating characteristics of applicants who have matched into their preferred specialty in the NRMP main residency match. These data have been published approximately every two years. There is limited information about trends within these published data for students matching into emergency medicine (EM). Our objective was to investigate and describe trends in NRMP data, including the ratio of applicants to available EM positions, USMLE Step 1 and Step 2 scores (compared to the national means), number of programs ranked, and AOA membership among US seniors matching into EM. This was a retrospective observational review of NRMP data published between 2007 and 2016. The data were analyzed using ANOVA and Fisher’s exact test to determine statistical significance. The ratio of applicants to available EM positions remained essentially stable from 2007 to 2014, but did increase slightly in 2016. A net upward trend in overall Step 1 and Step 2 scores for EM applicants was observed. However, this did not outpace the national trend increase in Step 1 and 2 scores overall. There was no statistical difference in the mean number of programs ranked by EM applicants among the years studied (p=0.93). Among time intervals, there was a difference in the number of EM applicants with AOA membership (p=0.043) due to a drop in the number of AOA students in 2011. No sustained statistical trend was identified over the 7-year period studied. NRMP data demonstrate trends among EM applicants that are similar to national trends in other specialties for USMLE board scores, and stability in number of programs ranked and AOA membership. EM does not appear to have become more competitive relative to other specialties or previous years in these categories.
Article
Objectives: To examine the Urology residency application process, particularly the interview. Historically, the residency interview has been vulnerable to bias and has not been shown to predict future residency performance. Our goal is to determine the relationship between pre-interview metrics and post-interview ranking using best practices for Urology resident selection, including holistic review, blinded interviews, and structured behaviorally anchored questions. Methods: Applications were assessed on cognitive (Alpha Omega Alpha [AOA], class rank [CR], junior year clinical clerkship [JYCC] grades) and noncognitive attributes (letters of recommendation [LOR], personal statement [PS], demographics, research, personal characteristics) by reviewers blinded to USMLE scores and photograph. Interviewers were blinded to the application other than PS and LORs. Interviews consisted of a structured behaviorally anchored question (SBI) and an unstructured interview (UI). Odds ratios were determined comparing pre-interview (PI) and interview impressions. Results: 51 applicants were included in the analysis. USMLE Step 1 score (avg 245) was associated with AOA, CR, JYCC, and PS. The UI score was associated with the LOR (p=0.04), whereas SBI scores were not (p=0.5). Faculty rank was associated with SBI, UI, and overall interview (OI) scores (p<0.001). Faculty rank was also associated with LOR. Resident impressions of interviewees were associated with faculty interview scores (p=0.001) and faculty rank (p<0.001). Conclusions: Traditional interviews may be biased toward application materials and may be balanced with behavioral questions. While Step 1 score does not offer additional information over other PI metrics, blinded interviews may offer discriminant validity over a pre-interview rubric.
Article
Purpose: With the change in Step 1 score reporting, Step 2 Clinical Knowledge (CK) may become a pivotal factor in resident selection. This systematic review and meta-analysis seeks to synthesize existing observational studies that assess the relationship between Step 2 CK scores and measures of resident performance. Method: The authors searched MEDLINE, Web of Science, and Scopus databases using terms related to Step 2 CK in 2021. Two researchers identified studies investigating the association between Step 2 CK and measures of resident performance and included studies if they contained a bivariate analysis examining Step 2 CK scores' association with an outcome of interest: in-training examination (ITE) scores, board certification examination (BCE) scores, select Accreditation Council for Graduate Medical Education (ACGME) core competency assessments, overall resident performance evaluations, or other subjective measures of performance. For outcomes that were investigated by 3 or more studies, pooled effect sizes were estimated with random-effects models. Results: Among 1,355 potential studies, 68 met inclusion criteria and 43 were able to be pooled. There was a moderate positive correlation between Step 2 CK and ITE scores (0.52, 95% CI 0.45-0.59, P < .01). There was a moderate positive correlation between Step 2 CK and ITE scores for both nonsurgical (0.59, 95% CI 0.51-0.66, P < .01) and surgical specialties (0.41, 95% CI 0.33-0.48, P < .01). There was a very weak positive correlation between Step 2 CK scores and subjective measures of resident performance (0.19, 95% CI 0.13-0.25, P < .01). Conclusions: This study found Step 2 CK scores have a statistically significant moderate positive association with future examination scores and a statistically significant weak positive correlation with subjective measures of resident performance. These findings are increasingly relevant as Step 2 CK scores will likely become more important in resident selection.
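For context, a random-effects pooling of correlation coefficients of the kind described above can be sketched as follows, using a Fisher z-transformation and a DerSimonian-Laird estimate of between-study variance. The study correlations and sample sizes here are invented placeholders, not values from the review.

```python
# Minimal sketch (not the authors' code) of a DerSimonian-Laird random-effects
# meta-analysis of correlation coefficients after Fisher z-transformation.
import numpy as np

r = np.array([0.55, 0.48, 0.60, 0.42, 0.51])   # hypothetical study correlations
n = np.array([120, 85, 200, 150, 95])          # hypothetical sample sizes

z = np.arctanh(r)                 # Fisher z-transform of each correlation
v = 1.0 / (n - 3)                 # within-study variance of z
w = 1.0 / v                       # fixed-effect weights

# DerSimonian-Laird estimate of between-study variance tau^2.
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

# Random-effects pooled estimate, back-transformed to the r scale.
w_star = 1.0 / (v + tau2)
z_pooled = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled r = {np.tanh(z_pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```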
Article
Residency application numbers have skyrocketed in the last decade, and stakeholders have scrambled to identify and deploy methods of reducing the number of applications submitted to each program. These interventions have traditionally focused on the logistics of the application submission and review process, neglecting many of the drivers of overapplication. Implementing application caps, preference signaling as described by Pletcher and colleagues in this issue, or an early Match does not address the fear of not matching that applicants hold, the lack of transparent data available for applicants to assess their alignment with a specific program, or issues of inequity in the residency selection process. Now is the time to reconsider the residency selection process itself. As competency-based medical education emerges as the predominant educational paradigm, residency selection practices must also shift to align with societal, specialty, and program outcomes. The field of industrial and organizational psychology offers a multitude of tools (e.g., job analysis) by which to define the necessary outcomes of residency training. These tools also provide programs with the infrastructure around which to scaffold an outcomes-oriented approach to the residency selection process. Programs then can connect residency selection to training outcomes, longitudinal assessment modalities, and the evolving learning environment. To achieve an outcomes-oriented residency selection process, stakeholders at all levels will need to invest in coproducing novel ways forward. These solutions range from defining program priorities to implementing national policy. Focusing on outcomes will facilitate a more transparent residency selection process while also allowing logistics-level interventions to be successful, as applicants will be empowered to better assess their alignment with each program and apply accordingly.
Article
Background To match medical students into residency training programs, both the program and student create rank order lists (ROLs). We aim to investigate temporal trends in ROL lengths across 7 match cycles between 2014 and 2021 for both matched and unmatched residency applicants and programs. Methods This was a retrospective study of ROLs across 7 match cycles, 2014-2021. Residency match and ROL data were extracted from the NRMP database to assess the number of programs filled and unfilled, length of ROLs, position matched, and average ranks per position for osteopathic (DO) and allopathic (MD) medical programs. Results For filled residency programs, the average ROL length consistently increased from 70.72 in 2015 to 88.73 in 2021 (P = .003), with ROL lengths consistently longer for filled vs unfilled residency programs (P < .001). The average ROL length for matched applicants increased consistently from 10.41 in 2015 to 12.35 in 2021 (P = .002), with matched applicants having consistently longer ROLs than unmatched applicants (P < .001). From 2015 to 2021, in both MD and DO applicants, progressively lower proportions of applicants matched their first and second choices. Conclusion Trends across the past 7 residency match cycles suggest that ROL lengths for both programs and applicants have been increasing, with matched programs and applicants submitting significantly longer ROLs than unmatched applicants. Additionally, fewer applicants are matching at their preferred programs over time. Our findings support the mounting evidence that the Match has become increasingly congested, and we discuss the possible factors that may be contributing to the current state of the Match as well as potential solutions.
Article
Residency programs should use a systematic method of recruitment that begins with defining unique desired candidate attributes. Commonly sought-after characteristics may be delineated via the residency application. Scores from standardized examinations taken in medical school predict academic success and may correlate with overall performance. Strong letters of recommendation and a personal history of prior success outside the medical field both forecast success in residency. Interviews are crucial to determining fit within a program, and remain a valid measure of an applicant's ability to prosper in a particular program, even with many interviews being completed in the virtual realm.
Article
Background: The American Orthopaedic Association (AOA) released the standardized letter of recommendation (SLOR) form to provide standardized information to evaluators of orthopaedic residency applicants. The SLOR associates numerical data to an applicant's letter of recommendation. However, it remains unclear whether the new letter form effectively distinguishes among orthopaedic applicants, for whom letters are perceived to suffer from "grade inflation." In addition, it is unknown whether letters from more experienced faculty members differ in important ways from those written by less experienced faculty. Questions/purposes: (1) What proportion of SLOR recipients were rated in the top 10th percentile and top one-third of the applicant pool? (2) Did letters from program leaders (program directors and department chairs) demonstrate lower aggregate SLOR scores compared with letters written by other faculty members? (3) Did letters from away rotation program leaders demonstrate lower aggregate SLOR scores compared with letters written by faculty at the applicant's home institution? Methods: This was a retrospective, single institution study examining 559 applications from the 2018 orthopaedic match. Inclusion criteria were all applications submitted to this residency. Exclusion criteria included all letters without an associated SLOR. In all, 1852 letters were received; of these, 26% (476) were excluded, and 74% (1376) were analyzed for SLOR data. We excluded 12% (169 of 1376) of letters that did not include a final summative score. Program leaders were defined as orthopaedic chairs and program directors. Away rotation letters were defined as letters written by faculty during an applicant's away rotation. Our study questions were answered accounting for each subcategory on the SLOR (scale 1-10) and the final ranking (scale 1-5) to form an aggregated score from the SLOR form for each letter. All SLOR questions were included in the creation of these scores. Correlations between program leaders and other faculty letter writers were assessed using a chi-square test. We considered a 1-point difference on 5-point scales to be a clinically important difference and a 2-point difference on 10-point scales to be clinically important. Results: We found that 36% (437 of 1207) of the letters we reviewed indicated the candidate was in the top 10th percentile of all applicants evaluated, and 51% (619 of 1207) of the letters we reviewed indicated the candidate was in the top one-third of all applicants evaluated. We found no clinically important difference between program leaders and other faculty members in terms of summative scores on the SLOR (1.9 ± 0.7 versus 1.7 ± 0.7, mean difference -0.2 [95% CI -0.3 to 0.1]; p < 0.001). We also found no clinically important difference between home program letter writers and away program letter writers in terms of the mean summative scores (1.9 ± 0.7 versus 1.7 ± 0.7, mean difference 0.2; p < 0.001). Conclusion: In light of these discoveries, programs should examine the data obtained from SLOR forms carefully. SLOR scores skew very positively, which may benefit weaker applicants and harm stronger applicants. Program leaders give summative scores that do not differ substantially from junior faculty, suggesting there is no important difference in grade inflation between these faculty types, and as such, there is no strong need to adjust scores by faculty level. 
Likewise, away rotation letter writers' summative scores were not substantially different from those of home institution letters writers, indicating that there is no need to adjust scores between these groups either. Based on these findings, we should interpret letters with the understanding that overall there is substantial grade inflation. However, while weight used to be given to letters written by senior faculty members and those obtained on away rotations, we should now examine them equally, rather than trying to adjust them for overly high or low scores. Level of evidence: Level III, therapeutic study.
Article
Otolaryngology continues to have one of the lowest percentages of black physicians of any surgical specialty, a number that has not improved in recent years. The history of exclusion of black students in medical education as well as ongoing bias affecting examination scores, clerkship grades and evaluations, and honors society acceptance of black students may factor into the disproportionately low number of black otolaryngology residents. In order to increase the number of black physicians in otolaryngology, intentional steps must be taken to actively recruit, mentor, and train black physicians specializing in otolaryngology.
Article
Purpose: Given the growing emphasis placed on clerkship performance for residency selection, clinical evaluation and its grading implications are critically important; therefore, the authors conducted this study to determine which evaluation components best predict a clinical honors recommendation across three core clerkships. Method: Student evaluation data were collected during academic years 2015-2017 from the third-year internal medicine (IM), pediatrics, and surgery clerkships at the University of Alabama at Birmingham School of Medicine. The authors used factor analysis to examine 12 evaluation components (12 items), and they applied multi-level logistic regression to correlate evaluation components with a clinical honors recommendation. Results: Of the 3,947 completed evaluations, 1,508 (38%) recommended clinical honors. The top items that predicted a clinical honors recommendation were clinical reasoning skills for IM (odds ratio [OR] 2.8; 95% confidence interval [CI] 1.9 to 4.2; P < 0.001), presentation skills for surgery (OR 2.6; 95% CI 1.6 to 4.2; P < 0.001), and knowledge application for pediatrics (OR 4.8; 95% CI 2.8 to 8.2; P < 0.001). Students who spent more time with their evaluators were more likely to receive clinical honors (P < 0.001), and residents were more likely than faculty to recommend clinical honors (P < 0.001). Of the top 5 evaluation items associated with clinical honors, 4 composed a single factor for all clerkships: clinical reasoning, knowledge application, record keeping, and presentation skills. Conclusions: The 4 characteristics that best predicted a clinical honors recommendation in all disciplines (clinical reasoning, knowledge application, record keeping, and presentation skills) correspond with traditional definitions of clinical competence. Structural components such as contact time with evaluators also correlated with a clinical honors recommendation. These findings provide empiric insight into the determination of clinical honors and the need for heightened attention to structural components of clerkships and increased scrutiny of evaluation rubrics.
Article
Objective Our goal was to identify aspects of residency applications predictive of subsequent performance during pediatric internship. Methods We conducted a retrospective cohort study of graduates of U.S. medical schools who began pediatric internship in a large pediatric residency program in the summers of 2013 through 2017. The primary outcome was the weighted average of subjects’ ACGME pediatric milestones scores at the end of pediatric internship. To determine factors independently associated with performance, we conducted multivariate linear mixed-effects models controlling for match year and Milestone grading committee as random effects and the following application factors as fixed effects: letter of recommendation strength, clerkship grades, medical school reputation, master's or PhD degrees, gender, USMLE Step 1 score, Alpha Omega Alpha membership, private medical school, and interview score. Results Our study population included 195 interns. In multivariate analyses, the aspects of applications significantly associated with composite Milestone scores at the end of internship were LOR strength (estimate 0.09, 95% confidence intervals 0.04, 0.15), numbers of clerkship honors (est. 0.05, 95% CI: 0.01-0.09), medical school ranking (est. 0.04, 95% CI: 0.08-0.01), having a master's degree (est. 0.19, 95% CI: 0.03-0.36), and not having a PhD (est. 0.14, 95% CI: 0.02-0.26). Overall the final model explained 18% of the variance in milestone scoring. Conclusion Letter of recommendation strength, clerkship grades, medical school ranking, and having obtained a Master's degree were significantly associated with higher clinical performance during pediatric internship.
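A rough sketch of a mixed-effects model in this spirit is shown below. It assumes a hypothetical long-format dataset and, for simplicity, uses only match year as the random intercept (the study above also modeled the Milestone grading committee), so it should be read as an illustration rather than a reproduction of the authors' analysis; all column names are assumptions.

```python
# Rough sketch: linear mixed-effects model of an end-of-internship milestone
# composite on application factors, with a random intercept per match year.
import pandas as pd
import statsmodels.formula.api as smf

interns = pd.read_csv("intern_applications.csv")  # hypothetical file

model = smf.mixedlm(
    "milestone_composite ~ lor_strength + clerkship_honors + school_rank"
    " + masters + phd + step1 + aoa + interview_score",
    data=interns,
    groups=interns["match_year"],   # random intercept for match year
).fit()
print(model.summary())
```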
Article
Background Program directors (PDs) in emergency medicine (EM) receive an abundance of applications for very few residency training spots. It is unclear which selection strategies will yield the most successful residents. Many authors have attempted to determine which items in an applicant’s file predict future performance in EM. Objectives The purpose of this scoping review is to examine the breadth of evidence related to the predictive value of selection factors for performance in EM residency. Methods The authors systematically searched four databases and websites for peer‐reviewed and gray literature related to EM admissions published between 1992 and February 2019. Two reviewers screened titles and abstracts for articles that met the inclusion criteria, according to the scoping review study protocol. The authors included studies if they specifically examined selection factors and whether those factors predicted performance in EM residency training in the United States. Results After screening 23,243 records, the authors selected 60 for full review. From these, the authors selected 15 published manuscripts, one unpublished manuscript, and 11 abstracts for inclusion in the review. These studies examined the United States Medical Licensing Examination (USMLE), Standardized Letters of Evaluation, Medical Student Performance Evaluation, medical school attended, clerkship grades, membership in honor societies, and other less common factors and their association with future EM residency training performance. Conclusions The USMLE was the most common factor studied. It unreliably predicts clinical performance, but more reliably predicts performance on licensing examinations. All other factors were less commonly studied and, similar to the USMLE, yielded mixed results.
Article
Background: The standardized letter of evaluation (SLOE) in emergency medicine (EM) is one of the most important items in a student's application to EM residency and replaces narrative letters of recommendation. The SLOE ranks students into quantile categories in comparison to their peers for overall performance during an EM clerkship and for their expected rank list position. Gender differences exist in several assessment methods in undergraduate and graduate medical education. No authors have recently studied whether there are differences in the global assessment of men and women on the SLOE. Objectives: The objective of this study was to determine if there is an effect of student gender on the outcome of a SLOE. Methods: This was a retrospective observational study examining SLOEs from applications to a large urban, academic EM residency program from 2015 to 2016. Composite scores (CSs), comparative rank scores (CRSs), and rank list position scores (RLPSs) on the SLOE were compared for female and male applicants using Mann-Whitney U-test. Results: From a total 1,408 applications, 1,038 applicants met inclusion criteria (74%). We analyzed 2,092 SLOEs from these applications. Female applicants were found to have slightly lower and thus better CRSs, RLPSs, and CSs than men. The mean CRS for women was 2.27 and 2.45 for men (p < 0.001); RLPS for women was 2.32 and 2.52 for men (p < 0.001) and CS was 4.59 for women and 4.97 for men (p < 0.001). Conclusions: Female applicants have somewhat better performance on the EM SLOE than their male counterparts.
Article
Background: Each application cycle, emergency medicine (EM) residency programs attempt to predict which applicants will be most successful in residency and rank them accordingly on their program's Rank Order List (ROL). Objective: To determine whether ROL position, participation in a medical student rotation at the program, or United States Medical Licensing Examination (USMLE) Step 1 rank within a class is predictive of residency performance. Methods: All full-time EM faculty at Los Angeles County + University of Southern California (LAC + USC), Harbor-UCLA (Harbor), Alameda Health System-Highland (Highland), and the University of California-Irvine (UCI) ranked each resident in the classes of 2013 and 2014 at time of graduation. From these anonymous surveys, a graduation ROL was created and, using Spearman's rho, compared with the program's adjusted ROL, USMLE Step 1 rank, and whether the resident participated in a medical student rotation. Results: A total of 93 residents were evaluated. Graduation ROL position did not correlate with adjusted ROL position (Rho = 0.14, p = 0.19) or USMLE Step 1 rank (Rho = 0.15, p = 0.14). Interestingly, among the subgroup of residents who rotated as medical students, adjusted ROL position demonstrated significant correlation with final ranking on graduation ROL (Rho = 0.31, p = 0.03). Conclusions: USMLE Step 1 score rank and adjusted ROL position did not predict resident performance at time of graduation. However, adjusted ROL position was predictive of future residency success in the subgroup of residents who had completed a sub-internship at their respective programs. These findings should guide the future selection of EM residents.
Article
Purpose: The use of a standardized knowledge test to assess postgraduate year 1 (PGY1) pharmacy residency training was evaluated. Methods: This was a retrospective review of a prospectively administered exam. A bank of questions was developed by preceptors from each of the core rotation disciplines: general medicine (including ambulatory care and oncology), pediatrics, critical care (including transplantation), drug information, operations, practice management, and psychiatry. Board-certified pharmacy specialists at our institution were asked to submit 5-10 questions with answers that would likely be encountered by residents during rotation in their specific specialty area. The exam was administered at the beginning and the end of the resident's PGY1 year. Results: A total of 49 PGY1 residents completed the examination during the first and last months of their residency training. Residents' overall scores improved 5-10% annually from baseline to completion of their residency. The mean overall exam score significantly improved from baseline after completion of a PGY1 residency at our institution for all four class years. All four residency classes demonstrated an increase from baseline scores in most core disciplines with the exception of practice management, which decreased every year of the examination. Conclusion: Scores on a standardized exam developed to assess the baseline knowledge of incoming PGY1 residents and the effect of one year of residency training improved in the majority of practice areas at the end of the year compared to scores at the beginning of the year.
Article
Full-text available
Introduction The standard letter of recommendation in emergency medicine (SLOR) was developed to standardize the evaluation of applicants, improve inter-rater reliability, and discourage grade inflation. The primary objective of this study was to describe the distribution of categorical variables on the SLOR in order to characterize scoring tendencies of writers. Methods We performed a retrospective review of all SLORs written on behalf of applicants to the three Emergency Medicine residency programs in the University of Arizona Health Network (i.e. the University Campus program, the South Campus program and the Emergency Medicine/Pediatrics combined program) in 2012. All “Qualifications for Emergency Medicine” and “Global Assessment” variables were analyzed. Results 1457 SLORs were reviewed, representing 26.7% of the total number of Electronic Residency Application Service applicants for the academic year. Letter writers were most likely to use the highest/most desirable category on “Qualifications for EM” variables (50.7%) and to use the second highest category on “Global Assessments” (43.8%). For 4-point scale variables, 91% of all responses were in one of the top two ratings. For 3-point scale variables, 94.6% were in one of the top two ratings. Overall, the lowest/least desirable ratings were used less than 2% of the time. Conclusions SLOR letter writers do not use the full spectrum of categories for each variable proportionately. Despite the attempt to discourage grade inflation, nearly all variable responses on the SLOR are in the top two categories. Writers use the lowest categories less than 2% of the time. Program Directors should consider tendencies of SLOR writers when reviewing SLORs of potential applicants to their programs.
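The descriptive tabulation behind findings like these is straightforward. The toy example below, with invented counts, shows how the share of ratings falling in the top two categories of a single SLOR item might be computed.

```python
# Tiny illustration with made-up counts: how often is each category of a
# SLOR "Global Assessment"-style item used, and what share falls in the
# top two categories?
import pandas as pd

ratings = pd.Series(
    ["outstanding"] * 420 + ["excellent"] * 640
    + ["very good"] * 330 + ["good"] * 67
)
counts = ratings.value_counts()
top_two = counts[["outstanding", "excellent"]].sum() / len(ratings)
print(counts)
print(f"top two categories: {top_two:.1%}")
```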
Article
Full-text available
Several factors influence the final placement of a medical student candidate on an emergency medicine (EM) residency program's rank order list, including EM grade, standardized letter of recommendation, medical school class rank, and US Medical Licensing Examination (USMLE) scores. We sought to determine the correlation of these parameters with a candidate's final rank on a residency program's rank order list. We used a retrospective cohort design to examine 129 candidate packets from an EM residency program. Class ranks were assessed according to the instructions provided by the students' medical schools. EM grades were scored from 1 (honors) to 5 (fail). Global assessments noted on the standardized letter of recommendation (SLOR) were scored from 1 (outstanding) to 4 (good). USMLE scores were reported as the candidate's 3-digit scores. Spearman's rank correlation coefficient was used to analyze data. Electronic Residency Application Service packets for 127/129 (98.4%) candidates were examined. The following parameters correlated positively with a candidate's final placement on the rank order list: EM grade, ρ = 0.379, P < 0.001; global assessment, ρ = 0.332, P < 0.001; and class rank, ρ = 0.234, P = 0.035. We found a negative correlation between final placement on the rank order list and both USMLE Step 1 scores, ρ = -0.253, P = 0.006; and USMLE Step 2 scores, ρ = -0.348, P = 0.004. Higher scores on EM rotations, medical school class ranks, and SLOR global assessments correlated with higher placements on a rank order list, whereas candidates with higher USMLE scores had lower placements on a rank order list. However, none of the parameters examined correlated strongly with the ultimate position of a candidate on the rank list, which underscores that other factors may influence a candidate's final ranking.
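As a small worked example of the Spearman analysis described above, the sketch below correlates a hypothetical final rank-order-list position with an EM rotation grade and a Step 1 score. All numbers are invented; the sign convention of the toy data is noted in the comments and is not taken from the study.

```python
# Minimal sketch with made-up numbers: Spearman's rank correlation between
# final rank-order-list position and two pre-interview metrics. In this toy
# data, rank 1 is the best position and grade 1 is honors, so a "better" grade
# paired with a better (lower-numbered) rank yields a positive rho.
from scipy.stats import spearmanr

rank_position = [1, 2, 3, 4, 5, 6, 7, 8]       # hypothetical final ROL positions
em_grade      = [1, 1, 2, 1, 2, 3, 2, 3]       # 1 = honors ... 5 = fail
usmle_step1   = [248, 252, 236, 244, 230, 241, 228, 235]  # hypothetical scores

rho, p = spearmanr(rank_position, em_grade)
print(f"EM grade vs. rank position: rho={rho:.2f}, p={p:.3f}")

rho, p = spearmanr(rank_position, usmle_step1)
print(f"Step 1 vs. rank position: rho={rho:.2f}, p={p:.3f}")
```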
Article
Full-text available
In selecting a medical student for a urology residency, a set of preconceived criteria as to what will predict a successful resident are generally applied. To determine what factors predict an "excellent" clinical resident and a successful in-service test taker, we analyzed 10 years of urology resident files. PARTICIPANTS AND STUDY DESIGN: Retrospective chart review of 29 urology residents at Washington University graduating from July 2000 to July 2009. Medical student applications and interview evaluations were compared with future performance as a general surgical intern and then as a urology resident, in terms of clinical performance and in-service examination scores. Of 29 residents, based on clinical evaluations over 4 years of urology residency, 12 were "excellent," 17 "average and needing improvement." "Excellent" residents had higher applicant rank submitted to the "match" (7.2 vs. 12.1, p = 0.04) and better letters of recommendation (3.0 vs. 2.5, 0.018). "Excellent" residents also had better evaluations as an intern (3.9 vs 2.7, p < 0.001). "Good" urology in-service examination test takers compared with "below average" test takers noted higher rank on the match list (7.8 vs 12.1, p = 0.04), better quality med school (2.6 vs 2.0; p = 0.002), higher USMLE scores (92.5 vs 86.6% tile, p = 0.02), American Board of Surgery in-training examination (ABSITE) score (58.6 vs 37.2% tile, p = 0.04), and were more likely to pass the board examination (100% vs 76.9%, p = 0.03). Residents with higher clinical evaluations were also more likely to go into fellowships (83.3% vs 16.2%, OR = 23.3) and academic careers (41.6 vs 11.1%, OR = 5.71). Performance as a surgery intern predicts future performance as a GU Resident. "Good" test takers as medical students and as interns continue to test well as GU residents. Early identification, intervention, and mentoring while still an intern are essential. Selection criteria we currently use to select GU residents are surprisingly predictive.
Article
Full-text available
Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. A structured interview did not yield a higher overall reliability than either unstructured interview. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains established as essential to succeed in this residency program.
Article
Full-text available
Surveys have suggested one of the most important determinants of orthopaedic resident selection is completion of an orthopaedic clerkship at the program director's institution. The purpose of this study was to further elucidate the significance of visiting externships on the resident selection process. We retrospectively reviewed data for all medical students applying for orthopaedic surgery residency from six medical schools between 2006 and 2008, for a total of 143 applicants. Univariate and multivariate regression analyses were used to compare students who matched successfully versus those who did not in terms of number of away rotations, United States Medical Licensing Examination scores, class rank, and other objective factors. Of the 143 medical students, 19 did not match in orthopaedics (13.3%), whereas the remaining 124 matched. On multiple logistic regression analysis, whether a student did more than one home rotation, how many away rotations a student performed, and United States Medical Licensing Examination Step 1 score were factors in the odds of match success. Orthopaedic surgery is one of the most competitive specialties in medicine; the away rotation remains an important factor in match success.
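As a rough sketch of the multivariate approach described above (not the authors' actual model or data), a multiple logistic regression of match success on home rotations, away rotations, and Step 1 score might look like the following; all variable names and values are simulated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 143  # same cohort size as the study, but every value below is simulated
df = pd.DataFrame({
    "away_rotations": rng.integers(0, 4, n),
    "multiple_home_rotations": rng.integers(0, 2, n),
    "usmle_step1": rng.normal(235, 15, n).round(),
})
# Simulated binary outcome loosely tied to the predictors, purely for illustration.
logit = -18 + 0.6 * df["away_rotations"] + 0.8 * df["multiple_home_rotations"] + 0.07 * df["usmle_step1"]
df["matched"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["away_rotations", "multiple_home_rotations", "usmle_step1"]])
model = sm.Logit(df["matched"], X).fit(disp=False)
print(model.summary())
print("odds ratios:\n", np.exp(model.params))
```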
Article
Full-text available
Effectiveness of medical education programs is most meaningfully measured by the performance of their graduates. The aim was to assess the value of measurements obtained in medical schools in predicting future performance in medical practice. The English literature from 1955 to 2004 was searched using MEDLINE, Embase, Cochrane's EPOC (Effective Practice and Organization of Care Group), Controlled Trial databases, ERIC, British Education Index, PsycINFO, Timelit, Web of Science, and hand searching of medical education journals. Selected studies included students assessed or followed up to internship, residency, and/or practice after postgraduate training. Assessment systems and instruments studied (predictors) were the National Board of Medical Examiners (NBME) examinations Parts I and II, preclinical and clerkship grade-point average, Objective Structured Clinical Examination scores, and undergraduate dean's rankings and honors society membership. Outcome measures were residency supervisor ratings, NBME Part III, residency in-training examinations, American specialty board examination scores, and on-the-job practice performance. Data on study objectives, design, sample, variables, statistical analysis, and results were extracted using a modification of the BEME data extraction form. All included studies are summarized in tabular form. DATA ANALYSIS AND SYNTHESIS: Quantitative meta-analysis and qualitative approaches were used for data analysis and synthesis, including assessment of the methodological quality of the included studies. Of 569 studies retrieved with our search strategy, 175 full-text studies were reviewed. A total of 38 studies met our inclusion criteria, and 19 had sufficient data to be included in a meta-analysis of correlation coefficients. The highest correlation between predictor and outcome was between NBME Part II and NBME Part III (r = 0.72, 95% CI 0.30-0.49), and the lowest was between NBME Part I and supervisor rating during residency (r = 0.22, 95% CI 0.13-0.30). The approach to studying the predictive value of assessment tools varied widely between studies, and no consistent approach could be identified. Overall, undergraduate grades and rankings were moderately correlated with internship and residency performance. Performance on similar instruments was more closely correlated. Studies assessing practice performance beyond postgraduate training programs were few. There is a need for a more consistent and systematic approach to studies of the effectiveness of undergraduate assessment systems and tools and their predictive value. Although existing tools do appear to have low to moderate correlation with postgraduate training performance, little is known about their relationship to longer-term practice patterns and outcomes.
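The meta-analysis of correlation coefficients mentioned above is conventionally done by transforming each study's r to Fisher's z, pooling with inverse-variance weights, and back-transforming. The sketch below is a generic fixed-effect version with invented (r, n) pairs, not the review's data.

```python
import numpy as np

# Hypothetical (r, n) pairs for studies correlating a school measure with a residency outcome.
studies = [(0.45, 120), (0.30, 80), (0.55, 200), (0.25, 60)]

z = np.array([np.arctanh(r) for r, _ in studies])     # Fisher z transform of each correlation
var = np.array([1.0 / (n - 3) for _, n in studies])   # approximate sampling variance of z
weights = 1.0 / var

z_pooled = np.sum(weights * z) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = z_pooled - 1.96 * se_pooled, z_pooled + 1.96 * se_pooled

# Back-transform the pooled estimate and its confidence limits to the correlation scale.
print(f"pooled r = {np.tanh(z_pooled):.2f} "
      f"(95% CI {np.tanh(ci_low):.2f} to {np.tanh(ci_high):.2f})")
```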
Article
Purpose To explore the relationship(s) between USMLE, In-Training Exam, and American Board of Pediatrics (ABP) board-certifying exam scores within a pediatric residency training program. Methods Data were abstracted from records of graduating residents from the Department of Pediatrics at the University of Florida/Jacksonville from 1999 to 2004. Sixty-one residents were identified and their files were reviewed for the following information: USMLE Step 1 and 2 scores (if available), in-training exam scores, and eventual board scores as reported by the ABP. Correlational and regression analyses were performed, and receiver operating characteristic (ROC) curves were compared to evaluate the overall screening power of the tests by comparing their areas under the curve (AUC). Results The correlation coefficients between USMLE, in-training exam, and ABP scores were all statistically significant. In addition, USMLE Step 1 scores showed a strong correlation with board performance. Interestingly, none of the three in-training exam scores had any additional impact on predicting board performance given one's USMLE Step 1 score. USMLE Step 1 scores greater than 220 were associated with nearly a 100% passage rate on the board-certifying exam. Conclusions The data suggest that performance on Step 1 of the USMLE is an important predictor of a resident's chances of passing the pediatric boards. This information, which is available when a resident begins training, can be used to identify those at risk of not passing the boards. Individual learning plans can then be implemented early in training to maximize one's ability to pass the certifying exam.
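A minimal sketch of the ROC/AUC screening comparison described above is shown below, using simulated scores and pass/fail outcomes; the names step1 and itx, the cohort values, and the Youden-index cut point are illustrative assumptions, not the program's data or analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
n = 61
step1 = rng.normal(215, 18, n)                  # simulated USMLE Step 1 scores
itx = 0.6 * step1 + rng.normal(0, 12, n)        # simulated in-training exam, correlated with Step 1
passed = (step1 + rng.normal(0, 20, n)) > 200   # simulated board pass/fail outcome

for name, score in [("USMLE Step 1", step1), ("in-training exam", itx)]:
    auc = roc_auc_score(passed, score)
    fpr, tpr, thresholds = roc_curve(passed, score)
    # Youden's J picks the cut point that best separates pass from fail in this toy data.
    best = np.argmax(tpr - fpr)
    print(f"{name}: AUC = {auc:.2f}, suggested cut point = {thresholds[best]:.0f}")
```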
Article
Objectives: The standardized letter of evaluation (SLOE) was created in 1997 to provide residency program directors (PDs) with a summative evaluation that incorporates normative grading (i.e., comparisons to peers applying to emergency medicine [EM] training). Although the standard letter of recommendation (SLOR) has become increasingly popular and important in decision-making, it has not been studied in the past 12 years. To assess the SLOR's effectiveness and limitations, the perspective of EM PDs was surveyed in this study. Methods: After validation of the questionnaire by 10 retired PDs, the survey was sent to the PD of each of the 159 EM residencies that existed at that time. The survey was circulated via the Council of Emergency Medicine Residency Directors' (CORD) listserv from January 24, 2013, to February 13, 2013. Weekly e-mail reminders to all PDs served to increase participation. Results: A total of 150 of 159 PDs (94.3%) completed the questionnaire. Nearly all respondents (149 of 150; 99.3%) agreed that the SLOR is an important evaluative tool and should continue to be used. In the application process, 91 of 150 (60.7%) programs require one or more SLORs, and an additional 55 (36.7%) recommend but do not require a SLOR to be considered for interview. When asked to identify the top three factors in deciding who should be interviewed, the SLOR was ranked first (139 of 150; 92.7%), with EM rotation grades ranked second (73 of 150; 48.7%). The factors that were most often identified as the top three that diminish the value of the SLOR in order were 1) "inflated evaluations" (121 of 146; 82.9%), 2) "inconsistency between comments and grades" (106 of 146; 72.6%), and 3) "inadequate perspective on candidate attributes in the written comments" and "inexperienced authors" (60 of 146; 41.1% each). Conclusions: The SLOR appears to be the most important tool in the EM PD's armamentarium for determining which candidates should be interviewed for residency training. Although valuable, the SLOR's potential utility is hampered by a number of factors, the most important of which is inflated evaluations. Focused changes in the SLOR template should be mindful that it appears, in general, to be successful in its intended purpose.
Article
Faculty in graduate medical education programs may not have uniform approaches to differentiating the quality of residents, and reviews of evaluations suggest that faculty use different standards when assessing residents. Standards for assessing residents also do not consistently map to items on evaluation forms. One way to improve assessment is to reach consensus on the traits and behaviors that are (or should be) present in the best residents. A trained interviewer conducted semistructured interviews with faculty affiliated with 2 pediatrics residency programs until content saturation was achieved. Interviewees were asked to describe specific traits present in residents they identify as the best. Interviews were recorded and transcribed. We used an iterative, inductive approach to generate a coding scheme and identify common themes. From 23 interviews, we identified 7 thematic categories of traits and behaviors: personality, energy, professionalism, team behaviors, self-improvement behaviors, patient-interaction behaviors, and medical knowledge and clinical skills (including a subcategory, knowledge integration). Most faculty interviewees focused on traits like passion, enthusiasm, maturity, and reliability. Examination score or intelligence was mentioned less frequently than traits and behaviors categorized under personality and professionalism. Faculty identified many traits and behaviors in the residents they define as the best. The thematic categories had incomplete overlap with Accreditation Council for Graduate Medical Education (ACGME) and CanMEDS competencies. This research highlights the ongoing need to review our assessment strategies, and may have implications for the ACGME Milestone Project.
Article
To evaluate the information available about otolaryngology residency applicants for factors that may predict future success as an otolaryngologist. Retrospective review of residency applications; survey of resident graduates and otolaryngology clinical faculty. Otolaryngology residency program. Otolaryngology program graduates from 2001 to 2010 and current clinical faculty from Barnes-Jewish Hospital/Washington University School of Medicine. Overall ratings of the otolaryngology graduates by clinical faculty (on a 5-point scale) were compared with the resident application attributes that might predict success. The application factors studied were United States Medical Licensing Examination Part 1 score, Alpha Omega Alpha Honor Medical Society election, medical school grades, letters of recommendation, rank of the medical school, extracurricular activities, residency interview, and experience as an acting intern. Forty-six graduates were included in the study. The overall faculty rating of the residents showed good interrater reliability. The objective factors, letters of recommendation, experience as an acting intern, and musical excellence showed no correlation with higher faculty rating. Rank of the medical school and the faculty interview correlated weakly with faculty rating. Having excelled in a team sport correlated with higher faculty rating. Many of the application factors typically used during otolaryngology residency candidate selection may not be predictive of future capabilities as a clinician. Prior excellence in a team sport may suggest continued success in the health care team.
Article
During the evaluation process, Residency Admissions Committees typically gather data on objective and subjective measures of a medical student's performance through the Electronic Residency Application Service, including medical school grades, standardized test scores, research achievements, nonacademic accomplishments, letters of recommendation, the dean's letter, and personal statements. Using these data to identify which medical students are likely to become successful residents in an academic residency program in obstetrics and gynecology is difficult and to date, not well studied. To determine whether objective information in medical students' applications can help predict resident success. We performed a retrospective cohort study of all residents who matched into the Johns Hopkins University residency program in obstetrics and gynecology between 1994 and 2004 and entered the program through the National Resident Matching Program as a postgraduate year-1 resident. Residents were independently evaluated by faculty and ranked in 4 groups according to perceived level of success. Applications from residents in the highest and lowest group were abstracted. Groups were compared using the Fisher exact test and the Student t test. Seventy-five residents met inclusion criteria and 29 residents were ranked in the highest and lowest quartiles (15 in highest, 14 in lowest). Univariate analysis identified no variables as consistent predictors of resident success. In a program designed to train academic obstetrician-gynecologists, objective data from medical students' applications did not correlate with successful resident performance in our obstetrics-gynecology residency program. We need to continue our search for evaluation criteria that can accurately and reliably select the medical students that are best fit for our specialty.
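The group comparison described above (highest vs. lowest quartile, Fisher exact test for categorical predictors, Student t test for continuous ones) can be sketched as follows; the counts and scores are invented for illustration and are not the Hopkins data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: AOA membership (rows) by success group (columns: highest, lowest).
table = np.array([[6, 9],   # AOA members:    6 in highest group, 9 in lowest
                  [9, 5]])  # non-members:    9 in highest group, 5 in lowest
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

# Hypothetical continuous predictor: USMLE Step 1 score by group.
rng = np.random.default_rng(3)
highest = rng.normal(228, 15, 15)
lowest = rng.normal(222, 15, 14)
t_stat, p_t = stats.ttest_ind(highest, lowest)
print(f"Student t test: t = {t_stat:.2f}, p = {p_t:.3f}")
```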
Article
The current resident selection process relies heavily on medical student performance, with the assumption that analysis of this performance will aid in the selection of successful residents. Although there is abundant literature analyzing indicators of medical student performance measures as predictors of success in residency, wide-ranging differences in beliefs persist concerning their validity. We sought to collect and review studies that have correlated medical student performance with residency performance. The English-language literature from 1996 to 2009 was searched with PubMed. Selected studies evaluated medical students on the basis of US Medical Licensing Examination scores, preclinical and clinical performance, research experience, objective structured clinical examination performance, medical school factors, honor society membership, Medical Student Performance Evaluations, letters of recommendation, and faculty interviews. Outcome measures were standardized residency examinations and residency supervisor ratings. The medical student factors that correlated most strongly with performance on examinations in residency were medical student examination scores, clinical performance, and honor society membership. Those that correlated most strongly with supervisor ratings were clinical grades, faculty interview, and medical school attended. Overall, there were inconsistent results for most performance measures. In addition to the lack of a widely used measure of success in residency, most studies were small, single institution, and single specialty, and thus of limited ability to generalize findings. No one medical student factor can be used to predict performance in residency. There is a need for a more consistent and systematic approach to determining predictors of success in residency.
Article
Multiple studies have attempted to determine which attributes are predictive of success during residency as well as the optimal method of selecting residents who possess these attributes. Factors that are consistently ranked as being important in the selection of candidates into orthopaedic residency programs include performance during orthopaedic rotation, United States Medical Licensing Examination (USMLE) Step 1 score, Alpha Omega Alpha Honor Medical Society membership, medical school class rank, interview performance, and letters of recommendation. No consensus exists regarding the best predictors of resident success, but trends do exist. High USMLE Step 1 scores have been shown to correlate with high Orthopaedic In-Training Examination scores and improved surgical skill ratings during residency, whereas higher numbers of medical school clinical honors grades have been correlated to higher overall resident performance, higher residency interpersonal skills grading, higher resident knowledge grading, and higher surgical skills evaluations. Successful resident performance can be measured by evaluating psychomotor abilities, cognitive skills, and affective domain.
Article
The Residency Review Committee requires that 65% of general surgery residents pass the American Board of Surgery qualifying and certifying examinations on the first attempt. The aim of this study was to identify predictors of successful first-attempt completion of the examinations. Age, sex, Alpha Omega Alpha Honor Medical Society status, class rank, honors in third-year surgery clerkship, interview score, rank list number, National Board of Medical Examiners/United States Medical Licensing Examination scores, American Board of Surgery In-Training Examination scores, resident awards, and faculty evaluations of senior residents were reviewed. Graduates who passed both examinations on the first attempt were compared with those who failed either examination on the first attempt. No subjective evaluations of performance predicted success other than resident awards. Significant objective predictors of successful first-attempt completion of the examinations were Alpha Omega Alpha status, ranking within the top one third of one's medical student class, National Board of Medical Examiners/United States Medical Licensing Examination Step 1 (>200, top 50%) and Step 2 (>186.5, top 3 quartiles) scores, and American Board of Surgery In-Training Examination scores >50th percentile (postgraduate years 1 and 3) and >33rd percentile (postgraduate years 4 and 5). Residency programs can use this information in selecting residents and in identifying residents who may need remediation.
Article
With all of the constraints facing residency programs today, such as increasing work-hour restrictions, a changing economy and its accompanying financial pressures, and the current generation of physicians' enhanced interest in lifestyle quality, it is imperative for orthopaedic educators to identify and select the best possible residents to fill our residency programs. Several recent editorials from major orthopaedic journals have discussed identifying resident quality and maximizing success during residency. One such commentary stressed that continuously refining and improving resident education and training are of the utmost importance to the future of orthopaedic surgery (and, indeed, to the future of medicine) because doing so today will facilitate the continued recruitment of top medical students to orthopaedic surgery [1]. In another recent editorial, in The Journal of Bone and Joint Surgery (American Volume), Deputy Editor Marc Swiontkowski remarked that the Orthopaedic Residency Review Committee's expanded role within the Accreditation Council for Graduate Medical Education (ACGME) has recently enabled us to attain a number of important goals in postgraduate education with respect to interviewing and selecting candidates for residency as well as educating and counseling residents during residency [2]. In order to build on these improvements, Swiontkowski stressed that we must continue to innovate and improve all aspects of orthopaedic surgery postgraduate education. Therefore, it is important for orthopaedic surgery postgraduate training programs to evaluate the means by which they identify and select the candidates who are likely to succeed during residency and, ultimately, in practice. Moreover, identifying predictors or factors associated with success during orthopaedic surgery residency is critical knowledge for program directors, selection committees, and students. Currently and for the last twenty years, resident selection decisions have been made by a committee of faculty at our institution that reviews all of the applications and then generates a consensus ranking …
Article
This article reviews the pertinent literature related to the process of selecting medical students for emergency medicine residency programs. The impact that academic performance in medical school, the interview, letters of recommendation, and other achievements have on the performance of the future resident is reviewed. All articles identified by an English-language MEDLINE search were reviewed by the authors for significance to the subject. Review of the relevant literature indicates that no precise correlation can be made between performance in medical school and achievements during residency, although there seems to be a correlation between academic performance in medical school and similar performance on board certification examinations.
To determine whether information collected during the National Resident Matching Program (NRMP) predicts clinical performance during residency. Ten faculty members rated the overall quality of 69 pediatric house officers as clinicians. After rating by the faculty, folders were reviewed for absolute rank on the NRMP match list; relative ranking (where they ranked in their postgraduate year 1 [PGY-1] group); scores on part I of the National Board of Medical Examiners (NBME) examination; grades during medical school pediatrics and internal medicine rotations; membership in the Alpha Omega Alpha Medical Honor Society; scores of faculty interviews during intern application; scores on the pediatric in-service examination during PGY-1; and scores on the American Board of Pediatrics certification examination. There was substantial agreement among faculty raters as to the overall quality of the residents (agreement rate, 0.60; kappa = 0.50; P = .001). There was little correlation between faculty ratings and absolute (r = 0.19; P = .11) or relative (r = 0.20; P = .09) ranking on the NRMP match list. Individuals ranked in the top 10 of the match list had higher faculty ratings than did their peers (mean +/- SD, 3.66+/-1.22 vs. 3.0+/-1.27; P = .03), as did individuals ranked highest in their PGY-1 group (mean +/- SD, 3.88+/-1.45 vs. 3.04+/-1.24; P = .03). There was no correlation between faculty ratings and scores on part I of the NBME examination (r = 0.10; P = .49) or scores on the American Board of Pediatrics certification examination (r = 0.22; P = .11). There were weak correlations between faculty ratings and scores of faculty interviews during the intern application process (r = 0.27; P = .02) and scores on the pediatric in-service examination during PGY-1 (r = 0.28; P = .02). There was no difference in faculty ratings of residents who were elected to Alpha Omega Alpha during medical school (mean +/- SD, 3.32+/-1.21) as compared with those who were not (mean +/- SD, 3.08+/-1.34) (P = .25). There is significant agreement among faculty raters about the clinical competence of pediatric residents. Medical school grades, performance on standardized examinations, interviews during the intern application process, and match-list ranking are not predictors of clinical performance during residency.
Article
To determine the criteria used by emergency medicine (EM) residency selection committees to select their residents, to determine whether there is a consensus among residency programs, to inform programs of areas of possible inconsistency, and to better educate applicants pursuing careers in EM. A questionnaire consisting of 20 items based on the current Electronic Residency Application Service (ERAS) guidelines was mailed to the program directors of all 118 EM residencies in existence in February 1998. The program directors were instructed to rank each item on a five-point scale (5 = most important, 1 = least important) as to its importance in the selection of residents. Followup was done in the form of e-mail and facsimile. The overall response rate was 79.7%, with 94 of 118 programs responding. Items ranking as most important (4.0-5.0) in the selection process included: EM rotation grade (mean +/- SD = 4.79 +/- 0.50), interview (4.62 +/- 0.63), clinical grades (4.36 +/- 0.70), and recommendations (4.11 +/- 0.85). Moderate emphasis (3.0-4.0) was placed on: elective done at program director's institution (3.75 +/- 1.25), U.S. Medical Licensing Examination (USMLE) step II (3.34 +/- 0.93), interest expressed in program director's institution (3.30 +/- 1.19), USMLE step I (3.28 +/- 0.86), and awards/achievements (3.16 +/- 0.88). Less emphasis (<3.0) was placed on Alpha Omega Alpha Honor Society (AOA) status (3.01 +/- 1.09), medical school attended (3.00 +/- 0.85), extracurricular activities (2.99 +/- 0.87), basic science grades (2.88 +/- 0.93), publications (2.87 +/- 0.99), and personal statement (2.75 +/- 0.96). Items most agreed upon by respondents (lowest standard deviation, SD) included EM rotation grade (SD 0.50), interview (SD 0.63), and clinical grades (SD 0.70). Of the 94 respondents, 37 (39.4%) replied they had minimum requirements for USMLE step I (195.11 +/- 13.10), while 30 (31.9%) replied they had minimum requirements for USMLE step II (194.27 +/- 14.96). Open-ended responses to "other" were related to personal characteristics, career/goals, and medical school performance. The selection criteria with the highest mean values as reported by the program directors were EM rotation grade, interview, clinical grades, and recommendations. Criteria showing the most consistency (lowest SD) included EM rotation grade, interview, and clinical grades. Results are compared with those from previous multispecialty studies.
Article
To determine the reliability of scores assigned to interviews of medical students applying to an emergency medicine program. A scoring instrument was derived based on faculty and resident input, institutional and national documents, and previous application procedures. Candidates were interviewed by four pairs of interviewers. Interviewers were asked to score the candidates on five visual analog scales (VASs) with objective anchors. Each interview assessed a unique candidate characteristic. All interviewers were given explicit instructions on scoring procedures and instrument use. The data were entered into an Excel database and transferred to SPSS, and reliabilities were measured with a two-way mixed-effect Cronbach's alpha. Forty applications were received for the 2002 residency entry year. Thirty-eight application packages were complete, and 16 candidates were interviewed. Data collection was complete for all 16. The average-measure intraclass correlations for each individual interviewer across the five VASs ranged from 0.72 to 0.92 (mean, 0.85). The interrater reliabilities within the four interviews (personal characteristics, trainability, suitability for emergency medicine, and suitability for the specific training program) were low: 0.36, 0.59, 0.69, and 0.49. The overall reliability of the four interview scores was 0.83, and for the eight interviewer scores it was 0.86. The reliability of the overall interview scores was very high. The intraclass correlations for each interviewer's VAS scores were also high, but interrater correlations within interview teams were moderate and not higher than those across interview teams. This study suggests that an interview assessment instrument can be highly reliable overall and that interviewers base scores on an overall global impression.
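To make the reliability computation concrete, the sketch below derives a two-way mixed-effects intraclass correlation (Shrout-Fleiss ICC(3,1) and ICC(3,k), the latter equivalent to Cronbach's alpha) from ANOVA mean squares on a candidates-by-interviewers matrix. The simulated ratings, matrix dimensions, and function name are assumptions; this is not the study's SPSS output.

```python
import numpy as np

def icc_two_way_mixed(ratings: np.ndarray):
    """ICC(3,1) and ICC(3,k) for an n_targets x k_raters matrix (consistency, fixed raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-candidate variation
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater variation
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)
    icc_average = (ms_rows - ms_error) / ms_rows     # equals Cronbach's alpha
    return icc_single, icc_average

rng = np.random.default_rng(4)
true_quality = rng.normal(size=(16, 1))                            # 16 candidates, as in the study
ratings = 3 + true_quality + rng.normal(scale=0.7, size=(16, 8))   # 8 interviewer scores each
single, average = icc_two_way_mixed(ratings)
print(f"ICC(3,1) = {single:.2f}, ICC(3,k) = {average:.2f}")
```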
Article
Program directors of emergency medicine (EM) residencies attempt to select candidates who will subsequently perform well as residents. This study was undertaken to identify characteristics available at the time of application to an EM residency that predict future success in residency. The EM faculty at the University of California San Diego (UCSD) completed a one-time confidential assessment of EM residents on performance in residency at the time of graduation. Each faculty member compared the graduate with all residents (both EM and non-EM) with whom that faculty member had previously worked, using a five-point scale: ≥90th percentile, 70th-89th percentile, 50th-69th percentile, 30th-49th percentile, or <30th percentile. Descriptive statistics, ordinal logistic regression (OLR), classification and regression tree (CART) analysis, and multiple additive regression tree (MART) analysis were used to find predictors for each of the outcome variables. Fifty-four graduates were evaluated. The medical school attended (MSA) was the strongest predictor of overall performance in residency in all regression models. OLR showed that MSA and "distinctive factors" (being a championship athlete, medical school officer, etc.) were significant predictors and may deserve greater weighting in the selection process. The most robust MART model demonstrated that MSA, dean's letter of recommendation, and distinctive factors had the most impact on overall performance in an EM residency. Using regression modeling, it may be possible to predict future resident performance from characteristics contained in residency applications. Applicants from top-tier medical schools and those with distinctive talents were more successful in the UCSD EM residency.
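A rough sketch of the two modeling approaches mentioned above, ordinal logistic regression and a boosted-tree (MART-style) model, is shown below on simulated applicant features. The variable names msa_tier, distinctive, and deans_letter, the coding schemes, and the outcome construction are placeholders, not the UCSD dataset or the authors' specifications.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
n = 54  # same cohort size as the study, but everything below is simulated
df = pd.DataFrame({
    "msa_tier": rng.integers(1, 4, n),        # medical school attended, coded 1 (top tier) to 3
    "distinctive": rng.integers(0, 2, n),     # championship athlete, class officer, etc.
    "deans_letter": rng.normal(3.5, 0.5, n),  # numeric coding of dean's letter strength
})
# Ordinal outcome: performance band at graduation (0 = lowest, 4 = highest).
latent = (-0.8 * df["msa_tier"] + 1.0 * df["distinctive"]
          + 0.5 * df["deans_letter"] + rng.normal(0, 1, n))
df["band"] = pd.qcut(latent, 5, labels=False)

X = df[["msa_tier", "distinctive", "deans_letter"]]
olr = OrderedModel(df["band"], X, distr="logit").fit(method="bfgs", disp=False)
print(olr.summary())

# Gradient boosting as a stand-in for the MART analysis.
mart = GradientBoostingClassifier(random_state=0).fit(X, df["band"])
print("boosted-tree feature importances:",
      dict(zip(X.columns, mart.feature_importances_.round(2))))
```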
Article
Academic and other student-specific variables associated with United States Medical Licensing Examination (USMLE) Step 3 performance have not been fully defined. We analyzed Step 3 scores in association with medical school academic-performance measures, gender, residency specialty, and first postgraduate year (PGY-1) program director performance evaluations. There were significant first-order associations between Step 3 scores and each of USMLE Step 1 and Step 2 scores, third-year clerkship grade point average (GPA), Alpha Omega Alpha election, Medical Scientist Training Program graduation, broad-based specialty residency training, and PGY-1 performance evaluation score. In a multiple linear regression model accounting for over 50% of the total variance in Step 3 scores, Step 2 scores, broad-based specialty residency training, and GPA independently predicted Step 3 scores. Individualized Step 3 scores provide medical schools with additional means to externally validate their educational programs and to enhance the scope of outcomes assessments for their graduates.
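The multiple linear regression described above (Step 3 score regressed on Step 2 score, clerkship GPA, and broad-based specialty training) might be sketched as follows; the cohort size, coefficients, and data are fabricated for illustration and are not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 150  # hypothetical graduating cohort
df = pd.DataFrame({
    "step2": rng.normal(230, 18, n),
    "clerkship_gpa": rng.normal(3.4, 0.3, n),
    "broad_based_specialty": rng.integers(0, 2, n),  # 1 = internal medicine, pediatrics, etc.
})
# Simulated Step 3 scores loosely dependent on the predictors.
df["step3"] = (40 + 0.7 * df["step2"] + 8 * df["clerkship_gpa"]
               + 5 * df["broad_based_specialty"] + rng.normal(0, 10, n))

X = sm.add_constant(df[["step2", "clerkship_gpa", "broad_based_specialty"]])
fit = sm.OLS(df["step3"], X).fit()
print(fit.summary())
print(f"R-squared: {fit.rsquared:.2f}")  # the cited model explained >50% of Step 3 variance
```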
Article
This study aimed to determine predictors of otolaryngology resident success using data available at the time candidates are interviewed (e.g., medical school attended, letters of recommendation, test scores) and data that emerge during residency. We performed a retrospective cohort study of 36 residents who entered our program between 1983 and 1993. Seventy percent of Alpha Omega Alpha (AOA) members and 13% of nonmembers were in the highest tertile based on faculty ranking (p < 0.01), and candidates with an exceptional trait were more likely than those without one to rank in the highest tertile (57% versus 10%, p < 0.01). AOA membership was also related to current academic appointment (p = 0.02). Significant correlations included United States Medical Licensing Examination (USMLE) I score with year 2 in-training score (0.48, p = 0.03), and years 3 and 4 in-training scores with faculty ranking (-0.39 and -0.50, respectively, p ≤ 0.01). Having more than one peer-reviewed publication was associated with higher USMLE I scores and with being favored for selection by >50% of the interviewers (p < 0.05 for both). In our program, designed to train academic otolaryngologists, post-residency success was strongly predicted by having an exceptional trait and by AOA membership. Success during residency was predicted by the interviewer's impression of the candidate and by a USMLE I score > 570. Knowledge of these factors at the time of the resident interview could increase the likelihood of selecting the most appropriate candidates for academic otolaryngology. Resident success is a complex outcome, and other unmeasured and unexamined characteristics can provide additional insight into choosing successful residents.
Article
To explore the relationship(s) between USMLE, In-Training Exam, and American Board of Pediatrics (ABP) board-certifying exam scores within a Pediatric residency-training program. Data were abstracted from records of graduating residents from the Pediatric residency program at the University of Florida College of Medicine Jacksonville from 1999 to 2005. Seventy (70) residents were identified and their files reviewed for the following information: USMLE Step 1 and 2 scores, in-training exam results and eventual board scores as reported by the ABP. Correlation and regression analyses were performed and compared across all tests. The correlation coefficients between the three types of tests were all statistically significant. Using logistic regression, however, only USMLE Step 1 scores (compared to Step 2) had a statistically significant association with board performance. Interestingly, none of the three in-training exam scores had any additional impact on predicting board performance given one's USMLE Step 1 score. USMLE Step 1 scores greater than 220 were associated with nearly a 95 per cent passage rate on the board-certifying exam. The data suggest that performance on USMLE Step 1 is an important predictor of a resident's chances of passing the pediatric boards. This information, which is available when a resident initiates training, can be used to identify those at risk of not passing the boards. While Step 1 scores should not be used as a sole determinant in the recruiting process, individual learning plans can be developed and implemented early in training to maximize one's ability to pass the certifying exam.
Can medical school performance predict residency performance?
  • H E Stohl
  • N A Hueppchen
  • J L Bienstock
Stohl HE, Hueppchen NA, Bienstock JL. Can medical school performance predict residency performance? J Grad Med Educ 2010;2:322-6.
The "Zing Factor": how do faculty describe the best pediatrics residents?
  • G Rosenbluth
  • B O'Brien
  • A Asher
  • C Cho
Rosenbluth G, O'Brien B, Asher A, Cho C. The "Zing Factor": how do faculty describe the best pediatrics residents? J Grad Med Educ 2014;6:106-11.