Article

Candidate Evaluation Using Targeted Construct Assessment in the Multiple Mini-Interview: A Multifaceted Rasch Model Analysis


Abstract

Background: Despite the apparent benefits of the multiple mini-interview (MMI), construct-irrelevant variance continues to be a topic of study. Refining the MMI to more effectively measure candidate ability is critical to improving our ability to identify and select candidates who are equipped for success within health professions education and the workforce. Approach: Each station assessed a single construct and was rated by a single interviewer who was provided only the name of the candidate and no additional information about the candidate's background, application, or prior academic performance. All interviewers received online and in-person training in the fall prior to the MMI and on the morning of the MMI. A 3-facet multifaceted Rasch measurement analysis was completed to estimate interviewer severity, candidate ability, and MMI station difficulty, and to examine overall model functioning (e.g., rating scale performance). Results: Altogether, the Rasch measures explained 62.84% of the variance in the ratings. Differences in candidate ability explained 45.28% of the variance in the data, whereas differences in interviewer severity explained 16.09%. None of the interviewers had Infit or Outfit mean-square scores greater than 1.7, and only 2 (5.4%) had mean-square scores less than 0.5. Conclusions: The data demonstrated acceptable fit to the multifaceted Rasch measurement model. This work is the first of its kind in pharmacy and provides insight into the development of an MMI that yields useful and meaningful candidate assessment ratings for institutional decision making.
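The 3-facet model described in the abstract can be sketched in a few lines. This is a generic many-facet rating-scale Rasch formulation with hypothetical measures, not the authors' actual FACETS specification: a rating's category probabilities depend on candidate ability minus station difficulty, interviewer severity, and the rating-scale thresholds, and Infit/Outfit mean-squares compare observed ratings against those model expectations.

```python
import math

def rsm_probabilities(ability, difficulty, severity, thresholds):
    """Category probabilities under a 3-facet rating-scale Rasch model.
    The cumulative logit for category k sums
    (ability - station_difficulty - interviewer_severity - tau_j)."""
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (ability - difficulty - severity - tau))
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fit_mean_squares(scores_and_probs):
    """Infit/outfit mean-squares for one facet element (e.g. one interviewer)
    from its observed scores and model-expected category probabilities."""
    sq_resid = model_var = 0.0
    z2 = []
    for x, probs in scores_and_probs:
        e = sum(k * p for k, p in enumerate(probs))        # expected score
        v = sum((k - e) ** 2 * p for k, p in enumerate(probs))  # model variance
        sq_resid += (x - e) ** 2
        model_var += v
        z2.append((x - e) ** 2 / v)
    return sq_resid / model_var, sum(z2) / len(z2)  # (infit, outfit)

# hypothetical measures in logits: an able candidate, an easy station,
# a slightly severe interviewer, and a 5-point scale (four thresholds)
probs = rsm_probabilities(1.2, -0.3, 0.4, [-1.5, -0.5, 0.5, 1.5])
infit, outfit = fit_mean_squares([(3, probs), (4, probs), (3, probs)])
```

Values near 1.0 indicate ratings about as noisy as the model expects; the abstract's screening bounds of 0.5 and 1.7 flag interviewers who are too predictable or too erratic, respectively.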


... First, the MMI is designed to include questions or scenarios that target noncognitive or nonacademic constructs (i.e. social and behavioral professional competence) [11]. Conversely, the OSCE generally focuses on the measurement of clinical knowledge and procedural skill development. ...
... The purpose of our study was to describe the design and implementation of an end-of-year capstone MMI specifically focusing on the quality of the assessment results. We designed the C-MMI using a targeted assessment approach in which prompts are tailored to address a specific construct of interest at each station, similar to MMI examples in the literature [11]. Our analyses support the use of the MMI as a reliable assessment strategy that can effectively be incorporated within health professions curricula to target the evaluation of select professional competencies. ...
... Our analyses support the use of the MMI as a reliable assessment strategy that can effectively be incorporated within health professions curricula to target the evaluation of select professional competencies. The findings of this study are congruent with research demonstrating the validity and reliability of the MMI when used in selection contexts [11, 12, 23-25]. Our MFRM accounted for approximately 59% of the total variance in MMI scores, which is similar to previous studies using MFRM that account for 30-62% of total variance in performance data [11, 12, 23-25]. ...
Article
Full-text available
Background: The multiple mini-interview (MMI) is a common assessment strategy used in student selection. The MMI as an assessment strategy within a health professions curriculum, however, has not been previously studied. This study describes the integration of a 5-station MMI as part of an end-of-year capstone following the first year of a health professions curriculum. The goal of the capstone MMI was to assess professional competencies of students and to offer formative feedback to prepare students for their upcoming clinical practice experiences. The purpose of this study was to evaluate the psychometric properties of an MMI integrated into a health professions curriculum. Methods: Five capstone MMI stations were each designed to evaluate a single construct assessed by one rater. A principal component analysis (PCA) was used to evaluate the structure of the model and its ability to distinguish 5 separate constructs. A Multifaceted Rasch Measurement (MFRM) model assessed student performance and estimated the sources of measurement error attributed to 3 facets: student ability, rater stringency, and station difficulty. At the conclusion, students were surveyed about the capstone MMI experience. Results: The PCA confirmed that the MMI reliably assessed 5 unique constructs and that performance across stations was not strongly intercorrelated. The 3-facet MFRM analysis explained 58.79% of the total variance in student scores. Specifically, 29.98% of the variance reflected student ability, 20.25% reflected rater stringency, and 8.56% reflected station difficulty. Overall, the data demonstrated an acceptable fit to the MFRM model. The majority of students agreed the MMI allowed them to effectively demonstrate their communication (80.82%), critical thinking (78.77%), and collaboration skills (70.55%). Conclusions: The MMI can be a valuable assessment strategy of professional competence within a health professions curriculum.
These findings suggest the MMI is well-received by students and can produce reliable results. Future research should explore the impact of using the MMI as a strategy to monitor longitudinal competency development and inform feedback approaches.
... One reason why the latter are not so commonly used in large-scale assessments is that these kinds of tasks mostly require human judgment (rather than computer programs) to score answers or to assess their quality. Besides educational and language assessment, many other areas of testing require human judgment as well, such as the scoring of students within medical education programs (Tor & Steketee, 2011), the assessment of abilities using the approach of multiple mini-interviews (McLaughlin, Singer, & Cox, 2017), or large-scale placement tests (S. M. Wu & Tan, 2016). ...
Article
Automated scoring has been developed and has the potential to provide solutions to some of the obvious shortcomings in human scoring. In this study, we investigated whether SpeechRaterSM and a series of combined SpeechRaterSM and human scores were comparable to human scores for an English language assessment speaking test. We found that there were some systematic patterns in the five tested scenarios based on item response theory.
... No information was returned by nine authors regarding domains and domain source; therefore these articles were also excluded. Ultimately, 80 articles were included in this review reporting data from 65 individual studies (Ahmed et al., 2014; Alaki et al., 2016; Alweis et al., 2015; Barbour and Sandy, 2014; Brownell et al., 2007; Callwood et al., 2014; Cameron and MacKeigan, 2012; Cameron et al., 2017; Campagna-Vaillancourt et al., 2014; Corelli et al., 2015; Cottingham et al., 2014; Cowart et al., 2016; Cox et al., 2015; Daniel-Filho et al., 2017; Dodson et al., 2009; Dore et al., 2010; Dowell et al., 2012; El Says et al., 2013; Eva et al., 2004a, 2004b, 2009, 2012; Eva and Macala, 2014; Finlayson and Townson, 2011; Foley and Hijazi, 2013, 2015; Fraga et al., 2013; Gale et al., 2016; Grice, 2014; Griffin and Wilson, 2012; Harris and Owen, 2007; Hecker et al., 2009; Hecker and Violato, 2011; Hissbach et al., 2014; Hofmeister et al., 2008, 2009; Hopson et al., 2014; Humphrey et al., 2008; Husbands and Dowell, 2013; Jerant et al., 2012, 2015, 2017; Jones and Forister, 2011; Kelly et al., 2014; Kim et al., 2017; Kulasegaram et al., 2010; Kumar et al., 2009; Leduc et al., 2017; Lee et al., 2016; Lemay et al., 2007; Makransky et al., 2017; McAndrew and Ellis, 2012; McBurney and Carty, 2009; McLaughlin et al., 2017; O'Brien et al., 2011; Ogunyemi et al., 2016; Oyler et al., 2014; Oliver et al., 2014; Pau et al., 2016; Perkins et al., 2013; Razack et al., 2009; Reiter et al., 2007; Roberts et al., 2008, 2009, 2014; Ross et al., 2017; Sebok et al., 2014; Shinawi et al., 2017; Singer et al., 2016; Soares et al., 2015; Taylor et al., 2015; Tavares and Mausz, 2013; Terregino et al., 2015; Thomas et al., 2015; Till et al., 2013; Traynor et al., 2017; Uijtdehaage et al., 2011; Yamada et al., 2017; Yoshimura et al., 2015). ...
Article
Objectives: To examine the personal domains multiple mini interviews (MMIs) are being designed to assess, explore how they were determined, and contextualise such domains in current and future healthcare student selection processes. Design: A systematic review of empirical research reporting on MMI model design was conducted from database inception to November 2017. Data sources: Twelve electronic bibliographic databases. Review methods: Evidence was extracted from original studies and integrated in a narrative synthesis guided by the PRISMA statement for reporting systematic reviews. Personal domains were clustered into themes using a modified Delphi technique. Results: A total of 584 articles were screened. 65 unique studies (80 articles) matched our inclusion criteria, of which seven were conducted within nursing/midwifery faculties. Six in 10 studies featured applicants to medical school. Across selection processes, we identified 32 personal domains assessed by MMIs, the most frequent being: communication skills (84%), teamwork/collaboration (70%), and ethical/moral judgement (65%). Domains capturing ability to cope with stressful situations (14%), make decisions (14%), and resolve conflict in the workplace (13%) featured in fewer than ten studies overall. Intra- and inter-disciplinary inconsistencies in domain profiles were noted, as well as differences by entry level. MMIs deployed in nursing and midwifery assessed compassion and decision-making more frequently than in all other disciplines. Own programme philosophy and professional body guidance were most frequently cited (~50%) as sources for personal domains; a blueprinting process was reported in only 8% of studies. Conclusions: Nursing, midwifery and allied healthcare professionals should develop their theoretical frameworks for MMIs to ensure they are evidence-based and fit-for-purpose.
We suggest a re-evaluation of domain priorities to ensure that students who are selected, not only have the capacity to offer the highest standards of care provision, but are able to maintain these standards when facing clinical practice and organisational pressures.
... While OSCEs are clearly useful for assessing patient-care skills, such as medication education, they generally target multiple constructs and processes in a single station, making it difficult to specifically evaluate single constructs such as critical thinking, adaptability, and pharmacy appreciation. 23 Additionally, it would be a costly endeavor to design separate OSCE experiences to assess all the constructs assessed in the c-MMI. 24-26 Another comparative advantage of the c-MMI is that it does not involve hiring standardized patient actors, the expenditure of faculty time developing cases, or the scheduling and renting of specialized OSCE rooms. ...
Article
Full-text available
Objective. To describe the development and implementation of an innovative, comprehensive, multi-day module focused on assessing and providing feedback on student cognitive and interpersonal skill development and practice readiness after the first year (PY1) of a Doctor of Pharmacy (PharmD) curriculum. Methods. A multi-day capstone assessment was developed to evaluate first-year students' knowledge of course content, ability to find and apply information, and interpersonal skills, including teamwork and adaptability. The PY1 Capstone consisted of four parts. Knowledge was assessed using 130 multiple-choice items on first-year course content and 50 fill-in-the-blank items on Top 200 brand and generic drug names. The ability to find and apply information was assessed using a 45-question open-book test. Interpersonal skills were assessed using a specially designed multiple mini-interview (MMI). The final part of the assessment was a debriefing session that provided rapid-cycle feedback on capstone performance and a bridge between students' recently completed first-year coursework and an upcoming 2-month experiential immersion. Results. The average scores on the closed-book and open-book assessments were 75% and 68%, respectively. Most students displayed satisfactory interpersonal skills based on the MMI. Students viewed the assessment positively based on post-assessment survey responses (>75%). Most students (98%) reported not studying for the assessment, indicating that the results should reflect students' retention of knowledge and skills. Conclusion. The capstone assesses students on knowledge and skills and provides students with feedback on areas to focus on during their early immersion. Continued work is needed to ensure the process is transparent and cost-effective.
Article
Full-text available
The UNC Eshelman School of Pharmacy is transforming its doctor of pharmacy program to emphasize active engagement of students in the classroom, foster scientific inquiry and innovation, and immerse students in patient care early in their education. The admissions process is also being reengineered.
Article
Full-text available
Effective health care workforce development requires the adoption of team-based care delivery models, in which participating professionals practice at the full extent of their training in pursuit of care quality and cost goals. The proliferation of such new models as medical homes, accountable care organizations, and community-based care teams is creating new opportunities for pharmacists to assume roles and responsibilities commensurate with their capabilities. Some challenges to including pharmacists in team-based care delivery models, including the lack of payment mechanisms that explicitly provide for pharmacist services, have yet to be fully addressed by policy makers and others. Nevertheless, evolving models and strategies reveal a variety of ways to draw on pharmacists' expertise in such critical areas as medication management for high-risk patients. As Affordable Care Act provisions are implemented, health care workforce projections need to consider the growing number of pharmacists expected to play an increasing role in delivering primary care services.
Article
Full-text available
Background: The Multiple Mini-Interview (MMI) has been used increasingly for selection of students to health professions programmes. Objectives: This paper reports on the evidence base for the feasibility, acceptability, reliability and validity of the MMI. Data sources: CINAHL and MEDLINE. Study eligibility criteria: All studies testing the MMI on applicants to health professions training. Study appraisal and synthesis methods: Each paper was appraised by two reviewers. Narrative summary findings on feasibility, acceptability, reliability and validity are presented. Results: Of the 64 citations identified, 30 were selected for review. The modal MMI consisted of 10 stations, each lasting eight minutes and assessed by one interviewer. The MMI was feasible, i.e. it did not require more examiners, did not cost more, and interviews were completed over a short period of time. It was acceptable, i.e. fair, transparent, free from gender, cultural and socio-economic bias, and it did not favour applicants with previous coaching. Its reliability was reported to be moderate to high, with Cronbach's alpha = 0.69-0.98 and G = 0.55-0.72. MMI scores did not correlate with scores from traditional admission tools, were not associated with pre-entry academic qualifications, were the best predictor of OSCE performance, and were statistically predictive of subsequent performance in medical council examinations. Conclusions: The MMI is reliable, acceptable and feasible. The evidence base for its validity against future medical council exams is growing with reports from longitudinal investigations. However, further research is needed on its acceptability in different cultural contexts and its validity against future clinical behaviours.
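The internal-consistency figures quoted above (Cronbach's alpha = 0.69-0.98) follow a standard formula over station-level scores. A minimal sketch in Python, using invented ratings purely for illustration:

```python
def cronbach_alpha(station_scores):
    """Cronbach's alpha across MMI stations.
    station_scores: one list of candidate scores per station,
    with candidates in the same order in every list."""
    k = len(station_scores)                      # number of stations
    n = len(station_scores[0])                   # number of candidates

    def var(xs):                                 # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_variance = sum(var(s) for s in station_scores)
    totals = [sum(s[i] for s in station_scores) for i in range(n)]
    return k / (k - 1) * (1 - item_variance / var(totals))

# hypothetical ratings: three stations, six candidates each
stations = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 2],
    [3, 3, 4, 2, 5, 3],
]
alpha = cronbach_alpha(stations)
```

As a sanity check, stations whose scores are perfectly correlated yield alpha = 1.0, and uncorrelated stations pull alpha toward 0.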
Article
Full-text available
See www.winsteps.com/manuals.htm
Article
Full-text available
Purpose: The authors report multiple mini-interview (MMI) selection process data from the University of Dundee Medical School, where staff, students, and simulated patients served as examiners, and investigate how effective this process was in separating candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores. Method: The 452 candidates assessed in 2009 rotated through the same 10-station MMI that measured six noncognitive attributes. Each candidate was rated by one examiner in each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The computer program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty. Results: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69% of the total variance in the ratings (candidates 16.01%, examiners 11.32%, and stations 4.36%). Students rated more severely than staff and also had more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcomes for 9.6% of the candidates. Conclusions: The analyses highlighted the fact that quality control monitoring is essential to ensure fairness when ranking candidates according to scores obtained in the MMI. The results can be used to identify examiners needing further training, or who should not be included again, as well as stations needing review. "Fair average" scores should be used for ranking the candidates.
Article
Full-text available
The multiple mini-interview (MMI) is a new interview process that Dundee Medical School has recently adopted to assess entrants into its undergraduate medicine course. This involves an 'objective structured clinical examination'-like rotational approach in which candidates are assessed on specific attributes at a number of stations. Aims: To present methodological, questionnaire and psychometric data on the transitional process from traditional interviews to MMIs over a 3-year period and to discuss the implications for those considering making this transition. Methods: To facilitate the transition, a four-station MMI was piloted in 2007. Its success encouraged consideration of desirable attributes, which were used to develop a full 10-station process implemented in 2009, with assessors recruited from staff, students and simulated patients. A questionnaire was administered to all assessors and candidates who participated in the 2009 MMIs. Cronbach's alpha, Pearson's r and analyses of variance were used to determine the MMI's psychometric properties. Multi-faceted Rasch modelling (MFRM) was used to control for assessor leniency/stringency, and the impact of using 'fair scores' was determined. Analysis was conducted using SPSS 17 and FACETS 3.65.0. Results: The questionnaire confirmed that the process was acceptable to all parties. Cronbach's alpha reliability was satisfactory and consistent. Graduates/mature candidates outperformed U.K. school-leavers and overseas candidates. Using MFRM fair scores would change the selection outcome of 6.2% and 9.6% of candidates in 2009 and 2010, respectively. Students were less lenient, made more use of the full range of the rating scales and were just as reliable as staff. Conclusions: The strategy of generating institutional support through staged introduction proved effective. The MMI in Dundee was shown to be feasible and displayed sound psychometric properties. Student assessors appeared to perform at least as well as staff.
Despite a considerable intellectual and logistical challenge, MMIs were successfully introduced and deemed worthwhile.
Article
Full-text available
This is a defining moment for health and health care in the United States, and medical schools and teaching hospitals have a critical role to play. The combined forces of health care reform, demographic shifts, continued economic woes, and the projected worsening of physician shortages portend major challenges for the health care enterprise in the near future. In this commentary, the author employs a diversity framework implemented by IBM and argues that this framework should be adapted to an academic medicine setting to meet the challenges to the health care enterprise. Using IBM's diversity framework, the author explores three distinct phases in the evolution of diversity thinking within the academic medicine community. The first phase included isolated efforts aimed at removing social and legal barriers to access and equality, with institutional excellence and diversity as competing ends. The second phase kept diversity on the periphery but raised awareness about how increasing diversity benefits everyone, allowing excellence and diversity to exist as parallel ends. In the third phase, which is emerging today and reflects a growing understanding of diversity's broader relevance to institutions and systems, diversity and inclusion are integrated into the core workings of the institution and framed as integral for achieving excellence. The Association of American Medical Colleges, a leading voice and advocate for increased student and faculty diversity, is set to play a more active role in building the capacity of the nation's medical schools and teaching hospitals to move diversity from a periphery to a core strategy.
Article
Full-text available
The number of Multiple Mini Interview (MMI) stations and the type and number of interviewers required for an acceptable level of reliability for veterinary admissions requires investigation. The goal is to investigate the reliability of the 2009 MMI admission process at the University of Calgary. Each applicant (n = 103; female = 80.6%; M age = 23.05 years, SD = 3.96) participated in a 7-station MMI. Applicants were rated independently by 2 interviewers, a faculty member, and a community veterinarian, within each station (total interviewers/applicant N = 14). Interviewers scored applicants on 3 items, each on a 5-point anchored scale. Generalizability analysis resulted in a reliability coefficient of G = 0.79. A Decision study (D-study) indicated that 10 stations with 1 interviewer would produce a G = 0.79 and 8 stations with 2 interviewers would produce a G = 0.81; however, these have different resource requirements. A two-way analysis of variance showed that there was a nonsignificant main effect of interviewer type (between faculty member and community veterinarian) on interview scores, F(1, 1428) = 3.18, p = .075; a significant main effect of station on interview scores, F(6, 1428) = 4.34, p < .001; and a nonsignificant interaction effect between interviewer-type and station on interview scores, F(6, 1428) = 0.74, p = .62. Overall reliability was adequate for the MMI. Results from the D-study suggest that the current format with 7 stations provides adequate reliability given that there are enough interviewers; to achieve the same G-coefficient 1 interviewer per station with 10 stations would suffice and reduce the resource requirements. Community veterinarians and faculty members demonstrated an adequate level of agreement in their assessments of applicants.
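The D-study projections in this abstract rest on the generalizability-theory identity that error variance shrinks as more stations and raters are averaged per applicant. A toy sketch of that trade-off; the variance components here are invented for illustration, not the Calgary estimates:

```python
def projected_g(var_person, var_interaction, var_residual,
                n_stations, n_raters):
    """Projected relative G-coefficient for an applicants x stations x raters
    design: true (person) variance over person variance plus error variance
    averaged over the observations each applicant receives."""
    error = (var_interaction / n_stations
             + var_residual / (n_stations * n_raters))
    return var_person / (var_person + error)

# invented variance components, illustrating the stations-vs-raters trade-off
g_10x1 = projected_g(1.0, 0.4, 2.0, n_stations=10, n_raters=1)
g_8x2  = projected_g(1.0, 0.4, 2.0, n_stations=8, n_raters=2)
```

With these made-up components, 8 stations with 2 raters edges out 10 stations with 1 rater, mirroring the pattern reported above (G = 0.81 vs. G = 0.79) while implying different staffing costs.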
Article
Full-text available
There are significant levels of variation in candidate multiple mini-interview (MMI) scores caused by interviewer-related factors. Multi-facet Rasch modelling (MFRM) has the capability to both identify these sources of error and partially adjust for them within a measurement model that may be fairer to the candidate. Using Facets software, a variance components analysis estimated sources of measurement error that were comparable with those produced by generalisability theory. Fair average scores for the effects of the stringency/leniency of interviewers and question difficulty were calculated and adjusted rankings of candidates were modelled. The decisions of 207 interviewers had an acceptable fit to the MFRM model. For one candidate assessed by one interviewer on one MMI question, 19.1% of the variance reflected candidate ability, 8.9% reflected interviewer stringency/leniency, 5.1% reflected interviewer question-specific stringency/leniency and 2.6% reflected question difficulty. If adjustments were made to candidates' raw scores for interviewer stringency/leniency and question difficulty, 11.5% of candidates would see a significant change in their ranking for selection into the programme. Greater interviewer leniency was associated with the number of candidates interviewed. Interviewers differ in their degree of stringency/leniency and this appears to be a stable characteristic. The MFRM provides a recommendable way of giving a candidate score which adjusts for the stringency/leniency of whichever interviewers the candidate sees and the difficulty of the questions the candidate is asked.
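The fair-score adjustment described above can be illustrated with a toy example: on the logit scale, a candidate's observed measure is credited back the stringency of the interviewers seen and the difficulty of the questions asked. All numbers here are hypothetical:

```python
def fair_measure(observed, interviewer_stringency, question_difficulty):
    """Adjust a candidate's observed measure for the luck of the draw:
    a candidate who faced a stringent interviewer (positive stringency)
    or a hard question gets that handicap added back (all in logits)."""
    return observed + interviewer_stringency + question_difficulty

# candidate A drew a stringent interviewer (+0.6) and a hard question (+0.3);
# candidate B drew a lenient interviewer (-0.4) and an average question (0.0)
raw_a, raw_b = 0.9, 1.1
fair_a = fair_measure(raw_a, 0.6, 0.3)   # 0.9 + 0.6 + 0.3 = 1.8
fair_b = fair_measure(raw_b, -0.4, 0.0)  # 1.1 - 0.4 + 0.0 = 0.7
# the raw ranking favoured B; the fair ranking favours A
```

Reversals like this one are the mechanism behind the 11.5% of candidates whose selection ranking would change under adjusted scores.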
Article
Full-text available
Multiple mini-interviews (MMIs) are increasingly used in high-stakes medical school selection. Yet there is little published research about participants' experiences and understandings of the process. We report the findings from an international qualitative study on candidate and interviewer experiences of the MMI for entry into a graduate-entry medical school. Qualitative data from six interviewer focus groups and 442 candidate and 75 interviewer surveys were analysed using framework analysis. Multiple researchers (n = 3) analysed a proportion of the data and developed a thematic framework capturing content-related (i.e. what was said) themes that emerged from the data. This thematic framework was then used to code the complete dataset. Several key themes were identified, including participants' perspectives on having: (i) a one-to-one interview; (ii) multiple assessment opportunities; (iii) a standardised, scenario-based interview; (iv) a mini-interview, and on (v) the attributes currently measured by the MMI, and (vi) other attributes that should be assessed. We gained a deeper understanding of participants' experiences of a high-stakes, decision-making process for selection into a graduate-entry medical school. We discuss our findings in the light of the existing literature and make recommendations to address the issue of differing participant expectations and understandings of the MMI, and to improve the credibility and acceptability of the process.
Article
Full-text available
To assess the practice effects from coaching on the Undergraduate Medicine and Health Sciences Admission Test (UMAT), and the effect of both coaching and repeat testing on the Multiple Mini Interview (MMI). Observational study based on a self-report survey of a cohort of 287 applicants for entry in 2008 to the new School of Medicine at the University of Western Sydney. Participants were asked about whether they had attended UMAT coaching or previous medical school interviews, and about their perceptions of the relative value of UMAT coaching, attending other interviews or having a "practice run" with an MMI question. UMAT and MMI results for participants were compared with respect to earlier attempts at the test, the degree of similarity between questions from one year to the next, and prior coaching. Effect of coaching on UMAT and MMI scores; effect of repeat testing on MMI scores; candidates' perceptions of the usefulness of coaching, previous interview experience and a practice run on the MMI. 51.4% of interviewees had attended coaching. Coached candidates had slightly higher UMAT scores on one of three sections of the test (non-verbal reasoning), but this difference was not significant after controlling for Universities Admission Index, sex and age. Coaching was ineffective in improving MMI scores, with coached candidates actually having a significantly lower score on one of the nine interview tasks ("stations"). Candidates who repeated the MMI in 2007 (having been unsuccessful at their 2006 entry attempt) did not improve their score on stations that had new content, but showed a small increase in scores on stations that were either the same as or similar to previous stations. A substantial number of Australian medical school applicants attend coaching before undertaking entry selection tests, but our study shows that coaching does not assist and may even hinder their performance on an MMI. 
Nevertheless, as practice on similar MMI tasks does improve scores, tasks should be rotated each year. Further research is required on the predictive validity of the UMAT, given that coaching appeared to have a small positive effect on the non-verbal reasoning component of the test.
Article
Full-text available
Background: A potential problem of clinical examinations is known as the hawk-dove problem: some examiners are more stringent and require a higher performance than other examiners, who are more lenient. Although the problem has been known qualitatively for at least a century, we know of no previous statistical estimation of the size of the effect in a large-scale, high-stakes examination. Here we use FACETS to carry out a multi-facet Rasch modelling of the paired judgements made by examiners in the clinical examination (PACES) of MRCP(UK), where identical candidates were assessed in identical situations, allowing calculation of examiner stringency. Methods: Data were analysed from the first nine diets of PACES, which were taken between June 2001 and March 2004 by 10,145 candidates. Each candidate was assessed by two examiners on each of seven separate tasks, with the candidates assessed by a total of 1,259 examiners, resulting in a total of 142,030 marks. Examiner demographics were described in terms of age, sex, ethnicity, and total number of candidates examined. Results: FACETS suggested that about 87% of main effect variance was due to candidate differences, 1% due to station differences, and 12% due to differences between examiners in leniency-stringency. Multiple regression suggested that greater examiner stringency was associated with greater examiner experience and being from an ethnic minority. Male and female examiners showed no overall difference in stringency. Examination scores were adjusted for examiner stringency and it was shown that for the present pass mark, the outcome for 95.9% of candidates would be unchanged using adjusted marks, whereas 2.6% of candidates would have passed, even though they had failed on the basis of raw marks, and 1.5% of candidates would have failed, despite passing on the basis of raw marks. Conclusion: Examiners do differ in their leniency or stringency, and the effect can be estimated using Rasch modelling.
The reasons for differences are not clear, but there are some demographic correlates, and the effects appear to be reliable across time. Account can be taken of differences, either by adjusting marks or, perhaps more effectively and more justifiably, by pairing high and low stringency examiners, so that raw marks can be used in the determination of pass and fail.
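The pass/fail shifts reported above (2.6% newly passing, 1.5% newly failing) arise only for candidates near the pass mark, where crediting back an examiner's stringency crosses the cut score. A schematic sketch with invented marks:

```python
def outcomes(raw_mark, examiner_stringency, pass_mark):
    """Return (passes on raw marks?, passes on adjusted marks?).
    A positive stringency means the examiner marks harshly, so that
    amount is added back before the pass mark is applied."""
    adjusted = raw_mark + examiner_stringency
    return raw_mark >= pass_mark, adjusted >= pass_mark

# invented marks: a borderline candidate judged by a stringent examiner
raw_pass, adj_pass = outcomes(raw_mark=58, examiner_stringency=3, pass_mark=60)
# fails on raw marks (58 < 60) but passes once stringency is credited (61 >= 60)
```

Candidates far from the pass mark keep the same outcome under both raw and adjusted marks, which is why the result changed for only about 4% of PACES candidates.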
Article
Objective: To describe the development, implementation, and evaluation of the multiple mini-interview (MMI) within a doctor of pharmacy (PharmD) admissions model. Methods: Demographic data and academic indicators were collected for all candidates who participated in Candidates' Day (n=253), along with the score for each MMI station criteria (7 stations). A survey was administered to all candidates who completed the MMI, and another survey was administered to all interviewers to examine perceptions of the MMI. Results: Analyses suggest that MMI stations assessed different attributes as designed, with Cronbach alpha for each station ranging from 0.90 to 0.95. All correlations between MMI station scores and academic indicators were negligible. No significant differences in average station scores were found based on age, gender, or race. Conclusion: This study provides additional support for the use of the MMI as an admissions tool in pharmacy education.
Article
The Multiple Mini-Interview (MMI) uses multiple, short-structured contacts to evaluate communication and professionalism. It predicts medical school success better than the traditional interview and application. Its acceptability and utility in emergency medicine (EM) residency selection are unknown. We theorized that participants would judge the MMI equal to a traditional unstructured interview and that it would provide new information for candidate assessment. Seventy-one interns from 3 programs in the first month of training completed an eight-station MMI focused on EM topics. Pre- and post-surveys assessed reactions. MMI scores were compared with application data. EM grades correlated with MMI performance (F[1, 66] = 4.18; p < 0.05), with honors students having higher scores. Higher third-year clerkship grades were associated with higher MMI performance, although this was not statistically significant. MMI performance did not correlate with match desirability and did not predict most other components of an application. There was a correlation between lower MMI scores and lower global ranking on the Standardized Letter of Recommendation. Participants preferred a traditional interview (mean difference = 1.36; p < 0.01). A mixed format (traditional interview and MMI) was preferred over an MMI alone (mean difference = 1.1; p < 0.01). MMI performance did not significantly correlate with preference for the MMI. Although the MMI alone was viewed less favorably than a traditional interview, participants were receptive to a mixed-methods interview. The MMI does correlate with performance on the EM clerkship and therefore can measure important abilities for EM success. Future work will determine whether MMI performance predicts residency performance.
Book
Written in an accessible style, this book facilitates a deep understanding of the Rasch model. Authors Bond and Fox review the crucial properties of the Rasch model and demonstrate its use with a wide range of examples including the measurement of educational achievement, human development, attitudes, and medical rehabilitation. A glossary and numerous illustrations further aid the reader's understanding. The authors demonstrate how to apply Rasch analysis and prepare readers to perform their own analyses and interpret the results. Updated throughout, highlights of the Second Edition include: a new CD that features an introductory version of the latest Winsteps program and the data files for the book's examples, preprogrammed to run using Winsteps; a new chapter on invariance that highlights the parallels between physical and human science measurement; a new appendix on analyzing data to help those new to Rasch analysis; more explanation of the key concepts and item characteristic curves; a new empirical example with data sets that demonstrates the many facets of the Rasch model, along with other new examples; and an increased focus on issues related to unidimensionality, multidimensionality, and the Rasch factor analysis of residuals. Applying the Rasch Model is intended for researchers and practitioners in psychology, especially developmental psychologists, education, health care, medical rehabilitation, business, government, and those interested in measuring attitude, ability, and/or performance. The book is an excellent text for use in courses on advanced research methods, measurement, or quantitative analysis. Significant knowledge of statistics is not required. © 2007 by Lawrence Erlbaum Associates, Inc. All rights reserved.
Article
In 1910, in his recommendations for reforming medical education, Abraham Flexner responded to what he deemed to be the "public interest." Now, 100 years later, to respond to the current needs of society, the education of physicians must once again change. In addition to understanding the biological basis of health and disease, and mastering technical skills for treating individual patients, physicians will need to learn to navigate in and continually improve complex systems in order to improve the health of the patients and communities they serve. Physicians should not be mere participants in, much less victims of, such systems. Instead, they ought to be prepared to help lead those systems toward ever-higher-quality care for all. A number of innovative programs already exist for students and residents to help integrate improvement skills into professional preparation, and that goal is enjoying increasing support from major professional organizations and accrediting bodies. These experiences have shown that medical schools and residency programs will need to both teach the scientific foundations of system performance and provide opportunities for trainees to participate in team-based improvement of the real-world health systems in which they work. This significant curricular change, to meet the social need of the 21st century, will require educators and learners to embrace new core values, in addition to those held by the profession for generations. These include patient-centeredness, transparency, and stewardship of limited societal resources for health care.
Article
The Carnegie Foundation for the Advancement of Teaching, which in 1910 helped stimulate the transformation of North American medical education with the publication of the Flexner Report, has a venerated place in the history of American medical education. Within a decade following Flexner's report, a strong scientifically oriented and rigorous form of medical education became well established; its structures and processes have changed relatively little since. However, the forces of change are again challenging medical education, and new calls for reform are emerging. In 2010, the Carnegie Foundation will issue another report, Educating Physicians: A Call for Reform of Medical School and Residency, that calls for (1) standardizing learning outcomes and individualizing the learning process, (2) promoting multiple forms of integration, (3) incorporating habits of inquiry and improvement, and (4) focusing on the progressive formation of the physician's professional identity. The authors, who wrote the 2010 Carnegie report, trace the seeds of these themes in Flexner's work and describe their own conceptions of them, addressing the prior and current challenges to medical education as well as recommendations for achieving excellence. The authors hope that the new report will generate the same excitement about educational innovation and reform of undergraduate and graduate medical education as the Flexner Report did a century ago.
Article
In this paper we report on further tests of the validity of the multiple mini-interview (MMI) selection process, comparing MMI scores with those achieved on a national high-stakes clinical skills examination. We also continue to explore the stability of candidate performance and the extent to which so-called 'cognitive' and 'non-cognitive' qualities should be deemed independent of one another. To examine predictive validity, MMI data were matched with licensing examination data for both undergraduate (n = 34) and postgraduate (n = 22) samples of participants. To assess the stability of candidate performance, reliability coefficients were generated for eight distinct samples. Finally, correlations were calculated between 'cognitive' and 'non-cognitive' measures of ability collected in the admissions procedure, on graduation from medical school and 18 months into postgraduate training. The median reliability of eight administrations of the MMI in various cohorts was 0.73 when 12 10-minute stations were used with one examiner per station. The correlation between performance on the MMI and number of stations passed on an objective structured clinical examination-based licensing examination was r = 0.43 (P < 0.05) in a postgraduate sample and r = 0.35 (P < 0.05) in an undergraduate sample of subjects who sat the MMI 5 years prior to sitting the licensing examination. The correlation between 'cognitive' and 'non-cognitive' assessment instruments increased with time in training (i.e. as the focus of the assessments became more tailored to the clinical practice of medicine). Further evidence for the validity of the MMI approach to making admissions decisions has been provided. More generally, the reported findings cast further doubt on the extent to which performance can be captured with trait-based models of ability. 
Finally, although a complementary predictive relationship has consistently been observed between grade point average and MMI results, the extent to which cognitive and non-cognitive qualities are distinct appears to depend on the scope of practice within which the two classes of qualities are assessed.
Article
The multiple mini-interview (MMI) was initially designed to test non-cognitive characteristics related to professionalism in entry-level students. However, it may be testing cognitive reasoning skills. Candidates to medical and dental schools come from diverse backgrounds and it is important for the validity and fairness of the MMI that these background factors do not impact on their scores. A suite of advanced psychometric techniques drawn from item response theory (IRT) was used to validate an MMI question bank in order to establish the conceptual equivalence of the questions. Bias against candidate subgroups of equal ability was investigated using differential item functioning (DIF) analysis. All 39 questions had a good fit to the IRT model. Of the 195 checklist items, none were found to have significant DIF after visual inspection of expected score curves, consideration of the number of applicants per category, and evaluation of the magnitude of the DIF parameter estimates. The question bank contains items that have been studied carefully in terms of model fit and DIF. Questions appear to measure a cognitive unidimensional construct, 'entry-level reasoning skills in professionalism', as suggested by goodness-of-fit statistics. The lack of items exhibiting DIF is encouraging in a contemporary high-stakes admission setting where candidates of diverse personal, cultural and academic backgrounds are assessed by common means. This IRT approach has potential to provide assessment designers with a quality control procedure that extends to the level of checklist items.
Article
Although the interview is widely used in the selection of applicants for admission to U.S. medical schools, little is known about current interview practices. The authors formulated a 46-item questionnaire concerning the interview process for medical school applicants, then in 1989 sent it to admission officials at all 127 LCME-accredited schools in the United States. The questionnaire concerned the interview's status as a predictor; interviewers and interview structure; interviewer training; and the utility of interview data. Seventy-two percent of those sent the questionnaire completed and returned it. The responding admission officials indicated that the interview had two major purposes at their schools: as a means of assessing candidates' noncognitive skills and as a public relations tool. Most schools' interview processes were loosely to moderately structured, and interviewers received minimal training. It is concluded that the interview's role is primarily subjective and that it has a definite but imprecise influence on admission decisions.
Article
Significant demographic, legal, and educational developments during the last ten years have led medical schools to review critically their selection procedures. A critical component of this review is the selection interview, since it is an integral part of most admission processes; however, some question its value. Interviews serve four purposes: information gathering, decision making, verification of application data, and recruitment. The first and last of these merit special attention. The interview enables an admission committee to gather information about a candidate that would be difficult or impossible to obtain by any other means yet is readily evaluated in an interview. Given the recent decline in numbers of applicants to and interest in medical school, many schools are paying closer attention to the interview as a powerful recruiting tool. Interviews can be unstructured, semistructured, or structured. Structuring involves analyzing what makes a medical student successful, standardizing the questions for all applicants, providing sample answers for evaluating responses, and using panel interviews (several interviewers simultaneously with one applicant). Reliability and validity of results increase with the degree of structuring. Studies of interviewers show that they are often biased in terms of rating tendencies (for instance, leniency or severity) and in terms of an applicant's sex, race, appearance, similarity to the interviewer, and contrast with other applicants. Training interviewers may reduce such bias. Admission committees should weigh the purposes of interviewing differently for various types of candidates, develop structured or semistructured interviews focusing on nonacademic criteria, and train the interviewers.
Article
Although health sciences programmes continue to value non-cognitive variables such as interpersonal skills and professionalism, it is not clear that current admissions tools like the personal interview are capable of assessing ability in these domains. Hypothesising that many of the problems with the personal interview might be explained, at least in part, by it being yet another measurement tool that is plagued by context specificity, we have attempted to develop a multiple sample approach to the personal interview. A group of 117 applicants to the undergraduate MD programme at McMaster University participated in a multiple mini-interview (MMI), consisting of 10 short objective structured clinical examination (OSCE)-style stations, in which they were presented with scenarios that required them to discuss a health-related issue (e.g. the use of placebos) with an interviewer, interact with a standardised confederate while an examiner observed the interpersonal skills displayed, or answer traditional interview questions. The reliability of the MMI was observed to be 0.65. Furthermore, the hypothesis that context specificity might reduce the validity of traditional interviews was supported by the finding that the variance component attributable to candidate-station interaction was greater than that attributable to candidate. Both applicants and examiners were positive about the experience and the potential for this protocol. The principles used in developing this new admissions instrument, the flexibility inherent in the multiple mini-interview, and its feasibility and cost-effectiveness are discussed.
Article
Admission to health-related professions is very competitive and selecting candidates with the best prospects for success is critical. A variety of measures are used to assess candidates to predict success. The purpose of this research was to assess the effectiveness of using selection interviews for admissions. Meta-analysis was applied to a sample of 20 studies examined in a comprehensive review article on the use of interviews in healthcare academic disciplines. Nineteen of these studies examined the relationship between performance in an interview situation and academic performance, while 10 examined the relationship between performance in an interview situation and clinical performance. A separate meta-analysis was conducted for each category of performance measure. The mean sample-size-weighted effect size for studies examining the predictive power of interviews for academic success was 0.06 (95% confidence interval 0.03-0.08), indicating a very small effect. The sample of studies was homogeneous using a fixed-effect model. The sample of studies for predicting clinical success had a mean effect size of 0.17 (95% confidence interval 0.11-0.22), indicating modest positive predictive power. Using a random-effects model, this sample of studies was also homogeneous. Future research should investigate a larger sample of primary studies.
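Pooled effect sizes with confidence intervals like those quoted above are what a fixed-effect meta-analysis of correlations produces. The sketch below uses the standard Fisher-z transformation with inverse-variance weights; the input studies are hypothetical, not the 20 studies analysed in the paper.

```python
import math

def fixed_effect_meta(studies):
    """studies: list of (r, n) pairs. Pools correlations on the Fisher-z
    scale with inverse-variance weights w = n - 3, then back-transforms
    the pooled value and its 95% confidence interval."""
    def z(r):
        return 0.5 * math.log((1 + r) / (1 - r))

    weights = [n - 3 for _, n in studies]
    z_mean = sum(w * z(r) for w, (r, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    lo, hi = z_mean - 1.96 * se, z_mean + 1.96 * se
    return math.tanh(z_mean), (math.tanh(lo), math.tanh(hi))

# Hypothetical: two studies reporting the same small correlation.
pooled, ci = fixed_effect_meta([(0.17, 103), (0.17, 53)])
```

With identical input correlations the pooled estimate equals the common value, and the interval narrows as the total sample size grows, which is why large-sample meta-analyses can detect the very small effects reported here.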
Article
The Multiple Mini-Interview (MMI) has previously been shown to have a positive correlation with early medical school performance. Data have matured to allow comparison with clerkship evaluations and national licensing examinations. Of 117 applicants to the Michael G DeGroote School of Medicine at McMaster University who had scores on the MMI, traditional non-cognitive measures, and undergraduate grade point average (uGPA), 45 were admitted and followed through clerkship evaluations and Part I of the Medical Council of Canada Qualifying Examination (MCCQE). Clerkship evaluations consisted of clerkship summary ratings, a clerkship objective structured clinical examination (OSCE), and progress test score (a 180-item, multiple-choice test). The MCCQE includes subsections relevant to medical specialties and relevant to broader legal and ethical issues (Population Health and the Considerations of the Legal, Ethical and Organisational Aspects of Medicine [CLEO/PHELO]). In-programme, MMI was the best predictor of OSCE performance, clerkship encounter cards, and clerkship performance ratings. On the MCCQE Part I, MMI significantly predicted CLEO/PHELO scores and clinical decision-making (CDM) scores. None of these assessments were predicted by other non-cognitive admissions measures or uGPA. Only uGPA predicted progress test scores and the MCQ-based specialty-specific subsections of the MCCQE Part I. The MMI complements pre-admission cognitive measures to predict performance outcomes during clerkship and on the Canadian national licensing examination.
Article
Contemporary studies have shown that traditional medical school admissions interviews have strong face validity but provide evidence for only low reliability and validity. As a result, they do not provide a standardised, defensible and fair process for all applicants. In 2006, applicants to the University of Calgary Medical School were interviewed using the multiple mini-interview (MMI). This interview process consisted of 9, 8-minute stations where applicants were presented with scenarios they were then asked to discuss. This was followed by a single 8-minute station that allowed the applicant to discuss why he or she should be admitted to our medical school. Sociodemographic and station assessment data provided for each applicant were analysed to determine whether the MMI was a valid and reliable assessment of the non-cognitive attributes, distinguished between the non-cognitive attributes, and discriminated between those accepted and those placed on the waitlist (waiting list). We also assessed whether applicant sociodemographic characteristics were associated with acceptance or waitlist status. Cronbach's alpha for each station ranged from 0.97 to 0.98. Low correlations between stations and the factor analysis suggest each station assessed different attributes. There were significant differences in scores between those accepted and those on the waitlist. Sociodemographic differences were not associated with status on acceptance or waiting lists. The MMI is able to assess different non-cognitive attributes and our study provides additional evidence for its reliability and validity. The MMI offers a fairer and more defensible assessment of applicants to medical school than the traditional interview.
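The per-station internal-consistency figures quoted in these abstracts are Cronbach's alpha. A minimal sketch of the computation follows, using toy data rather than the study's actual station ratings:

```python
def cronbach_alpha(items):
    """items: one list of scores per checklist item, aligned by candidate.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):
        # Population variance; any variance estimator works if used consistently.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvar(it) for it in items) / pvar(totals))

# Toy example: three perfectly correlated items yield a high alpha (11/12).
alpha = cronbach_alpha([[2, 4, 6], [1, 2, 3], [3, 6, 9]])
```

High alpha within a station combined with low between-station correlations is exactly the pattern the study cites as evidence that stations measure distinct attributes reliably.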
Article
The MMI was introduced into the medical admissions process at the University of Calgary (UofC) in 2006. This report outlines the steps which were involved in its development and our evaluation of the process. The MMI allowed us to interview applicants in one weekend, with fewer interviewers and less time required per interviewer compared to our previous interview process. Most importantly, more than 90% of both the applicants and interviewers found the process to be very acceptable. This process allowed us to ensure that the interview process focused on the non-cognitive traits we are looking for in the students we admit to the UofC.
Article
We wished to determine which factors are important in ensuring interviewers are able to make reliable and valid decisions about the non-cognitive characteristics of candidates when selecting candidates for entry into a graduate-entry medical programme using the multiple mini-interview (MMI). Data came from a high-stakes admissions procedure. Content validity was assured by using a framework based on international criteria for sampling the behaviours expected of entry-level students. A variance components analysis was used to estimate the reliability and sources of measurement error. Further modelling was used to estimate the optimal configurations for future MMI iterations. This study refers to 485 candidates, 155 interviewers and 21 questions taken from a pre-prepared bank. For a single MMI question and 1 assessor, 22% of the variance between scores reflected candidate-to-candidate variation. The reliability for an 8-question MMI was 0.7; to achieve 0.8 would require 14 questions. Typical inter-question correlations ranged from 0.08 to 0.38. A disattenuated correlation with the Graduate Australian Medical School Admissions Test (GAMSAT) subsection 'Reasoning in Humanities and Social Sciences' was 0.26. The MMI is a moderately reliable method of assessment. The largest source of error relates to aspects of interviewer subjectivity, suggesting interviewer training would be beneficial. Candidate performance on 1 question does not correlate strongly with performance on another question, demonstrating the importance of context specificity. The MMI needs to be sufficiently long for precise comparison for ranking purposes. We supported the validity of the MMI by showing a small positive correlation with GAMSAT section scores.
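The projection in this abstract (reliability 0.70 with 8 questions; 14 questions needed to reach 0.80) follows from the Spearman-Brown prophecy formula, sketched here as a minimal illustration:

```python
def spearman_brown(reliability, k_from, k_to):
    """Predicted reliability when a test of k_from parallel questions
    with the given reliability is lengthened (or shortened) to k_to:
    r' = f*r / (1 + (f - 1)*r), where f = k_to / k_from."""
    factor = k_to / k_from
    return factor * reliability / (1 + (factor - 1) * reliability)

# Lengthening an 8-question MMI with reliability 0.70:
r14 = spearman_brown(0.70, 8, 14)   # reaches 0.80
r13 = spearman_brown(0.70, 8, 13)   # still just under 0.80
```

Running this confirms the abstract's figure: 13 questions fall just short of 0.80, so 14 is the smallest MMI length that meets the target, assuming the added questions are parallel to the existing ones.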