The Predictive Validity of the MCAT for Medical School Performance and Medical Board Licensing Examinations: A Meta-Analysis of the Published Research

Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Canada.
Academic Medicine, 02/2007; 82(1):100-106. DOI: 10.1097/01.ACM.0000249878.25186.b7

ABSTRACT: To conduct a meta-analysis of published studies to determine the predictive validity of the MCAT for medical school performance and for medical board licensing examinations.
The authors included all peer-reviewed published studies reporting empirical data on the relationship between MCAT scores and medical school performance or medical board licensing exam measures. Moderator variables, participant characteristics, and medical school performance/medical board licensing exam measures were extracted and reviewed separately by three reviewers using a standardized protocol.
Medical school performance measures from 11 studies and medical board licensing examination measures from 18 studies were selected, for a total of 23 distinct studies (six studies contributed measures of both types). A random-effects model meta-analysis of weighted effect sizes (r) resulted in (1) a predictive validity coefficient for the MCAT in the preclinical years of r = 0.39 (95% confidence interval [CI], 0.21-0.54) and on the USMLE Step 1 of r = 0.60 (95% CI, 0.50-0.67); and (2) the biological sciences subtest as the best predictor of medical school performance in the preclinical years (r = 0.32; 95% CI, 0.21-0.42) and on the USMLE Step 1 (r = 0.48; 95% CI, 0.41-0.54).
The predictive validity of the MCAT ranges from small to medium for both medical school performance and medical board licensing exam measures. The medical profession is challenged to develop screening and selection criteria with improved validity that can supplement the MCAT as an important criterion for admission to medical schools.
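
To make the pooling method concrete, here is a minimal sketch of a random-effects meta-analysis of correlation coefficients (DerSimonian-Laird estimator after Fisher's r-to-z transformation). The (r, n) pairs are hypothetical illustrations, not the studies analysed in this paper.

    import math

    # Hypothetical (r, n) pairs; not the 23 studies analysed in this paper.
    studies = [(0.45, 120), (0.60, 340), (0.55, 210), (0.38, 95)]

    z = [math.atanh(r) for r, n in studies]        # Fisher r-to-z transform
    v = [1.0 / (n - 3) for r, n in studies]        # sampling variance of each z
    w = [1.0 / vi for vi in v]                     # fixed-effect weights

    z_fe = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    Q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, z))   # heterogeneity statistic
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(studies) - 1)) / C)  # between-study variance estimate

    w_re = [1.0 / (vi + tau2) for vi in v]         # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))

    # Back-transform the pooled estimate and its 95% CI to the r metric
    r_pooled, lo, hi = (math.tanh(x) for x in (z_re, z_re - 1.96 * se, z_re + 1.96 * se))
    print(f"pooled r = {r_pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")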

  • "The test addresses a range of cognitive abilities identified as important to the practice of medicine, namely verbal, quantitative and abstract reasoning and decision analysis. As with other aptitude tests worldwide, such as the Medical College Admission Test (MCAT) [12] and the Graduate Australian Medical School Admissions Test (GAMSAT) [13], determining whether the UKCAT is predictive of important outcomes such as medical school examination performance is important to justify its use as a selection tool. However, a distinction should be made between validity coefficients observed from mainly preclinical assessments undertaken in the early years and those in the later, clinical years."
    ABSTRACT: The UK Clinical Aptitude Test (UKCAT) was designed to address issues identified with traditional methods of selection. This study aims to examine the predictive validity of the UKCAT and compare it to traditional selection methods in the senior years of medical school. This was a follow-up study of two cohorts of students from two medical schools who had previously taken part in a study examining the predictive validity of the UKCAT in the first year. The sample consisted of 4th- and 5th-year students who commenced their studies at the University of Aberdeen or University of Dundee medical schools in 2007. Data collected were: demographics (gender and age group); UKCAT scores; Universities and Colleges Admissions Service (UCAS) form scores; admission interview scores; Year 4 and Year 5 degree examination scores. Pearson's correlations were used to examine the relationships between admissions variables, examination scores, gender and age group, and to select variables for multiple linear regression analysis to predict examination scores. Ninety-nine and 89 students at Aberdeen medical school from Years 4 and 5 respectively, and 51 Year 4 students in Dundee, were included in the analysis. Neither UCAS form nor interview scores were statistically significant predictors of examination performance. Conversely, the UKCAT yielded statistically significant validity coefficients between .24 and .36 in four of the five assessments investigated. Multiple regression analysis showed that the UKCAT made a statistically significant unique contribution to variance in examination performance in the senior years. Results suggest that the UKCAT predicts performance better in the later years of medical school than in the earlier years, and they provide modest supportive evidence for the UKCAT's role in student selection within these institutions. Further research is needed to assess the predictive validity of the UKCAT against professional and behavioural outcomes as the cohort commences working life.
    BMC Medical Education 04/2014; 14(1):88. DOI: 10.1186/1472-6920-14-88
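
    A minimal sketch of the kind of analysis described above: Pearson correlations of each admissions variable with examination scores, followed by a multiple linear regression for unique contributions. The data and variable names (ukcat, ucas, interview) are simulated assumptions, not the study's data; the simulation simply builds in a UKCAT effect and no interview effect, mirroring the reported pattern.

        import numpy as np

        # Illustrative only: simulated data, not the study's.
        rng = np.random.default_rng(0)
        n = 100
        ukcat = rng.normal(2500, 250, n)     # UKCAT total score
        ucas = rng.normal(15, 3, n)          # UCAS form score
        interview = rng.normal(20, 4, n)     # admission interview score
        # Simulated exam score with a built-in UKCAT effect only
        exam = 0.3 * (ukcat - 2500) / 250 + rng.normal(0, 1, n)

        # Pearson correlation of each admissions variable with the exam score
        for name, x in [("UKCAT", ukcat), ("UCAS", ucas), ("interview", interview)]:
            print(f"r({name}, exam) = {np.corrcoef(x, exam)[0, 1]:.2f}")

        # Multiple linear regression: exam ~ UKCAT + UCAS + interview
        X = np.column_stack([np.ones(n), ukcat, ucas, interview])
        beta, *_ = np.linalg.lstsq(X, exam, rcond=None)
        print("coefficients (intercept, UKCAT, UCAS, interview):", beta)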
  • "School grades such as grade point average (GPA) and high-stakes ability tests are usually easily administered, cost-efficient and psychometrically sound, but they disregard personality factors that might be crucial for a medical career (e.g. [1-3]). On the other hand, interviews have high face validity [4], but evidence for the reliability and validity of panel interviews is scarce."
    ABSTRACT: Multiple mini-interviews (MMIs) are a valuable tool in medical school selection due to their broad acceptance and promising psychometric properties. Given the high expenses associated with this procedure, the discussion about its feasibility should be extended to cost-effectiveness issues. Following a pilot test of MMIs for medical school admission at Hamburg University in 2009 (HAM-Int), we took several actions to improve reliability and to reduce the costs of the subsequent procedure in 2010. For both years, we assessed overall and inter-rater reliabilities based on multilevel analyses. Moreover, we provide a detailed specification of costs, as well as an extrapolation of the interrelation of costs, reliability, and the setup of the procedure. The overall reliability of the initial 2009 HAM-Int procedure, with twelve stations and an average of 2.33 raters per station, was ICC = 0.75. Following the improvement actions, in 2010 the ICC remained stable at 0.76 despite the reduction of the process to nine stations and 2.17 raters per station. Moreover, costs were cut from $915 to $495 per candidate. With the 2010 modalities, we could have reached an ICC of 0.80 with 16 single-rater stations ($570 per candidate). With respect to reliability and cost-efficiency, it is generally worthwhile to invest in scoring, rater training and scenario development, and it is more beneficial to increase the number of stations than the number of raters within stations. However, pushing reliability beyond 0.80 buys only minor improvement at skyrocketing cost.
    BMC Medical Education 03/2014; 14(1):54. DOI: 10.1186/1472-6920-14-54
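
    The stations-versus-raters trade-off described above can be roughly illustrated with the Spearman-Brown prophecy formula, rel_k = k * rel_1 / (1 + (k - 1) * rel_1). This is a simplification (the study used multilevel analyses, and the formula assumes parallel stations); the single-station reliability of 0.20 below is a hypothetical value chosen so that 12 stations give roughly ICC 0.75 and 16 single-rater stations roughly 0.80, consistent with the figures quoted in the abstract.

        # Spearman-Brown prophecy formula: a simplified view of how reliability
        # grows with the number of (assumed parallel) stations. rel_1 = 0.20 is
        # a hypothetical value, not a figure reported by the study.
        def spearman_brown(rel_1: float, k: int) -> float:
            """Predicted reliability of a k-fold lengthened measurement."""
            return k * rel_1 / (1 + (k - 1) * rel_1)

        rel_1 = 0.20  # assumed reliability of one single-rater station
        for stations in (9, 12, 16, 20):
            print(f"{stations:2d} stations -> ICC ~ {spearman_brown(rel_1, stations):.2f}")
        # 9 -> 0.69, 12 -> 0.75, 16 -> 0.80, 20 -> 0.83: the gains flatten out,
        # which is why pushing past 0.80 gets expensive.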
  • "In terms of specific tools for selection, the North American-based Medical College Admission Test (MCAT) sets the standard worldwide in that it has a rich body of research into its predictive validity for future performance in medical school. Meta-analyses by Kyei-Blankson [2] and by Donnon and colleagues [3] showed varying correlations between MCAT scores and future performance, with the former reporting correlations between the test and academic performance ranging between 0.10 and 0.50, and the latter reporting correlations between 0.39 and 0.60 between the MCAT and licensing exam measures. Mixed results are also reported for UK-based medical entrance examinations, the UK Clinical Aptitude Test (UKCAT) and the BioMedical Admissions Test [4-6]."
    ABSTRACT: Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective on predictive validity in medical admissions. The study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude test score based on the UMAT, school result and interview score) were correlated with GPA over four years of study. Standard regressions of each of the three admissions variables on GPA were also conducted for each institution at each year level. Overall, the data showed positive correlations between performance in medical school, school achievement and UMAT scores, but not interview scores. However, there were substantial differences between schools, across year levels, and within the sections of the UMAT. Despite this, each admission variable was shown to add towards explaining course performance, net of the other variables. The findings suggest the strength of multiple admissions tools in predicting the outcomes of medical students. However, they also highlight the large differences in outcomes achieved by different schools, thus emphasising the pitfalls of generalising results from predictive validity studies without recognising the diverse ways in which they are designed and the variation in the institutional contexts in which they are administered. The assumption that high positive correlations are desirable (or even expected) in these studies is also problematised.
    BMC Medical Education 12/2013; 13(1):173. DOI: 10.1186/1472-6920-13-173
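
    One way to read "each admission variable was shown to add towards explaining course performance, net of the other variables" is as an incremental R-squared comparison: fit the regression with and without a given predictor and look at the drop in variance explained. Below is a hedged sketch on simulated data; the variables and effect sizes are assumptions, not the study's.

        import numpy as np

        # Simulated data only; the umat, school and interview effects are assumptions.
        rng = np.random.default_rng(1)
        n = 650
        umat = rng.normal(0, 1, n)
        school = 0.4 * umat + rng.normal(0, 1, n)   # admissions measures correlate
        interview = rng.normal(0, 1, n)
        gpa = 0.3 * umat + 0.4 * school + rng.normal(0, 1, n)

        def r_squared(X, y):
            """R^2 of an ordinary least-squares fit with an intercept."""
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var()

        predictors = {"UMAT": umat, "school result": school, "interview": interview}
        full = r_squared(np.column_stack(list(predictors.values())), gpa)
        print(f"full-model R^2 = {full:.3f}")
        for name in predictors:
            # Unique contribution: full-model R^2 minus R^2 without this predictor
            rest = np.column_stack([v for k, v in predictors.items() if k != name])
            print(f"Delta R^2 for {name}: {full - r_squared(rest, gpa):.3f}")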