
The Predictive Validity of the MCAT for Medical School Performance and Medical Board Licensing Examinations: A Meta-Analysis of the Published Research

Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Canada.
Academic Medicine (Impact Factor: 3.47). 02/2007; 82(1):100-6. DOI: 10.1097/01.ACM.0000249878.25186.b7
Source: PubMed

ABSTRACT: To conduct a meta-analysis of published studies to determine the predictive validity of the MCAT for medical school performance and medical board licensing examinations.
The authors included all peer-reviewed published studies reporting empirical data on the relationship between MCAT scores and medical school performance or medical board licensing exam measures. Moderator variables, participant characteristics, and medical school performance/medical board licensing exam measures were extracted and reviewed separately by three reviewers using a standardized protocol.
Medical school performance measures from 11 studies and medical board licensing examinations from 18 studies were selected, for a total of 23 distinct studies (some studies reported both types of measures). A random-effects model meta-analysis of weighted effect sizes (r) resulted in (1) a predictive validity coefficient for the MCAT in the preclinical years of r = 0.39 (95% confidence interval [CI], 0.21-0.54) and on the USMLE Step 1 of r = 0.60 (95% CI, 0.50-0.67); and (2) the biological sciences subtest as the best predictor of medical school performance in the preclinical years (r = 0.32; 95% CI, 0.21-0.42) and on the USMLE Step 1 (r = 0.48; 95% CI, 0.41-0.54).
The predictive validity of the MCAT ranges from small to medium for both medical school performance and medical board licensing exam measures. The medical profession is challenged to develop screening and selection criteria with improved validity that can supplement the MCAT as an important criterion for admission to medical schools.
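The pooling step described in the abstract can be sketched in code. The following is a minimal illustration of a standard random-effects meta-analysis of correlation coefficients using the Fisher z transformation and the DerSimonian-Laird estimator; the study values below are hypothetical, and the authors' exact weighting procedure may differ in detail.

```python
import math

def pool_correlations(rs, ns):
    """Random-effects pooled correlation via Fisher z and DerSimonian-Laird.

    rs: per-study correlations; ns: per-study sample sizes.
    Returns (pooled_r, ci_low, ci_high) with a 95% confidence interval.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]       # Fisher z transform
    vs = [1.0 / (n - 3) for n in ns]                           # sampling variance of z
    ws = [1.0 / v for v in vs]                                 # fixed-effect weights
    z_fe = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fe) ** 2 for w, z in zip(ws, zs))       # Cochran's Q
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)                   # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]                     # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    se = math.sqrt(1.0 / sum(ws_re))
    lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
    back = math.tanh                                           # inverse Fisher z
    return back(z_re), back(lo), back(hi)

# Hypothetical Step 1 correlations and sample sizes, for illustration only.
r, lo, hi = pool_correlations([0.55, 0.62, 0.58, 0.65], [120, 85, 200, 150])
```

With these illustrative inputs the pooled correlation comes out near 0.6, in the neighborhood of the Step 1 coefficient reported above.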

Available from: Claudio Violato, Jun 29, 2015
    ABSTRACT: Anatomy instruction has evolved over the past two decades as many medical schools have undergone various types of curricular reform. To provide empirical evidence about whether or not curricular changes impact the acquisition and retention of anatomy knowledge, this study investigated the effect of variation in gross anatomy course hours, curricular approach (stand-alone versus integrated), and laboratory experience (dissection versus dissection and prosection) on USMLE Steps 1 and 2 Clinical Knowledge (CK) scores. Gross anatomy course directors at 54 United States schools provided information about their gross anatomy courses via an online survey (response rate of 42%). Survey responses were matched with USMLE scores for 6,411 examinees entering LCME-accredited schools in 2007 and taking Step 1 for the first time in 2009. Regression analyses were conducted to examine relationships between gross anatomy instructional characteristics and USMLE performance. Step 1 total scores, Step 1 gross anatomy sub-scores, and Step 2 CK scores were unrelated to instructional hours, controlling for MCAT scores. Examinees from schools with integrated curricula scored slightly lower on Steps 1 and 2 CK than those from stand-alone courses (effect sizes of 2.1 and 1.9 on score scales with SDs of 22 and 20, respectively). Examinees with dissection and prosection experience performed slightly better on Step 2 CK than examinees in dissection-only laboratories (effect size of 1.2). Results suggest variation in course hours is unrelated to performance on Steps 1 and 2 CK. Although differences were observed in relation to curricular approach and laboratory experience, effect sizes were small. Anat Sci Educ 6: 3-10. © 2012 American Association of Anatomists.
    Anatomical Sciences Education 01/2013; 6(1):3-10. DOI:10.1002/ase.1343 · 2.98 Impact Factor
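The "effect sizes" quoted in the abstract above (2.1 and 1.9) are raw score differences on the USMLE scale; dividing by the reported score SDs gives standardized effects near 0.1, which is small by conventional benchmarks (below the 0.2 threshold for a "small" effect). A minimal arithmetic check (the function name is ours):

```python
def standardized_effect(raw_diff, sd):
    """Convert a raw score difference to a standardized (Cohen's d-style) effect."""
    return raw_diff / sd

# Values from the abstract: differences of 2.1 and 1.9 points,
# on score scales with SDs of 22 and 20, respectively.
d_step1 = standardized_effect(2.1, 22)   # ≈ 0.095
d_step2 = standardized_effect(1.9, 20)   # ≈ 0.095
```

This is why the authors describe the observed curricular differences as small despite the multi-point raw gaps.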
    ABSTRACT: Online lectures have been used in lieu of live lectures in our gross anatomy and embryology course for the past eight years. We examined patterns of online lecture use by our students and related that use to academic entry measures, gender, and examination performance. Detailed access records identified by student were available from server logs. Total views per page of lecture material increased over the first six years, then decreased markedly between years seven and eight, possibly due to the recent availability of alternate forms of lecture audio. Lecture use peaked in midafternoon and again in the evening, although some use was seen at all hours. Usage was highest at midweek and lowest on Fridays, as might be expected. Individual students' use varied widely, from rates equivalent to less than one viewing per page to more than three viewings per page. Overall use by male students was greater than that of females, and gender-specific differences in the daily pattern were seen. Lecture use was correlated with the Medical College Admission Test® (MCAT®) Verbal Reasoning and Physical Sciences scores but not with composite MCAT scores or undergraduate grade point average. Overall use appeared to be driven by scheduled team-based learning (TBL) sessions and major examinations. Specific subsets of lecture material were most often viewed before related TBL sessions and again during review for examinations. A small but significant correlation between lecture use and both examination and course performance was seen, specifically in the male student population. These findings, along with earlier observations, suggest that varied use of online lectures is attributable to multiple factors. Anat Sci Educ © 2012 American Association of Anatomists.
    Anatomical Sciences Education 11/2012; 5(6). DOI:10.1002/ase.1289 · 2.98 Impact Factor
    ABSTRACT: The admissions interview remains the most common approach used to describe candidates' noncognitive attributes for medical school. In this prospective study, we investigated the predictive validity of a semi-structured interview for admission to medical school based on medical judgment vignettes: (1) ethical decision-making (moral), (2) relationships with patients and their families (altruistic), and (3) roles and responsibilities in professional relationships (dutiful). A group of 26 medical students from the Class of 2007 participated in the interview process and provided their subsequent performance results from clerkship 3 years later. Inter-rater reliability of the scored interviews was high (kappa = 0.96). Our results provided evidence for both convergent and divergent predictive validity. Medical judgment vignette scores correlated significantly with seven mandatory clerkship rotation in-training evaluation reports (r = 0.39, p < 0.05, to r = 0.55, p < 0.01). This semi-structured interview, based on clearly defined and scored medical judgment vignettes that focus on the assessment of medical students' noncognitive attributes, is promising for student selection into medical school. The high reliability and evidence of predictive validity of clinical performance over a 3-year period suggest a workable approach to the assessment of 'compelling personal characteristics' beyond merely cognitive variables.
    Medical Teacher 03/2009; 31(4):e148-55. DOI:10.1080/01421590802512888 · 2.05 Impact Factor
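The inter-rater reliability figure quoted above (kappa = 0.96) is a chance-corrected agreement statistic. The following is a minimal sketch of unweighted Cohen's kappa on hypothetical categorical vignette ratings from two interviewers; the study may have used a weighted variant, and the rating data below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each category's marginal proportions.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical vignette scores (1-3 scale) from two raters, illustration only.
a = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]
b = [3, 2, 3, 1, 2, 3, 2, 2, 3, 2]
kappa = cohens_kappa(a, b)
```

Kappa near 1 indicates near-perfect agreement beyond chance, which is what the reported 0.96 conveys.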