International Education Studies; Vol. 13, No. 5; 2020
ISSN 1913-9020 E-ISSN 1913-9039
Published by Canadian Center of Science and Education
Saudi Standardized Tests and English Competence: Association and
Prediction for Freshmen Medical Students’ Performance in Chemistry
Abdulaziz Althewini1
1 King Saud bin Abdulaziz University for Health Sciences and King Abdullah International Medical Research
Center, Saudi Arabia
Correspondence: Abdulaziz Althewini, P.O. Box 22490, Riyadh 11426, Saudi Arabia.
Received: December 5, 2019 Accepted: January 14, 2020 Online Published: April 18, 2020
doi:10.5539/ies.v13n5p144 URL: https://doi.org/10.5539/ies.v13n5p144
Abstract
This study investigates how well admission criteria predict medical students' achievement in chemistry in Saudi Arabia. It examines whether the General Aptitude Test (GAT), the Scholastic Achievement Admission Test (SAAT), and English competence can, to a certain extent, predict students' achievement in chemistry. The study sample consists of 240 participants, who provided their grades in the admission criteria and in chemistry. Regression analyses were used to determine the predictive weight of each admission criterion for student achievement in chemistry. The analyses show that the admission criteria together predict students' grades in chemistry, explaining 30% of the variance. The results also show that English competence plays the more significant role in predicting students' performance in chemistry. More research is needed to examine whether these criteria remain predictive for a larger student population.
Keywords: standardized tests, English competence, chemistry education, predictive validity, medical education,
college admission
1. Introduction
There is widespread concern in the Saudi academic community about the validity of the national standardized
tests on which college admission is partly based. These tests are the General Aptitude Test (GAT) and Scholastic
Achievement Admission Test (SAAT). The GAT is intended to test students’ critical thinking skills and
mathematical reasoning. The SAAT is meant to examine students’ comprehension of foundational ideas in
chemistry, biology, physics, and mathematics covered in high school. Students complain that such tests consume
a lot of their time and effort, and yet are not necessarily related to their college learning trajectory. In addition,
English competence has been used in Saudi colleges as another indicator of students' likely success in college, measured either by international or in-house standardized tests or through intensive preparatory language programs. Whether English competence, along with the GAT and SAAT, actually predicts students' performance in college remains an open question. Such a major concern should be taken seriously in order to improve the Saudi college admission system, and it merits extensive study across several research projects.
The present study examines the effectiveness of these standardized tests and their legitimacy for college admission. Specifically, it tests how well the GAT and SAAT, as well as English competence, predict freshmen's achievement in an introductory chemistry course in their first year of
college. Chemistry teachers want to know whether students' English proficiency helps them perform well in the chemistry course or whether they need additional language-learning support. The teachers are also interested in whether there is a connection between students' recent GAT, SAAT, and high school results and their performance in the chemistry course, so that they can adapt their curriculum and teaching methods. The study invites college educators to think broadly and critically about the validity of using both the GAT and SAAT standardized tests and English competence for college admission, and to consider how the three interrelate with chemistry as a science subject. It will specifically help English and science educators map the connection between students' performance in English and in science and determine whether any consistent patterns exist.
The study takes place in the preparatory year program at King Saud bin Abdulaziz University for Health
Sciences (KSAU-HS). Students undertake advanced English courses in their first semester and go on to science
courses in their second, including biology, chemistry, and physics. Based on their cumulative GPA across the two semesters, students are then enrolled in their specific college. The program aims to evaluate students' readiness for college and nominates them for different medical majors, including medicine, dentistry, pharmacy, and applied medical sciences. Within the current literature on admission criteria and their
predictive role for medical students, the present research is the first local Saudi study to take an in-depth
approach to test the prediction of GAT and SAAT and English competence for students’ achievement in
chemistry.
Researchers in various countries have studied medical college admission intensively. One question is common to these studies: whether the current system of medical college admission offers fair opportunities to all students and accurately measures and predicts students' later performance (Schwartz, 2004; Roberts & Prideaux, 2010; McManus et al., 2011; Prideaux et al., 2011). There are different ways to assess students' skills before admission (Evans & Wen, 2007; McManus et al., 2003; Groves et al., 2007); the most common approach among medical colleges is to combine cognitive achievement with various student personality characteristics (Albanese et al., 2003; Benbassat & Baumal, 2007).
KSAU-HS, like many other universities, employs a combination of admission criteria: students' high school grades, a test of reasoning abilities (in the Saudi context, the GAT), and an interview (Julian, 2005; Peskun et al., 2007). Universities differ in which components are counted and the numerical weight assigned to each (Parry et al., 2006). When these components are combined, they predict students' performance in college to a certain extent (Ferguson et al., 2003). Thus, KSAU-HS, along with other universities, takes a holistic approach to evaluating students' performance before college.
Moreover, KSAU-HS assigns a relatively low weight to students' high school grades (30%), even though high school grades are seen, in other contexts, as a more predictive and reliable tool (Ferguson et al., 2002; McManus et al., 2003; Coates, 2008; Wright & Bradley, 2010; Wilkinson et al., 2008). Most Saudi universities do not place significant weight on high school grades compared to the other components, the GAT and SAAT. This may indicate reduced trust in Saudi public high school education, leading these universities not to rely heavily on it.
Within this discussion of admission criteria for medical colleges and the KSAU-HS admission approach as a
Saudi university, this study will serve the university as well as the wider academic community by providing an in-depth statistical analysis of the relationship between admission criteria and students' performance in chemistry
in their first year of college.
2. Overview
The chemistry course in the KSAU-HS pre-professional program is designed for beginner and non-major
chemistry students. Students are expected to learn about two major subjects: general and organic chemistry. In the
first part of the course, students focus on general chemistry, learning specifically about:
Chemical Foundations.
Atomic structures, chemical bonding, and electron configuration.
Types of chemical reactions and oxidation-reduction.
Mole concept, chemical equations, and reaction stoichiometry.
Aqueous solutions.
Acids, bases, and buffers.
In the second part of the course, students learn about organic chemistry, focusing on:
Structure and bonding in organic molecules.
Functional groups and chemistry of carbon.
Saturated and unsaturated hydrocarbons: alkanes, alkenes, and alkynes.
Aromatic compounds.
Stereochemistry.
Students are then assessed through two midterm exams and a final exam, with English as the medium of instruction.
As for measuring students’ English competence, this study employs two methods. The first takes the average of students’ scores in the English courses they take in their first semester; these courses cover reading, grammar, and communication. The second measures students’ reading and communication proficiency through separate tests reviewed and edited by expert language teachers. The goal of the reading test is to assess students’ ability to understand written academic texts, with a focus on inference. In the test, students must demonstrate their ability to find the main idea and supporting examples in a reading passage. They must also apply critical-thinking strategies to infer meaning beyond and between the lines, distinguish beliefs from scientific facts and causes from effects, and interpret diagrams and charts. The reading test also emphasizes students’ awareness of vocabulary and parts of speech.
In the communication test, students should be able to use diverse writing styles, including comparison and cause and effect. They should be able to reword, summarize, and compose an essay. Students are specifically required to demonstrate pre-determined communication skills, which include: analyzing and drawing conclusions from a reading passage, understanding the discourse organization of the text, framing a topic sentence within a clear and cohesive paragraph, and writing a paragraph with one main idea and supporting examples.
Both tests, together with the English average score, form the English competence measures employed in this research to predict students’ achievement in chemistry.
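To make the construction of these measures concrete, the following is a minimal Python sketch of how the three English-competence components described above could be assembled per student. The function and score names are hypothetical illustrations, not taken from the study.

```python
# Hypothetical illustration: assembling the three English-competence
# components used as predictors in this study. All names are illustrative.
def english_components(reading_course, grammar_course, communication_course,
                       reading_test, communication_test):
    """Return the English average plus the two proficiency-test scores."""
    english_avg = (reading_course + grammar_course + communication_course) / 3
    return {"english_avg": english_avg,             # mean of first-semester courses
            "reading_test": reading_test,           # separate reading test score
            "communication_test": communication_test}  # separate communication test

# Example with made-up scores:
print(english_components(4.5, 4.0, 4.2, 3.8, 4.1))
```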
3. Research Questions
The research questions investigate how well the GAT and SAAT, as well as English competence, predict students’ achievement in chemistry. They are the following:
Does each admission criterion, taken individually as an independent variable, predict students’ achievement in chemistry?
When the admission criteria are combined statistically, how much of the variance in students’ achievement in chemistry do they explain?
4. Method
This study gathered the grades for the English average, the reading and communication tests, the GAT, the SAAT, and chemistry from 240 male students. The data were then entered into SPSS and analyzed with regression models. The independent variables were the English average score, reading test, communication test, GAT, and SAAT, and the chemistry score was the dependent variable.
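The study ran its analyses in SPSS; as an illustration only, the following Python sketch shows an equivalent set of analyses using pandas and statsmodels, assuming a hypothetical data file and column names.

```python
# A sketch of the analyses described above, assuming a hypothetical CSV
# "admission_data.csv" with one row per student. The study itself used
# SPSS; this is an equivalent illustration, not the original script.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("admission_data.csv")  # hypothetical file and column names
predictors = ["english_avg", "reading_test", "communication_test", "gat", "saat"]
y = df["chemistry"]

# Simple linear regression: one model per admission criterion (cf. Table 1).
for p in predictors:
    model = sm.OLS(y, sm.add_constant(df[[p]])).fit()
    print(f"{p}: R-square = {model.rsquared:.3f}, p = {model.pvalues[p]:.3f}")

# Multiple regression with all criteria entered together (cf. Tables 2-3).
combined = sm.OLS(y, sm.add_constant(df[predictors])).fit()
print(combined.summary())  # R-square, coefficients, t values, p-values
```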
5. Results
To examine each predictor individually, the study used simple linear regression. Table 1 shows the variance explained by each predictor and its significance for the chemistry grade. Individually, the English average has the highest predictive power (R-square = 24.6%), followed by the reading test (R-square = 15.7%) and then the communication test (R-square = 14.4%). The GAT and SAAT are weak predictors of the dependent variable (R-square = 3.4% and 1.8%, respectively). The multivariate regression model is stronger, explaining 30.1% of the variance in the dependent variable, as shown in Table 2. Table 3 shows that two metrics, the English average (p < 0.0005) and the communication test (p < 0.05), are the most significant predictors of the chemistry grade. The fitted regression equation for the chemistry score is: chemistry score = 0.254 + 0.150 × communication test + 0.071 × reading test + 0.627 × English average + 0.014 × GAT − 0.008 × SAAT.
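As a usage illustration, the fitted equation can be applied to a new student's scores. A minimal Python sketch follows; the input values are invented, and the assumed score scales (course grades out of 5, GAT and SAAT out of 100) are assumptions for illustration, not details reported in the study.

```python
# Applying the regression equation reported above. Input values are made
# up, and the assumed scales (grades out of 5, GAT/SAAT out of 100) are
# an assumption for illustration only.
def predict_chemistry(english_avg, communication_test, reading_test, gat, saat):
    """Predicted chemistry grade from the study's combined model."""
    return (0.254
            + 0.150 * communication_test
            + 0.071 * reading_test
            + 0.627 * english_avg
            + 0.014 * gat
            - 0.008 * saat)

print(predict_chemistry(english_avg=4.5, communication_test=4.0,
                        reading_test=4.2, gat=85, saat=80))  # ≈ 4.52
```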
Table 1. Prediction of each independent variable

Model  Variable            R      R Square  Adjusted R Square  Regression Coefficient  Std. Error  t      p-value
1      GAT                 0.183  .034      .030               0.026                   0.009       2.889  0.004
2      SAAT                0.136  .018      .014               0.019                   0.009       2.111  0.035
3      English average     0.496  .246      .243               0.809                   0.093       8.699  0.000
4      Reading test        0.396  .157      .154               0.316                   0.047       6.723  0.000
5      Communication test  0.380  .144      .141               0.353                   0.056       6.304  0.000
Table 2. Prediction of combined independent variables

Model  Variables                R      R Square  Adjusted R Square  Std. Error of the Estimate
1      CT, GAT, SAAT, ENGL, RT  0.549  .301      .286               0.627

Predictors: (Constant), Communication test (CT), SAAT, GAT, English average (ENGL), Reading test (RT).
Table 3. Coefficients analysis

Model 1             Unstandardized B  Std. Error  Standardized Beta  t       Sig.
(Constant)          0.254             0.900       —                  0.282   .778
GAT                 .014              .009        .097               1.589   .113
SAAT                -.008             .009        -.057              -0.936  .350
English average     .627              .108        .385               5.822   .000 (p < 0.0005)
Reading test        .071              .063        .088               1.127   .261
Communication test  .150              .069        .160               2.165   .031 (p < 0.05)

Dependent variable: chemistry grade.
6. Discussion
Analysis of these results yields some interesting findings. Each of the predictors is individually significant for students’ performance in chemistry. However, the English average and the reading and communication tests were the best predictors, whereas the GAT and SAAT have significantly lower predictive value. It was not surprising that the English average, along with students’ performance in reading and communication skills, strongly predicts the chemistry grade: English faculty at KSAU-HS have informally observed a connection between performance in English and performance in chemistry. When students have solid English skills, they generally tend to perform well in science courses.
However, what is surprising is that the GAT and SAAT are individually weak predictors (R-square = 3.4% and 1.8%, respectively). Even in the combined model, neither is a significant predictor, as the coefficients analysis confirms. Such results call for further inquiry. This study focused on one science course (chemistry); does the prediction of the GAT and SAAT for other science courses follow the same pattern? If so, what is the problem? Why is there so little connection between these tests and students’ performance in college? One possible reason is that the GAT and SAAT are designed for specific purposes: evaluating students’ learning in high school and broadly assessing their intellectual abilities. They are not aimed at, or suitable for, confirming students’ readiness for college or predicting their success in it, as the results of this study show.
Also notable is that standardized tests in countries other than Saudi Arabia generally show limited predictive value (McManus et al., 2005a, 2005b). In addition, the predictive power of standardized tests decreases over the four years of college, as Geiser and Santelices (2007) report. This challenges the conventional view that standardized tests are good predictors of students’ college success; although these tests are methodologically rigorous and provide an easy, uniform yardstick for evaluating students’ skills, that does not make them reliable predictors of success.
Moreover, the combined model of independent variables explains 30% of the variance in the chemistry grade (R-square), indicating a moderate relationship between the predictors and the chemistry grade. In other studies on the prediction of admission criteria, the variance explained by a combined model is usually low to moderate, with a large proportion left unexplained (Callahan et al., 2010; James et al., 2010; Lynch et al., 2009; Evans & Wen, 2007). In this study, 70% of the variance is unexplained, consistent with reports that the unexplained variance in predictive studies may reach 70% (Ferguson et al., 2002).
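As a one-line check of the variance arithmetic above (a restatement of Table 2, not a new analysis):

```latex
% Variance decomposition implied by the combined model (Table 2)
R^2 = 0.301 \approx 30\%\ \text{(explained)} \qquad 1 - R^2 = 0.699 \approx 70\%\ \text{(unexplained)}
```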
It is recommended that, for better prediction in college admission, more variables should be included and tested to see whether they relate to students’ performance in college (Benbassat & Baumal, 2007). These variables include admission interviews, personality tests, and high school internships. In addition, when colleges employ a combination of several admission criteria, that practice should be tested and the results shared with the academic community (Ferguson et al., 2003; Parry et al., 2006).
7. Conclusion
Students’ performance in an introductory chemistry course at KSAU-HS is partially dependent on their English average and their success in the communication and reading tests. The combined model explains only 30% of the variance in chemistry, confirming that the GAT and SAAT are not significant predictors. However, this study has a limited number of students, and although it presents some interesting findings, it would be preferable to conduct a large national study for the Saudi community to evaluate such findings and refine them for better educational progress and college admission. Saudi educators are invited to use the admission data available at their universities and conduct further studies on the prediction of admission criteria for students’ performance in college and in particular majors and subjects. This would be a remarkable addition to the library of Saudi research and to the global inquiry into predictive admission criteria.
References
Albanese, M. A., Snow, M. H., Skochelak, S. E., Huggett, K. N., & Farrell, P. M. (2003). Assessing personal
qualities in medical school admissions. Academic Medicine, 78(3), 313-321.
https://doi.org/10.1097/00001888-200303000-00016
Benbassat, J., & Baumal, R. (2007). Uncertainties in the selection of applicants for medical school. Advances in
Health Sciences Education, 12(4), 509-521. https://doi.org/10.1007/s10459-007-9076-0
Callahan, C. A., Hojat, M., Veloski, J., Erdmann, J. B., & Gonnella, J. S. (2010). The predictive validity of three
versions of the MCAT in relation to performance in medical school, residency, and licensing examinations: A
longitudinal study of 36 classes of Jefferson Medical College. Academic Medicine, 85(6), 980-987.
https://doi.org/10.1097/ACM.0b013e3181cece3d
Coates, H. (2008). Establishing the criterion validity of the graduate medical school admissions test (GAMSAT).
Medical Education, 42(10), 999-1006. https://doi.org/10.1111/j.1365-2923.2008.03154.x
Evans, P., & Wen, F. K. (2007). Does the medical college admission test predict global academic performance in
osteopathic medical school? Journal of the American Osteopathic Association, 107(4), 157.
Ferguson, E., James, D., & Madeley, L. (2002). Factors associated with success in medical school: Systematic
review of the literature. British Medical Journal, 324(7343), 952-957.
https://doi.org/10.1136/bmj.324.7343.952
Ferguson, E., McManus, I. C., James, D., O’Hehir, F., & Sanders, A. (2003). Pilot study of the roles of
personality, references, and personal statements in relation to performance over the five years of a medical
degree. British Medical Journal, 326(7386), 429-432. https://doi.org/10.1136/bmj.326.7386.429
Geiser, S., & Santelices, M. (2007). Validity of high-school grades in predicting student success beyond the
freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes. Center
for Studies in Higher Education, UC Berkeley. Retrieved from
http://cshe.berkeley.edu/publications/publications.php?id=265
Groves, M. A., Gordon, J., & Ryan, G. (2007). Entry tests for graduate medical programs: Is it time to re-think?
Medical Journal of Australia, 186(9), 486. https://doi.org/10.5694/j.1326-5377.2007.tb01013.x
James, D., Yates, J., & Nicholson, S. (2010). Comparison of A level and UKCAT performance in students applying to UK medical and dental schools in 2006: Cohort study. British Medical Journal, 340, c478. https://doi.org/10.1136/bmj.c478
Jessee, S. A., O’Neil, P. N., & Dosch, R. O. (2006). Matching student personality types and learning preferences
to teaching methodologies. Journal of Dental Education, 70, 644-651.
Julian, E. R. (2005). Validity of the Medical College Admission Test for predicting medical school performance.
Academic Medicine, 80(10), 910-917. https://doi.org/10.1097/00001888-200510000-00010
Lynch, B., MacKenzie, R., Dowell, J., Cleland, J., & Prescott, G. (2009). Does the UKCAT predict Year 1
performance in medical school? Medical Education, 43(12), 1203-1209.
https://doi.org/10.1111/j.1365-2923.2009.03535.x
McManus, I. C., Ferguson, E., Wakeford, R., Powis, D., & James, D. (2011). Predictive validity of the
Biomedical Admissions Test: An evaluation and case study. Medical Teacher, 33(1), 53-57.
https://doi.org/10.3109/0142159X.2010.525267
McManus, I. C., Iqbal, S., Chandrarajan, A., Ferguson, E., & Leaviss, J. (2005a). Unhappiness and dissatisfaction
in doctors cannot be predicted by selectors from medical school application forms: A prospective,
longitudinal study. BMC Medical Education, 5(1), 38. https://doi.org/10.1186/1472-6920-5-38
McManus, I. C., Powis, D. A., Wakeford, R., Ferguson, E., James, D., & Richards, P. (2005b). Intellectual aptitude
tests and A levels for selecting UK school leaver entrants for medical school. British Medical Journal,
331(7516), 555-559. https://doi.org/10.1136/bmj.331.7516.555
McManus, I. C., Smithers, E., Partridge, P., Keeling, A., & Fleming, P. R. (2003). A levels and intelligence as
predictors of medical careers in UK doctors: 20 year prospective study. British Medical Journal, 327(7407),
139-142. https://doi.org/10.1136/bmj.327.7407.139
Parry, J., Mathers, J., Stevens, A., Parsons, A., Lilford, R., Spurgeon, P., & Thomas, H. (2006). Admissions
processes for five year medical courses at English schools. British Medical Journal, 332(7548), 1005-1009.
https://doi.org/10.1136/bmj.38768.590174.55
Peskun, C., Detsky, A., & Shandling, M. (2007). Effectiveness of medical school admissions criteria in
predicting residency ranking four years later. Medical Education, 41(1), 57-64.
https://doi.org/10.1111/j.1365-2929.2006.02647.x
Prideaux, D., Roberts, C., Eva, K., Centeno, A., McCrorie, P., McManus, I. C., ... & Wilkinson, D. (2011).
Assessment for selection for the health care professions and specialty training: consensus statement and
recommendations from the Ottawa 2010 Conference. Medical Teacher, 33(3), 215-223.
https://doi.org/10.3109/0142159X.2011.551560
Roberts, C., & Prideaux, D. (2010). Selection for medical schools: Reimaging as an international discourse.
Medical Education, 44(11), 1054-1056. https://doi.org/10.1111/j.1365-2923.2010.03852.x
Schwartz, S. (2004). Fair admissions to higher education: recommendations for good practice. London: Higher
Education Steering Group.
Searle, J., & McHarg, J. (2003). Selection for medical school: just pick the right students and the rest is easy!
Medical Education, 37(5), 458-463. https://doi.org/10.1046/j.1365-2923.2003.01496.x
Sefcik, D. J., Prerost, F. J., & Arbet, S. E. (2009). Personality types and performance on aptitude and achievement
tests: Implications for osteopathic medical education. Journal of American Osteopathic Association, 109(6),
296-301.
Turnbull, D., Buckley, P., Robinson, J. S., Mather, G., Leahy, C., & Marley, J. (2003). Increasing the evidence
base for selection for undergraduate medicine: Four case studies investigating process and interim outcomes.
Medical Education, 37(12), 1115-1120. https://doi.org/10.1111/j.1365-2923.2003.01716.x
Wilkinson, D., Zhang, J., Byrne, G. J., Luke, H., Ozolins, I. Z., Parker, M. H., & Peterson, R. F. (2008). Medical
school selection criteria and the prediction of academic performance. Medical Journal of Australia, 189(4),
235. https://doi.org/10.5694/j.1326-5377.2008.tb01998.x
Wright, S. R., & Bradley, P. M. (2010). Has the UK Clinical Aptitude Test improved medical student selection?
Medical Education, 44(11), 1069-1076. https://doi.org/10.1111/j.1365-2923.2010.03792.x
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/4.0/).