Vaal University of Technology, Van der Bijl Park, Gauteng, South Africa. E-mail:
Centre for Innovation in Learning and Teaching (CILT), University of Cape Town,
Private Bag, Rondebosch, 7700, South Africa. E-mail:
Whilst performance in the school-leaving examination may be a good predictor of
academic achievement at medical schools, it is not necessarily a perfect one. The
Health Sciences Placement Tests (HSPTs), comprising four components, were
adopted by several South African universities as a tool to understand student
preparedness. Of 127 first-year students at the University of the Witwatersrand in
2010, those from private schools performed significantly better academically than
their public school counterparts on overall HSPT performance and in the Academic
Language test, and marginally better in the Mathematics Achievement and
Mathematics Comprehension tests. Students from private schools performed better
at first-year level in the subjects of Psychology and Fundamentals of Medical and
Clinical Sciences. The Academic Language and Mathematics Comprehension tests
showed significant correlations with performance in first-year subjects, both at mid-
year and year-end assessments. The study points to the importance of the HSPTs as
an additional tool in predicting and understanding academic success at first-year
university level.
Keywords: higher education readiness; educational background; learning potential
tests; academic success
Universally, the medical fraternity has required admission of academically excellent
students to a curriculum with strong theoretical and scientific content. Medical school
admission is a continuing topic of interest in education. Intellectual challenge and the
wish to achieve are among the primary motives for choosing a career in medicine
(Johnson et al. 1998). Medical practitioners are expected to possess an
extraordinary blend of academic and personal attributes, in order to interact with
patients and other medical staff. Consequently, medical schools worldwide
constantly review their admissions processes and the criteria that classify potential
candidates as suitable for entry.
Medical school admission policies are multifaceted and incorporate selection on the
basis of academic achievement at high school or university, together with a
test/interview to assess the personal qualities of the candidate (De Clercq, Pearson
and Rolfe 2001). It has been reported that pre-admission, structured interviews may
provide useful additional information not necessarily provided by other selection
processes (Cliff and Hanslo 2009). However, studies have indicated that interviews –
though an important potential component of the admissions process – do not reliably
predict the performance of a student academically, or in the clinical setting (Basco et
al. 2008; Elam and Johnson 1992). Medical admission tests are increasingly used as
an additional source of information to help selectors differentiate between high
achievers and to compare students from different educational backgrounds (Emery
and Bell 2009; McManus et al. 2011). Edwards, Elam and Wagoner (2001) proposed
a model for selection which considered the following components:
1. The applicant pool
2. Criteria for selection
3. The admission committee
4. Selection processes and policies
As a consequence of the growing diversity of students applying to medical schools
from a pool of well-qualified applicants (Al Alwan et al. 2013) and because of the
inequalities of schools in South Africa, there is a need to make use of selection
criteria other than results obtained in the school-leaving examination. Such criteria
are designed to level the playing fields for students from various and diverse socio-
economic and educational backgrounds. A meta-analysis of student achievement
reported by McManus (2002) showed that school attainment in general successfully
predicts performance at medical school. However, in order to adjust selection as a
result of poor schooling encountered by many potentially excellent students from
educationally disadvantaged backgrounds (especially Black students), there is an
argument for lowering entry requirements. This is not an unproblematic solution:
research shows that lowering entry requirements increases the short-term risk of
students dropping out of medical school or the longer term risk of the poorer-qualified
medical entrants becoming less competent doctors (McManus 2002). Similarly,
other meta-analytic research has clearly shown that General Cognitive Ability (GCA)
is a moderate to strong predictor of occupational achievement and relevant
performance (Bore, Munro and Powis 2009). Although there is an indisputable need
to redress demographic and socio-economic imbalances, especially as a result of the
apartheid legacy in South Africa, lowering the entry standards to medical school is
not the answer to accommodate students from previously disadvantaged backgrounds.
South African medical schools have recognised the need for transformation and
consider academic and non-academic factors in the selection process. Academic
criteria were mostly compiled according to the school-leaving examination pass rate
and subject choices. Owing to the changing of the school evaluation system, the
Faculty of Health Sciences at the University of the Witwatersrand, as well as other
medical schools at South African universities, have introduced additional criteria,
apart from academic performance in the penultimate and final schooling years, in the
selection of medical students.
The selection criteria need to be both reliable and valid in order to ensure successful
academic performance at university level within a medical school programme. In the
United Kingdom, the Biomedical Admissions Test (BMAT), interviews and personal
statements are designed to serve as an adjunct to examination results. This process
is intended to provide a global scoring of eligible students (Emery and Bell 2009).
South African medical schools have adopted a similar strategy by using the Health
Sciences Placement Tests (HSPTs) developed by the Alternate Admissions
Research Project (AARP – now the Centre for Educational Testing for Access and
Placement). The HSPTs have shown that performance in the first year is better
predicted by a set of tests that, in part, are similar to the scientific knowledge section
of the BMAT, particularly for students from educationally disadvantaged backgrounds
(Cliff and Hanslo 2009; Cliff and Montero 2010). In the United States,
undergraduate Science scores have also been shown to be strong predictors of
standardised test performance in the medical school curriculum (Basco et al. 2002).
Criteria used to select students arguably need to be effective in predicting competent
performance, both during the course and after graduation (Kay-Lambkin, Pearson
and Rolfe 2002), but to date no single comprehensive and definitive medical student
selection model has been described (Bore, Munro and Powis 2009). Traditionally,
admissions policies have focused on the selection of applicants with high academic
scores (Kay-Lambkin, Pearson and Rolfe 2002) for the obvious reasons of the high
academic demands of a medical degree. Policies for selection of students were
traditionally based on the assumption of a strong relationship between academic
ability and success in medical school examinations.
Academic and non-academic criteria have to be applied in the selection process,
much of which has been established on intuitive grounds and without any evidence
base (James and Chilvers 2001). It therefore becomes critical for medical schools to
validate their selection processes on an ongoing basis, as the educational climate
changes and the attributes of graduating doctors must constantly be considered
against the needs of the patient population they serve. Every medical school should identify
those objective factors which predict success on their course and incorporate them
into their selection process (James and Chilvers 2001). Doctors need specialist
knowledge and a complementary array of skills and personality traits if they are to be
professionally competent (Powis 2010), which also suggests that any competency
list for a generic medical practitioner should comprise the following:
1. Excellent academic ability
2. Good cognitive skills
3. Ability to use academic knowledge appropriately in quantitative, verbal and
spatial domains.
Although changes in the selection policies began to take place prior to 1994 and the
intake of medical students in South Africa showed progress with regard to changing
the demographic profile (which demonstrated an improved representation of the
more disadvantaged groups in 1999 as compared to 1994), equitable representation
still remained a challenge that needed to be addressed (Cliff and Yeld 2006). In this
context, a complementary selection mechanism was introduced as part of the
process of selecting medical school students in South Africa. The Health Sciences
Placement Tests (HSPTs) – developed by the then Alternative Admission Research
Project (AARP) – were introduced at seven of the eight medical schools in South
Africa and adopted from 2003 as an additional method of gathering information for
the selection of future medical students. The HSPTs consisted of four tests, which
included generic testing of language applied to an academic context, mathematical
achievement and mathematical comprehension, and scientific reasoning. The HSPTs
were developed by interdisciplinary teams of experts over a time span of several
years and constituted the following tests (Cliff and Hanslo 2009):
The Placement Test in English for Educational Purposes (PTEEP), which is
aimed at assessing students' ability to make meaning of texts that they are
likely to encounter in their studies and understand visually presented textual
information, by using processes such as separating superordinate from
subordinate information; applying inferential reasoning; interpreting features of
academic discourse; and understanding analogous thinking.
The Mathematics Achievement (MACH) test, which measures the extent of a
student’s backlog in basic mathematical knowledge and skills normally
expected to have been acquired by the time the student reaches a senior
secondary school mathematics phase.
The Mathematics Comprehension (MCOM) test, which is designed to provide
information concerning the student’s potential to learn new mathematical
knowledge and skills.
The Scientific Reasoning Test (SRT), which is aimed at assessing the
student's capacity to engage in the type of logical, evidence-based thinking
typically required of students in higher education.
Given the historical context mentioned, the tests were designed to obtain information
about the potential of students to cope with the typical academic and cognitive
demands of higher education. Additionally, the goal of the HSPTs was to enable
talented students whose education had been particularly compromised by unequal
schooling, to demonstrate the extent to which they would be able to cope in higher
education contexts where there would be high levels of academic and non-academic
support and mentoring. These tests were regarded as a diagnostic benchmark of
students' entry-level performance (AARP, 2004), a benchmark which could then be
incorporated into the selection and curriculum placement of students.
Studies have shown correlations between selection test scores and performance
where aptitude selection instruments that assess science, mathematics and linguistic
capabilities of selected candidates were significantly predictive of in-course
performance of students in colleges in Saudi Arabia (Al Alwan et al. 2013).
Internationally, results regarding formal undergraduate selection measures are
mixed, but such measures remain useful and important components in predicting
student performance. Biomedical admission tests have to be valid and reliable
predictive indicators of student eligibility and success, and aspects such as verbal
and numerical reasoning have been strong predictors of student success (Emery and
Bell 2009). Ultimately, communication and interpersonal skills have to be balanced
against academic and scientific ability and this still remains a major challenge for
medical schools worldwide.
Generally in South African medical school selection processes, academic pre-
admission criteria include the prospective student’s final secondary school mark, for
example, a composite grading of all final-year schooling scores, as well as the scores
on a set of pre-admission tests. Although previous academic performance is a good
predictor of success on the medical programme (Ferguson, James and Madeley
2002; Lumb and Vail 2004), it is not a perfect one. For example, one study has
shown that it accounted for 23% of the variance in performance in undergraduate
medical training and only 6% in post-graduate competency (Ferguson, James and
Madeley 2002).
The aim of the current study was to investigate the degree of association between
the scores on the specific test components of the HSPTs and the scores on the in-
course mid-year and final examinations for students in their first year of study
towards a medical degree. First-year academic performance has historically been
shown to be a critical filter of students entering medical school (Ruscingo, Pinto Zipp
and Olson 2010). In terms of the tests, a composite score provides an average of the
four tests that make up the HSPTs. For the purposes of this study, the individual
component scores were teased out and these were analysed against the first-year
subjects in order to identify specific predictor domains of
student success in the first year. Additionally, attempts were made to control for
variation in pre-admission test and academic performance scores by demographic
variables such as gender, and student school background, since these variables are
known historically to be associated with differences in academic performance.
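As described above, the composite score is an average of the four HSPT component tests. A trivial sketch, assuming an unweighted average (all score values below are invented for illustration):

```python
# Hypothetical sketch: the HSPT composite score as the unweighted average of
# the four component test scores. All values below are invented.

def composite_index(pteep, mach, mcom, srt):
    """Unweighted mean of the four HSPT component scores (percentages)."""
    return (pteep + mach + mcom + srt) / 4

ci = composite_index(pteep=64, mach=58, mcom=61, srt=57)  # → 60.0
```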
The medical curriculum at the University of the Witwatersrand spans a minimum
period of six years, the first two of which could be considered pre-clinical years
comprising basic sciences, anatomy and physiology subjects. A retrospective study
was performed where in-course performance for the basic and human sciences was
assessed in conjunction with the pre-admission test results of the students who had
been admitted for the 2010 intake.
Four pre-admission predictors of performance were examined and considered for
this study:
1. PTEEP*
2. MACH*
3. MCOM*
4. SRT*
(* Defined above)
These scores and a composite HSPT score were assessed in conjunction with the
mid-year and final first-year results in the following subjects:
1. Physics
2. Chemistry
3. Biology
4. Fundamentals of Medical and Clinical Sciences (SCMD)
5. Sociology
6. Psychology
The component results for pre-admission and corresponding first-year results were
provided by the university with the permission of the Dean of the Faculty of Health
Sciences. Variables analysed included student numbers, the mean scores of each
component test and a composite as well as the class mean values for the first-year
subjects chosen for scrutiny.
Database management and statistical analyses were performed with SAS software,
version 9.1 (SAS Institute Inc., Cary, NC, USA). Results from each component of the
Composite Index are reported as mean ±SD. Unadjusted means were compared by
t-test or Wilcoxon–Mann–Whitney tests when appropriate. Spearman correlation
coefficients were calculated between the pre-admission test components and the
first-year subjects, unadjusted and after gender adjustment. Stepwise multiple linear
regression analysis was performed to assess independent relations between pre-
admission test components and the first-year subject marks, with appropriate
adjustors. A p-value of <0.05 was considered statistically significant.
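By way of illustration, the Spearman correlation used here is the Pearson correlation of the two rank vectors. A minimal pure-Python sketch, assuming invented MCOM scores and Biology marks (the study itself used SAS 9.1, and the real data are not reproduced):

```python
# Illustration of the Spearman rank correlation used in the analyses.
# All score values are invented; the published study used SAS version 9.1.

def ranks(xs):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pre-admission MCOM scores and mid-year Biology marks:
mcom = [55, 62, 48, 71, 66, 59, 80, 52]
biology = [58, 65, 50, 75, 60, 61, 82, 49]
rho = spearman(mcom, biology)  # a strong positive association
```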
One hundred and twenty-seven students were accepted to medical school at the
University of the Witwatersrand for 2010 (Table 1).
Most students were female (62%); 42.5% were of Black ethnicity, 33% White,
15% Indian and 9.5% Coloured. Females and males had similar scores for the
medical pre-admission tests (Composite Index (CI), PTEEP, MACH, MCOM and
SRT) with p>0.05. Students from private schools performed better than those from
public schools in all tests except the SRT (Table 1), suggesting that school
background factors continue to affect the academic performance of entry-level
medical school students.
Table 2 shows the results of the June and November examinations.
No differences between public and private schools were found in the June
examinations. In contrast, in the November examinations, students from private
schools achieved higher marks than students admitted from public schools in the
subjects of SCMD and Psychology (p<0.05). After adjusting for gender and school
background, the marks for Chemistry, SCMD and Sociology increased in the November
examinations (p<0.001). However, in Biology a small decrease in the marks was
observed (p=0.03).
Strong positive correlations were noted between the Composite Index and the marks
obtained by the students in their June and November examinations (Spearman
correlation coefficients (rs) between 0.42 and 0.79, with p<0.0001). Furthermore,
similar correlation coefficients were obtained after adjusting for school background
(Table 3).
Table 4 depicts student performance according to each subject undertaken using the
pre-admissions test (PTEEP, MACH, MCOM and SRT) and adjusting for gender and
schooling background.
Higher marks in the mid-year and year-end examinations were explained by better
results in the MCOM test across the different subjects (partial r2 = 0.20 to 0.25,
p<0.0001) and in the PTEEP (partial r2 = 0.04 to 0.25, p<0.05) in June. Similar
findings were observed in November (Table 4).
The combined pre-admission test (CI) appeared to be the most important predictor of
the mid-year and year-end marks of the majority of the subjects. For the June
examinations, the CI explained 32%, 20% and 32% of the variance in the marks
obtained by the medical students in Biology, Chemistry and Psychology respectively
(p<0.0001). In addition, at the year-end examinations the Composite Index predicted
the scores achieved in Biology, Physics, Sociology and SCMD and Psychology
(p<0.0001), but not Chemistry. MCOM exhibited an independent association with
November Biology marks (partial r2 = 0.02, p=0.03) alongside the CI. For mid-year
Physics, by contrast, MCOM scores alone accounted for 20% of the variance in the
marks (p<0.0001). MCOM scores also predicted the November examination results
for Biology and SCMD (p=0.03). The Academic Language test (PTEEP) scores were
the sole predictor of the June Sociology marks (partial r2 = 0.22, p<0.0001) and
explained only 2% of the variance in the Psychology scores (p=0.03).
The present study attempted an investigation of a single overarching question: to
what extent are scores on the HSPTs as individual tests and as a combined,
composite measure, associated with academic performance of first-year students in
key medical school courses at mid-year and year-end? This overarching question
was approached from four angles: (1) an investigation of the extent to which
differences in key demographic variables (gender and school background) were
associated with differences in mean levels of achievement on pre-admissions tests
and in key first-year courses; (2) the extent to which these demographic differences
(if any) were still visible in academic performance at the end of the first year of study;
(3) an investigation of correlations – adjusted for differences in school background –
between a composite score on the pre-admissions tests and academic performance
at mid-year and year-end; and (4) regression analyses – adjusted for gender and
school background variables – to determine the contributions of pre-admissions tests
individually and as a composite towards variation in academic performance in key
courses at mid-year and at year-end.
The results of the present study point to the importance of selection criteria using the
HSPTs as an additional tool in predicting academic success in health sciences at the
University of the Witwatersrand. Our study initially compared the performance of
students admitted from public versus private schools in their achievements on the
HSPTs. The results indicate no gender difference in their performance across the
various assessments. However, when the analysis was performed on admitted
students from public and private schools, students from private schools performed
significantly better than their public school counterparts in the assessments on the
academic language proficiency (PTEEP) test and their overall composite scores
were also significantly higher (p<0.0001; Table 1). Whilst students from private
schools still tended to perform better in the MACH and MCOM assessments, these
differences did not reach statistical significance.
These findings confirm and augment those reported by Cliff and Hanslo (2009) in
suggesting that pre-admission tests and the PTEEP as a test of academic language
proficiency and academic literacy in particular, have predictive value in underscoring
an ‘advantage’ on entry to higher education that students from private schools have
over students from public schools. The effects of language and academic literacy on
student performance at university are well documented (Higgins-Opitz and Tufts
2014; Higgins-Opitz et al. 2014; Fleisch, Schöer and Cliff 2015). Furthermore,
studies have also shown that school-leavers embarking on university studies are
generally inadequately prepared to cope with the language-of-instruction demands of
studies in higher education (Ramukumba and Gravett 2004; Cliff 2014 and 2015).
The results of the present study support the notion of using scores, such as those
achieved on the PTEEP, as indicators of likelihood of succeeding at university.
Further evidence in support of using PTEEP as an additional tool for admission
purposes is provided by the recent work of Mashige, Rampersad and Venkatas
(2014), who demonstrated a weak correlation between matric English scores and
first-year performance in all subjects.
The present study, and others referred to in the previous paragraph, also add weight
to the importance of focusing on literacy as part of the disciplinary curriculum.
Performance on tests such as the PTEEP and the MCOM is strongly influenced by
the language proficiency and literacy-laden nature of the tests themselves, and the
academic contexts that they simulate. The present study indicates that a ‘conventional’
curriculum may not necessarily be sufficient to address the language and literacy
needs of medical school students, especially those from English Second Language
backgrounds, if these students are to be enabled to ‘overturn’ anticipated
relationships between poor entry-level test performance and academic
underperformance during and at the end of first-year studies.
In addition, data from Table 2 suggest that residual effects of the ‘advantage’ private
school background students have over public school background students in terms of
their readiness to cope with the demands of higher education study remain visible
right through the first year of study. Students from private schools also performed
better than their public school counterparts in SCMD and Psychology. The points
made in the previous paragraph about the need to address language and literacy
demands alongside conventional curriculum demands remain apposite.
Finally, we return to the argument about the use of tests such as the HSPTs as an
additional tool in the selection of medical school students (and, by implication,
students for other academic programmes). We believe the findings of this study
emphasise the complementary value of the HSPTs in identifying and understanding
variation in academic readiness of medical school students that is not necessarily
visible on the basis of school-leaving results alone. Many applicants to medical
schools at the University of the Witwatersrand (and other South African medical
schools and equally high-demand academic programmes) obtain equally outstanding
school-leaving examination results, which makes it extremely difficult to make
selection decisions amongst these applicants, who exhibit so little variation in
their school-level academic achievement. Furthermore – as
criterion-referenced assessments with the assessment ‘target’ being readiness to
cope with first-year academic literacy, mathematical thinking and scientific reasoning
demands in the medium-of-instruction – the HSPTs have value in identifying the
ability of applicants to cope with their study programmes that is not visible in the
school-leaving examination results. Increasingly (as pointed out earlier in this paper),
school-leaving examination results have been difficult to interpret: the diverse
educational backgrounds of applicants make it difficult to establish what
school-leaving scores tell us about what applicants know and can do.
The constructs assessed by the HSPTs
produce additional academic readiness information against which the school-leaving
examination results can be interpreted – and on the basis of which selection
decisions can be made.
We believe that the present study also carries implications for the selection of
students from educationally advantaged (well-resourced) and educationally
disadvantaged school backgrounds. For students from educationally advantaged
backgrounds, we believe results from tests such as the HSPTs by and large confirm
the beneficial effects of well-resourced schooling – but the results remain useful at
the level of individual applicants about whom selection decisions need to be made.
Nonetheless, HSPT results still provide important information to selection committees
about the academic readiness (and literacies) of applicants from advantaged
backgrounds. For students from educationally disadvantaged backgrounds, HSPT
scores provide alternate academic readiness information to school-leaving
examination results in many cases (consonant with the findings in the Cliff and
Hanslo, 2009, study), particularly in relation to the ability of these students to cope
with the medium-of-instruction demands of tertiary study. For students from
disadvantaged backgrounds, the tests provide important information about the extent
to which these students will cope with their studies in, for example, an English
medium-of-instruction teaching environment and about the extent and kind of
academic support and curriculum responsiveness that might be indicated if such
students are to be successful. From a selection point of view, the HSPTs act as
mechanisms in this instance for making judgments about the level of academic
support such students will require once selected. Findings from this study confirm that – in the
presence of conventional curriculum provision (such as that provided by the courses
that formed the focus of this study) – the effects of (disadvantaged) educational
background remain visible through the first year of study. The use of HSPT
information provides a framework for the development of explicit or additional support
aimed at addressing the needs of students from disadvantaged backgrounds.
In a South African context, demographic factors such as school background or
population group continue to play an important part in understanding talent and
achievement. Initially, an unadjusted analysis was performed which did not consider
race or gender. Against the diverse educational and socio-economic backgrounds
which South African tertiary institutions face, the same data were then adjusted to
incorporate the effects of race and gender on the same results. This study was also
limited to the student pool at the University of the Witwatersrand and to the curricula
of the final high school year (2009) and the first year of university (2010); this
limitation becomes relevant when the same parameters are studied over a longer
time span during which the curricula change in line with desired educational
outcomes.
All statistical analyses were undertaken by Professor Elana Libhauber of the
University of the Witwatersrand, who is gratefully acknowledged.
AARP 2004. Report to the Health Sciences Consortium on the use of the Health
Sciences Placement Tests in 2004. AARP Research Report IV. Academic
Development Programme, University of Cape Town.
Al Alwan, I., M. Al Kush, H. Tamim, M. Magzaub and M. Elzubeir 2013. Health
Sciences and Medical College Pre-Admission Criteria and Prediction of In-Course
Academic Performance: a Longitudinal Cohort Study. Advances in Health Sciences
Education. 18: 427 – 438.
Basco, W.T., D.P. Way, G.E. Gilbert and A. Hudson 2002. Predicting Step I Success:
Undergraduate Institutional MCAT scores as Predictors of USMLE Step I
Performance. Academic Medicine. 77 (10) October Supplement: S13 – S16.
Basco, W.T., C.J. Lancaster, G.E. Gilbert, M.E. Carey and A.V. Blue 2008. Medical
school application interview score has limited predictive validity for performance on a
fourth year clinical practice examination. Advances in Health Sciences Education. 13:
151 – 162.
Bore, M., D. Munro and D. Powis 2009. A Comprehensive Model for the Selection of
Medical Students. Medical Teacher. 31: 1066 – 1072.
Cliff, A. 2014. Entry-level students’ reading abilities and what these abilities might
mean for academic readiness. Language Matters. 45: 313 – 324.
Cliff, A. 2015. The National Benchmark Test in Academic Literacy: How might it be
used to support teaching in higher education? Language Matters. 46: 3 – 21.
Cliff, A. and M. Hanslo 2009. The Design and Use of 'Alternate' Assessments of
Academic Literacy as Selection Mechanisms in Higher Education. Southern African
Linguistics and Applied Language Studies. 27 (3): 265 – 276.
Cliff, A. and E. Montero 2010. The balance between excellence and equity on
admission test: contributions of experiences in South Africa and Costa Rica. Ibero-
American Journal of Educational Evaluation. 3 (2): 8 – 28.
Cliff, A. and N. Yeld 2006. Domain I – Academic Literacy. In Access and Entry Level
Benchmarks: The National Benchmark Tests Project, ed. H.Griesel, 19 – 27.
Pretoria: Higher Education South Africa.
De Clercq, L., S-A. Pearson and I.E. Rolfe 2001. The relationship between previous
tertiary education and course performance in first year medical students at
Newcastle University, Australia. Education for Health. 14 (3): 417 – 426.
Edwards, J.C., C.L. Elam and N.E. Wagoner 2001. An Admission Model for Medical
Schools. Academic Medicine. 76: 1207 1210.
Elam, C.L. and M.M. Johnson 1992. Prediction of Medical Students’ Academic
Performances: Does the Admission Interview Help? Academic Medicine. 67 (10)
October Supplement: S28 – S30.
Emery, J.L. and J.F. Bell 2009. The Predictive Validity of the BioMedical Admissions
Test for Pre-clinical Examination Performance. Medical Education. 43: 557 564.
Ferguson, E., D. James and L. Madeley 2002. Factors Associated with Success in
Medical School: a Systematic Review of the Literature. British Medical Journal. 324:
952 957.
Fleisch, B., V. Schöer and A. Cliff 2015. When Signals are Lost in Aggregation: A
Comparison of Language Marks and Competencies of Entering University Students.
South African Journal of Higher Education. 29 (5): 156 – 178.
Higgins-Opitz, S.B. and M. Tufts 2014. Performance of first-year health sciences
students in a large, diverse, multidisciplinary, first-semester, physiology service
module. Advances in Physiology Education. 38: 161 169.
Higgins-Opitz, S.B., M. Tufts, I. Naidoo and S. Essack 2014. Perspectives of student
performance in the Health Sciences: How do physiology and professional modules
compare? South African Journal of Higher Education. 28 (2): 436 – 454.
James, D. and C. Chilvers 2001. Academic and Non-Academic Predictors of
Success on the Nottingham Undergraduate Medical Course 1970 – 1995. Medical
Education. 35: 1056 1064.
Johnson, M., C. Elam, J. Edwards, D. Taylor, C. Heldberg, R. Hinkley and R.
Comeau 1998. Predicting Performance and Satisfaction: Beyond the Crystal Ball.
Academic Medicine. 73 (10): S41 – S43.
Kay-Lambkin, F., S-A. Pearson and I. Rolfe 2002. The influence of admissions
variables on first year medical school performance: a study from Newcastle
University, Australia. Medical Education. 36: 154 159.
Lumb, A.B. and A. Vail 2004. Comparison of academic, application form and social
factors in predicting early performance on the medical course. Medical Education.
38: 1002 1005.
Mashige, K.P., N. Rampersad and I.S. Venkatas 2014. Do National Senior
Certificate results predict first-year optometry students’ academic
performance at university? South African Journal of Higher Education. 28
(2): 550 – 563.
McManus, I.C., E. Ferguson, R. Wakeford, D. Powis and D. James 2011.
Predictive Validity of the Biomedical Admissions Test: An evaluation and
case study. Medical Teacher. 33: 53 57.
McManus, I.C. 2002. Medical School Applications – A Critical Situation. British
Medical Journal. 325: 786 787.
Powis, D. 2010. Improving the Selection of Medical Students. Non-Academic
Personal Qualities Should be Taken into Account. British Medical Journal. 340: 432
Ramukumba, T.A. and S. Gravett 2004. Factors affecting academic performance
among first year occupational therapy students. South African Journal of
Occupational Therapy. 34: 2 6.
Ruscingo, G., G. Pinto Zipp and V. Olson 2010. Admission Variables and Academic
Success in the First-year of the Professional Phase in a Doctor of Physical Therapy
Program. Journal of Allied Health. 39 (3): 138 – 142.
Table 1: Composite index and PTEEP, MACH, MCOM and SRT scores on HSPTs for the 2010 intake per school type: public (n = 86; 68%) and private (n = 41; 32%). (Score values were lost in extraction.)
Table 2: Subject results for mid-year and final year 2010 per school type (Psychology: year-end results only available). Data are presented as mean ± SD; ^p < 0.05 between public and private schools. (Cell values were lost in extraction.)
Table 3: Correlations of the Composite Index with mid-year and final year subject results: unadjusted Spearman r, and Spearman r adjusted for school background; subjects include Fundamentals of Medical and Clinical Sciences. (Correlation values were lost in extraction.)
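Table 3 reports Spearman rank correlations between the Composite Index and subject results. As an illustration of the statistic itself, the following is a minimal pure-Python sketch; the MCOM scores and Biology marks below are hypothetical, not the study's data.

```python
# Spearman's rank correlation: Pearson correlation computed on the ranks.
# All data here are hypothetical, for illustration only.

def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[indexed[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's r of two equal-length sequences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical MCOM scores and mid-year Biology marks for 8 students
mcom = [55, 62, 48, 71, 66, 59, 75, 52]
biology = [58, 65, 50, 70, 68, 57, 78, 54]
r = spearman(mcom, biology)
```

With no ties, this agrees with the textbook formula 1 − 6Σd²/(n(n² − 1)); adjusting for school background, as in the table's second column, would additionally require a partial-correlation step.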
Table 4: Stepwise regression analysis for mid-year and final year subject results

Subject | Model | ß coefficient ± SEM
Biology - June | 1. … | 2. Composite index: …
Biology - November | 1. MCOM: 0.37 ± 0.07 | 2. Composite index: …; MCOM: 0.16 ± 0.07
Physics - June | 1. MCOM: 0.48 ± 0.09 | 2. MCOM: 0.48 ± 0.09
Physics - November | 1. MCOM: 0.37 ± 0.05 | 2. Composite index: …; MCOM: 0.21 ± 0.09
Chemistry - June | 1. MCOM: 0.34 ± 0.06 | 2. Composite index: …
Chemistry - November | 1. MCOM: 0.31 ± 0.06 | 2. MCOM: 0.31 ± 0.06
Sociology - June | 1. PTEEP: 0.22 ± 0.03 | 2. PTEEP: 0.22 ± 0.03
Sociology - November | 1. MCOM: 0.07 ± 0.03 | 2. Composite index: …
Psychology - June | 1. MCOM: 0.24 ± 0.07; PTEEP: 0.22 ± 0.09 | 2. Composite index: …
Psychology - November | 1. PTEEP: 0.16 ± 0.05; MCOM: 0.18 ± 0.04 | 2. Composite index: …; PTEEP: 0.11 ± 0.05
Fundamentals of Medical and Clinical Sciences (SCMD) - November (only) | 1. MCOM: 0.37 ± 0.04 | 2. Composite index: …; MCOM: 0.16 ± 0.07

(Partial r² values and the Composite Index coefficients were lost in extraction and are shown as ….)
1. Individual test scores (PTEEP, MCOM, MACH and SRT), gender and schools were included in the model for each subject.
2. Individual test scores (PTEEP, MCOM, MACH and SRT), gender and schools, with the addition of Composite Index scores, were included in the model for each subject.
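A stepwise regression of the kind summarised in Table 4 enters predictors one at a time, keeping at each step the one that most improves the fit. The sketch below is a simplified forward-selection illustration, not the authors' procedure: it uses ordinary least squares with an R²-gain entry criterion, and the data and variable names (`mcom`, `ptee`) are simulated, not the study's dataset.

```python
# Forward stepwise selection sketch on simulated (hypothetical) data.
import random

def ols_r2(X, y):
    """R^2 of the least-squares fit y ~ X (X includes an intercept column)."""
    n, p = len(X), len(X[0])
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination with pivoting
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda rr: abs(A[rr][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for rr in range(col + 1, p):
            f = A[rr][col] / A[col][col]
            for c in range(col, p):
                A[rr][c] -= f * A[col][c]
            b[rr] -= f * b[col]
    coef = [0.0] * p
    for rr in reversed(range(p)):
        coef[rr] = (b[rr] - sum(A[rr][c] * coef[c] for c in range(rr + 1, p))) / A[rr][rr]
    yhat = [sum(X[i][j] * coef[j] for j in range(p)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def forward_stepwise(predictors, y, min_gain=0.01):
    """Repeatedly add the predictor giving the largest R^2 gain above min_gain."""
    chosen, r2 = [], 0.0
    while True:
        best, best_r2 = None, r2
        for name in predictors:
            if name in chosen:
                continue
            cols = chosen + [name]
            X = [[1.0] + [predictors[c][i] for c in cols] for i in range(len(y))]
            cand = ols_r2(X, y)
            if cand > best_r2 + min_gain:
                best, best_r2 = name, cand
        if best is None:
            return chosen, r2
        chosen.append(best)
        r2 = best_r2

random.seed(1)
n = 60
mcom = [random.gauss(60, 10) for _ in range(n)]   # hypothetical test scores
ptee = [random.gauss(55, 8) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]    # irrelevant predictor
mark = [0.6 * m + 0.2 * p + random.gauss(0, 5) for m, p in zip(mcom, ptee)]
selected, r2 = forward_stepwise({"MCOM": mcom, "PTEEP": ptee, "noise": noise}, mark)
```

Because the simulated mark loads most heavily on `mcom`, that predictor enters the model first, mirroring the dominance of MCOM in the table.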
... A successful first year promotes the development of a positive attitude, self-confidence, and a commitment to their studies [2]. Admission tests are used to identify students with cognitive abilities to cope with the intellectual demands of the medical programme and non-cognitive attributes to assimilate the ethical, inter-relational, and motivational challenges [3,4]. All nine South African medical schools use the results for selected subjects from the National Senior Certificate (NSC) in their selection process. ...
... Currently, 40% of the places available in the Bachelor of Medicine and Bachelor of Surgery (MBBCh) degree at Wits University are reserved for top-performing students. The remaining 60% is divided equally into three categories: top-performing rural students, top-performing students from quintile 1 and 2 schools, and top-performing Black and Coloured students. Wits University introduced rurality as a selection criterion in 2015. ...
... The teaching programme for the six-year MBBCh degree at Wits University is divided into clinical and preclinical years [4]. The medical curriculum is shown in Table 4. ...
Background: South African medical schools use the results of the National Senior Certificate (NSC) examination for selecting students. Five of the nine medical schools also use the National Benchmark Test (NBT). The University of the Witwatersrand weights the NSC and NBT results equally in the selection process. This study addresses the predictive validity of the NBT and NSC for academic success. The association between the NBT proficiency levels and students' progression outcomes was also investigated. Methods: Data obtained from the University's Business Intelligence Services for 1652 first-year medical students from 2011 to 2017 were analysed using hierarchical regression models and chi-square tests. The three NBT domains and four of the NSC subjects were the independent variables in the regression models, with the first-year grade point average for students who passed the first year as the dependent variable. The NBT performance levels and first-year progression outcome (passed, failed, or cancelled) were used in the chi-square analysis. Frequency tables were used to describe the cohort's demographic details and NBT results. Crosstabs were used to analyse student performance according to the school quintile system. Results: The three NBT domains explained 26% of the variance, which was statistically significant, R2 = 0.263, F (3, 1232) = 146.78, p < 0.000. When the NSC subjects (Life Sciences, English, Mathematics, and Physical Science) were added to the regression equation, they accounted for an additional 19% of the variance, R2 = 0.188, F (3, 1229) = 137.14, p < 0.000. All independent variables contributed 45% of the variance, R2 = 0.451, F (6, 1229) = 166.29, p < 0.000. A strong association between the NBT proficiency levels and first-year students' progression outcomes was observed.
Conclusion: The NBT results, when weighted equally to the NSC results, explained more variance than the NSC alone in predicting academic success in the first year of the medical degree. The NBT should not only be used for selecting medical students but should also be used to place students with lower entry-level skills in appropriate foundation programmes and to identify students who are admitted to regular programmes who may need additional support.
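The blockwise logic of a hierarchical regression of this kind (a first block of predictors, then the increment in R² when a second block is added) can be sketched with the closed-form R² for one and two standardised predictors. The correlations below are hypothetical, chosen only to echo the reported pattern; they are not the study's figures.

```python
# Hierarchical (blockwise) R^2 sketch with standardised predictors.
# All correlation values are hypothetical, for illustration only.

def r2_one(r_y1):
    """R^2 when a single standardised predictor x1 is in the model."""
    return r_y1 ** 2

def r2_two(r_y1, r_y2, r_12):
    """R^2 with two standardised predictors, from the pairwise correlations:
    (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2)."""
    return (r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12 ** 2)

# Hypothetical correlations: outcome vs an NBT-style composite, outcome vs an
# NSC-style composite, and between the two composites.
r_y_nbt, r_y_nsc, r_nbt_nsc = 0.51, 0.55, 0.40

block1 = r2_one(r_y_nbt)                      # first block entered alone
full = r2_two(r_y_nbt, r_y_nsc, r_nbt_nsc)    # both blocks in the model
delta = full - block1                         # extra variance from block 2
```

Because the two composites are correlated, the increment `delta` is smaller than the second predictor's R² on its own, which is exactly why the increment, not the marginal R², is reported in hierarchical analyses.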
... Until five years ago, five out of nine Faculty of Health Sciences/medical schools in South Africa used the NSC and the NBT to select students for the medical programmes [3,12,13]. These medical schools use different weightings of these two components to select students, and the compulsory NSC subjects are required at specified levels of achievements [3]. ...
... Considering that academic difficulties tend to manifest in the early years of a medical programme, this study investigated the predictors of academic difficulties in the first three years of study. The medical curriculum comprises two preclinical years and four clinical years [13], with the introduction of the basic sciences during the first two years [31]. Medical students begin clinical training in the third and fourth year, and the fifth and sixth are full clinical years in which the students are placed in clinical clerkships in academic hospitals and in community and rural sites [31]. ...
Background The National Benchmark Test (NBT) that determines academic readiness is widely used by Faculties as an additional measure to select students for the study of medicine. Despite this, many students continue to experience academic challenges that culminate in delayed graduation and sometimes academic exclusion or discontinuation of studies. Aim This study aimed to understand academic and non-academic variables linked with academic difficulties in the first three years of medical education. Methods The study sample consisted of six cohorts of medical students for the period 2011 to 2016 (n = 1392). Only the first three years of the six-year medical programme were selected for analysis. Survival analysis and Cox Proportional Hazard (CPH) were used to identify academic and non-academic variables associated with academic difficulties. Results A total of 475 students (34%) experienced academic difficulty; 221 (16%) in the first year of study, 192 (14%) in the second year and 62 (5%) in the third year of study. The results show that Intermediate Upper, Lower and Basic levels for all NBT domains, living in university residence, rurality and male gender were risk factors for academic difficulty. Conclusion In mitigating these factors, the NBT must inform the type of support programmes to augment the students' skills and promote academic success. Additionally, existing support programmes should be evaluated to ascertain if they reach students at risk and whether participating in these programmes yields positive academic outcomes.
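The survival-analysis framing above treats first academic difficulty as the event and a year completed without difficulty as censoring. As an illustration of that framing (not the study's model, which used Cox regression), a minimal Kaplan-Meier sketch on hypothetical data:

```python
# Kaplan-Meier estimate of "surviving" (no academic difficulty) through years 1-3.
# All data are hypothetical, for illustration only.

def kaplan_meier(times, events):
    """Return {t: S(t)} at each event time; events[i] = 1 means the event
    (academic difficulty) occurred at times[i], 0 means censored then."""
    s = 1.0
    curve = {}
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1 - d / at_risk
        curve[t] = s
    return curve

# 10 hypothetical students: year of first difficulty (event = 1),
# or last year observed without difficulty (event = 0, censored)
times = [1, 1, 2, 3, 3, 3, 2, 3, 3, 3]
events = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
surv = kaplan_meier(times, events)
```

A Cox proportional hazards model then relates this hazard to covariates such as NBT level, residence, rurality and gender; a full implementation would typically use a survival library rather than this hand-rolled estimator.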
... Even traditional advocates of selection tests in medical schools who have always argued for the use of admission tests as filters to select-in students who stand the chance to succeed (Al Alwan et al. 2013) and select-out the underprepared are beginning to change their views. General pre-admission tests that were used as a diagnostic benchmark to predict student academic performance at an entrylevel do not fully explain the student potential without factoring in various academic and nonacademic factors that shape student performance (Wadee and Cliff 2016). Schwartz and Washington (1999) also suggest that, although selection tests became essential elements in university admissions, as tools they add little to predicting the success of students at university. ...
... Most studies done on student support tend to focus mainly on teachers, knowledge content and pedagogical interventions (Elen et al. 2007) or module scores to find solutions to student learning challenges (Wadee and Cliff 2016). The challenge with isolated student grades-linked data analytics is that it often leads to interventions that come too late in the learning process, designed to look for solutions to fix the student's cognitive skills, the curriculum or the environment. ...
This article argues for the establishment of big data early alert systems that inform data-driven student support mechanisms in universities. It proposes a guiding framework for integrated big data to enhance student success premised on a comprehensive understanding of students as people in the world who arrive at universities with various complex life problems that may disrupt their learning opportunities. It argues that various data components should be linked together to foster coherence and seamlessness in understanding student socio-economic and academic needs to develop responsive learning-enhancement intervention programmes. This is based on action research conducted through projects launched at the University of Witwatersrand in 2015 and at the University of Zululand in 2018. The systems were launched, and data was collected using the proposed student performance tracking system. This article explores conceptual and theoretical underpinnings of establishing big data-based student support systems in South African universities. A big student data model is proposed for wider use in South African universities. Keywords: big data, data analytics, student performance, student success, undergraduate student experience, responsive academic support, graduate attributes, first year experience, student retention, graduateness.
... In our study, the assessment achieved a modest reliability of 0.64, which is lower than assessments that examine student preparedness. [4,7] It can be attributed to the fact that we purposely selected key themes that served as indicators of entry-level foundational knowledge. This resulted in a limited number of items generated, and the associated shorter length of test administration most likely played a part in the lower reliability that is usually associated with formative assessments (0.70 -0.79). ...
BACKGROUND. Universities in South Africa use the Grade 12 school-leaving examinations to measure whether students have the knowledge and skills needed to enter tertiary-level education. However, there is much discussion on the effectiveness of these assessments to measure the preparedness of students for their first year at university. To facilitate the appropriate teaching and learning of anatomy and physiology, there is a need to assess students' baseline knowledge of life sciences at entry to their first year at university. OBJECTIVES. To develop and refine an anatomy and physiology foundational knowledge assessment (A&P foundational knowledge assessment), which looks back to the content of the Grade 12 life sciences curriculum and forward to the first-year anatomy and physiology curricula. METHODS. Three hundred and seventy-one first-year students (occupational therapy, physiotherapy and MB ChB) wrote the A&P foundational knowledge assessment. Classic item and test analysis was done using Iteman 4.3 software (Assessment Systems Corp., USA). RESULTS. The Kuder-Richardson formula 20 (KR-20) reliability score, which ranges from 0 to 1, was 0.64 for all the students. For MB ChB students, the KR-20 value was lower (0.57) compared with that for occupational therapy and physiotherapy students (0.66). The KR-20 scores for the 21 physiology and 16 anatomy items were 0.48 and 0.57, respectively. A KR-20 score of >0.50 is considered acceptable. The mean difficulty index (range 0 - 1) for physiology was 0.60, and the mean discrimination index was 0.15. For anatomy, the mean item difficulty index was 0.57 and mean discrimination index was 0.21. CONCLUSION. Based on the acceptable reliability value, the assessment was shown to be an effective instrument to measure students' foundational knowledge in human anatomy and physiology, which is part of life sciences.
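The KR-20 statistic quoted above is computed from dichotomous (0/1) item responses as k/(k − 1) × (1 − Σpq/σ²), where p is each item's proportion correct, q = 1 − p, and σ² is the variance of examinees' total scores. A small sketch on hypothetical responses; note how a very short test, as the abstract observes, tends to yield low reliability:

```python
# KR-20 reliability for dichotomous items; data are hypothetical.

def kr20(responses):
    """KR-20 = k/(k-1) * (1 - sum(p*q) / var(total)) for 0/1 item responses,
    where responses is a list of per-examinee rows of item scores."""
    n = len(responses)       # examinees
    k = len(responses[0])    # items
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n         # item proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# 6 hypothetical examinees x 5 items
resp = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
]
rel = kr20(resp)
```

With only five items the toy test's KR-20 comes out well below the 0.64 reported for the real 37-item assessment, consistent with reliability rising as test length and item consistency increase.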
... (verbal and numerical reasoning) more consistent predictors of academic performance in comparison with personal competencies (coping with pressure, adapting to change, achieving personal goals, and working with people). Wadee and Cliff (2016) also found the preadmissions test of learning potential to be a better predictor of the potential of students to cope with their programs of study than the results of the school-leaving test. Finally, showed that in terms of student academic performance, Grade 12 National Senior Certificate results were a weak predictor of academic success, while grades weighted by module credits were a statistically better predictor of performance and throughput. ...
Empirical evidence on the relationship between student funding and academic performance is unclear. Some studies have found a positive relationship, some have suggested a negative one, while others maintain that there is no relationship between them. Acknowledging that a range of factors, other than funding, impact on student success, in this paper, we aim to contribute to a small, but emerging, body of literature on the relationship between student funding and academic performance, proxied by the average individual academic mark for the year. We applied descriptive and inferential statistics to a dataset of 29,619 students registered at two South African universities for the 2018 academic year. The results highlight that in an examination of the impact of being funded by the National Student Financial Aid Scheme (NSFAS) in a bivariate context, it is possible to find a negative relationship with performance. However, at an aggregate level and controlling for the impact of other variables, a positive (albeit weak) and statistically significant correlation between being NSFAS funded and average academic performance emerges.
... Tests of Academic Literacy based on the specifications in Table 4.1 have been widely used in Higher Education in South Africa; have been demonstrated to be an indication of students' academic potential (cf. Yeld & Haeck, 1997); have been used as mechanisms for widening access to students from poorly resourced educational backgrounds (cf. the work of the Alternative Admissions Research Project at the University of Cape Town); have been shown to have diagnostic and predictive value (see, for example, Visser & Hanslo, 2005); and have been useful as selection and placement mechanisms that yield alternate information about students' ability and capacity Wadee & Cliff, 2016). ...
South African universities face major challenges in meeting the needs of their students in the area of academic language and literacy. The dominant medium of instruction in the universities is English and, to a much lesser extent, Afrikaans, but only a minority of the national population are native speakers of these languages. Nine other languages can be media of instruction in schools, which makes the transition to tertiary education difficult enough in itself for students from these schools. The focus of this book is on procedures for assessing the academic language and literacy levels and needs of students, not in order to exclude students from higher education but rather to identify those who would benefit from further development of their ability in order to undertake their degree studies successfully. The volume also aims to bring the innovative solutions designed by South African educators to a wider international audience.
... have been widely used in Higher Education in South Africa; have been demonstrated to be an indication of students' academic potential (cf.Yeld and Haeck, 1997); have been used as mechanisms for widening access to students from poorly-resourced educational backgrounds (cf. the work of the Alternative Admissions Research Project at the University of Cape Town); have been shown to have diagnostic and predictive value (see, for example,Visser & Hanslo, 2005;Cliff et al., 2007); and have been useful as selection and placement mechanisms that yield alternate information about students' ability and capacity(Cliff & Hanslo, 2009;Wadee & Cliff, 2016).Incorporated into the design of such tests are elements of task mediation (such as text as teaching mechanism; examples of tasks given and explained in the test; tasks being made increasingly complex). What the present chapter attempts is an explication of the use of further kinds of mediation, in particular the use of qualitative feedback to test-takers on a selected number of their responses and the use of test questions where an answer is provided to the test-taker, together with an explanation of what their possible response to the question might imply about their understanding of the task. ...
This chapter considers the possibilities of using mediation and feedback in a standardised test of academic literacy in an attempt to surface the potential to learn for test-takers from educationally disadvantaged backgrounds. Theoretically, the chapter is set against the background of international studies exploring the assessment of learning potential from a number of complementary perspectives. In essence, the design and research work reported here sets out to determine whether the use of mediation and feedback in a static test artefact enables test-takers to demonstrate learning, i.e. to respond to progressively more complex test tasks than might have been possible without the mediation and feedback. The construct under consideration here is the potential to demonstrate academic literacy: the ability of test-takers to cope with conventional academic reading, writing and reasoning tasks they will face on entry into Higher Education across a range of disciplinary contexts. A significant component of the chapter is devoted to design considerations for a test of academic literacy that has as its aim the testing of potential. The chapter situates the foregoing in the context of research studies that have been conducted in South African and Costa Rican Higher Education contexts. Data from the South African context specifically is presented and analysed to shed light on the effects of test item mediation on subsequent test performance. Concluding discussion considers the promise and challenge of using mediation in a test-taking situation to enable test-takers to learn – especially in contexts of learner diversity.
... In South Africa (SA), 17 of 26 public higher education institutions currently use the National Benchmark Tests (NBTs), alongside the National Senior Certificate and other high school-leaving examination results, to admit students who are likely to succeed at university. [23][24][25][26][27][28][29] The NBTs are a set of criterion-referenced pre-university admission aptitude tests, similar to pre-admission aptitude tests written in the UK (UK Clinical Aptitude Test), [30][31][32] the USA (North American Medical College Admission Test), [33,34] Australia (Australian Graduate Medical School Admissions Test) [35] and other countries such as Chile, Pakistan and Saudi Arabia. [36][37][38] These tests provide information about school-leavers that is supplementary to their pre-university academic achievements. ...
Background: Strong generic learning skills may improve academic performance at medical school. Studies evaluating the generic learning skills proficiency of medical students use self-reported data. It is not known whether self-evaluation of discipline-independent skills exhibits the same problems of widely variable accuracy as self-assessment of discipline-related skills. Objective: To investigate whether the self-reported generic learning skills proficiency of medical school entrants was related to three objective measures of performance: pre-university admission aptitude-test scores, information technology (IT) proficiency on entry and early academic performance at university. Methods: This prospective study used a previously validated 31-item questionnaire to document the self-reported proficiency of medical school entrants (2011 - 2013) with regard to 6 categories of generic learning skills: information handling, technical and numeracy, computer, organisational, managing self-learning and presentation skills. The results of the questionnaire were compared with performance in pre-university admission aptitude tests, an IT placement test on entry and end-of-semester 1 examinations (after 6 months at university), which are the basis for promotion to semester 2. Results. A total of 640 of 648 (98.8%) students completed the questionnaire. Self-reported generic learning skills proficiency was found to be significantly related to pre-university admission aptitude test scores (medium effect size), IT proficiency on entry to university (large effect size) and early academic performance at university (small effect size). Academically weak students did not overestimate their skills proficiency. Conclusion. These findings support the opinion that self-reported generic skills proficiency can credibly contribute to determining the academic preparedness of medical school entrants.
Misalignment in teaching pedagogies between secondary schools and tertiary institutions has exacerbated educational disparities amongst students from different backgrounds. Given the variation in students' educational background and competencies there was a need to develop an Anatomy and Physiology (A&P) Foundational Knowledge Assessment to establish the levels of preparedness of first-year medical students. Previous work that focused on the development of the assessment showed it to be effective in measuring students' foundational knowledge in human anatomy and physiology. The aim of this study is to assess the validity of the A&P Foundational Knowledge Assessment in determining students' prior knowledge and predicting academic performance of first-year students in their anatomy and physiology studies. Three hundred and seventy first-year students, across two cohort years 2017 and 2018, completed the A&P Foundational Knowledge Assessment. Data was analysed using descriptive statistics, analysis of variance, and Pearson's correlation. Results show that for both cohorts, approximately 30% of students scored less than or equal to 55% and were potentially at risk of performing poorly in their anatomy and physiology studies. Pearson's correlation showed a significant relationship between students' performance on the foundational knowledge assessment and their anatomy and physiology assessments. For both cohorts more than 10% of students identified by the A&P Foundational Knowledge Assessment were either at risk of failing the course, entering an extended degree programme, or being excluded from the programme. Results indicate that the assessment is a good predictor for differentiating medical students' performance in first-year anatomy and physiology.
INTRODUCTION: The use of selection, diagnostic, proficiency, placement, admission, manual dexterity and aptitude tests can reportedly predict students' academic success. Predictive admission procedures help to reduce dropout rates, improve academic performance, increase success rates, and selectively exclude applicants who are unlikely to be successful in the course. There is an absence of research, however, in this area of work in Dental Technology. AIM: To examine the association between pre-admission assessments and Dental Technology students' academic performance in a South African University of Technology. DESIGN: A quantitative and cross-sectional study design was used. METHODS: The target populations were the 2018 and 2019 first-year Dental Technology students. Retrospective data extracted from academic records and programme files were statistically analysed to measure the correlations against students' academic performance. RESULTS: Despite there being no significant differences between pre-admission tests and students' academic performance, there were significant positive correlations between first-year university subjects. CONCLUSIONS: There are indications of horizontal coherence between the discipline-specific subjects in the first-year Dental Technology curriculum. Examining the association between pre-admission tests and students' academic results through to graduation, together with the horizontal and vertical alignments of all subjects in the undergraduate Dental Technology curriculum, can facilitate the learning pathways for students to succeed academically at universities.
Although English Home Language and English First Additional Language marks from the National Senior Certificate (NSC) are used for university admission in South Africa, no studies have explored their predictive value. This paper sheds light on English language marks and English language competence through a comparative analysis of NSC marks and National Benchmark Test in Academic Literacy test results for a cohort of first year education students at the University of the Witwatersrand. To provide in-depth insight, the analysis includes fine-grained analysis of specific academic language competencies. The results of the analysis in this study show that the same mark in Home Language and First Additional Language does not necessarily reflect the same level of English-language academic competence as measured by the NBT Academic Literacy test. On average, students that wrote the First Additional Language papers scored between .5 and .9 of a standard deviation below students who wrote the Home Language papers.
Two experiences are described around the challenge of maximising excellence and equity in admission to higher education. At the University of Costa Rica (UCR), a figural reasoning test is being developed and validated to measure fluid intelligence, taking as its frame of reference the concepts developed by Raymond Cattell. For its part, the University of Cape Town in South Africa applies dynamic assessment methods, in which the tests "teach" as they are taken and which are based on Vygotskian approaches. These South African instruments have already provided evidence of predictive validity in survival-analysis studies and form part of an alternative admissions programme for students from educationally disadvantaged backgrounds, many of whom belong to groups that were educationally segregated under apartheid. Whereas at the University of Costa Rica (UCR) the project is recent and still in its diagnostic and research phase, the University of Cape Town has a track record of more than 20 years. Both initiatives seek to identify more precisely students who possess academic and cognitive potential for higher education, who come from educationally disadvantaged backgrounds, and whose abilities might be underestimated if only "traditional" admission tests or assessments were used. The article does not attempt a comparative analysis to conclude which of the two approaches is more valid; rather, it seeks to illustrate two possible ways of addressing the problem of equity in admission to higher education from a scientific, and not merely political, perspective.
The National Benchmark Tests Project (NBTP) was commissioned by Higher Education South Africa and became operational in 2009. One of the main aims of the NBTP is to assess the extent to which entry-level students might be said to be ready to cope with the conventional demands of academic study in three key areas: academic literacy; quantitative literacy; and mathematics. This paper presents an analysis of the academic literacy readiness of a sample of registered students as reflected in their performance on the NBT in Academic Literacy, a standardised assessment developed in the context of the wider project. The paper presents a theoretical analysis of the construct of academic literacy as operationalised in the test. This is followed by a categorised empirical analysis of test-takers’ performance on the test, in which the levels of academic readiness of these test-takers are presented and discussed. The argument presented highlights the diverse range of academic literacy levels of entry-level students, as well as implying the teaching and learning interventions that might be necessary to improve readiness. Concluding comments argue that some groups of students may be unable to cope with conventional academic literacy demands in the absence of explicit intervention.
The National Benchmark Test in Academic Literacy is designed to assess the ability of first-year students to cope with the typical language-of-instruction, academic reading and reasoning demands they will face on entry to higher education. Drawing on quantitative data, this paper reports on the overall performance levels of a large-scale (n = 6500) national sample of test-takers who took the test as applicants for the 2013 intake into higher education. Overall test-taker performance is disaggregated by performance on sub-scales of the overall construct of academic literacy. The argument is made that the National Benchmark Test provides a framework for a nuanced and practicable understanding of test-takers’ academic literacy ‘proficiencies’. The conclusion to the paper evaluates the extent to which the test enables higher education lecturers’ greater engagement with students’ academic literacy shortcomings and with research-led information aimed at the improvement of teaching and learning.
Matriculation results have previously been used as reasonable predictors of first-year students' academic performance at university. Although there have been some improvements in access to education for many South Africans, the quality of the National Senior Certificate (NSC) introduced in 2008 remains uncertain. The purpose of the study reported on was to determine whether matriculation subject scores can be predictors of students' academic success in the first year of the Bachelor of Optometry (BOptom) programme. The files of 84 first-year optometry students who wrote the NSC examination from 2009–2011 were reviewed and their matriculation scores were recorded. These scores were compared with their results in modules in their first-year BOptom programme. There was a weak correlation between students' matriculation and first-year optometry results. Overall, the matriculation scores showed a weak correlation with both the first-semester average and overall first-year marks. Thus, the study found that NSC scores cannot be used as sole predictors of students' academic success in the first year of the BOptom programme.
Physiology has an anecdotal track record of having lower pass rates than other professional modules in the Health Sciences (HS). The aim of this study was to compare the performance and associated contributory factors of students in physiology modules with professional modules at the same level of study. This was done by way of overall pass rates and average, maximum, and minimum marks for the period 2008–2010, stratified by programme/qualification, matriculation/National Senior Certificate achievement and language. The latter two served as proxies for alternative access and previously disadvantaged students, respectively. There was a notable difference in the mean 2008–2010 pass rates of students from the different professional qualifications, and students generally performed considerably better in their professional modules than in their physiology modules. The performance in physiology modules of English first language (EFL) students was not significantly different from that of English second language (ESL) students. The implications of these findings require further discourse on, inter alia, issues around physiology teaching; student learning modes; admission criteria; student preparedness for university; and student monitoring and support mechanisms. There also needs to be greater interaction between physiologists and the health professionals involved in curriculum design.
Health Science students at the University of KwaZulu-Natal perform better in their professional modules than in their physiology modules. The pass rates of physiology service modules have steadily declined over the years. While a system is in place to identify "at-risk" students, it is only activated after the first semester. As a result, it is only from the second semester of their first-year studies onward that at-risk students can be formally assisted. The challenge is thus to devise an appropriate strategy to identify struggling students earlier in the semester. Using questionnaires, students were asked about attendance, financing of their studies, and the relevance of physiology. After the first class test, failing students were invited to complete a second questionnaire. In addition, demographic data were collected and analyzed, and correlation analyses of performance indicators were undertaken based on these data. The 2011 class comprised mainly sport science students (57%). The pass rate of sport science students was lower than that of other students (42% vs. 70%, P < 0.001). Most students were positive about physiology and recognized its relevance. Key issues identified were problems understanding concepts and terminology, poor study environment and skills, and lack of matriculation biology. The results of the first class test and final module marks correlated well. It is clear from this study that student performance in the first class test is a valuable tool to identify struggling students and that appropriate testing should be held as early as possible.
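One common way to test a pass-rate difference such as the 42% vs. 70% reported above is a two-proportion z-test. The sketch below uses assumed group sizes (a hypothetical class of 200, of whom 57% are sport science students); these are illustrative figures, not data from the study, and the study itself may have used a different test.

```python
# Two-proportion z-test sketch for comparing pass rates of two groups.
# Group sizes below are assumptions for illustration, not study data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(passes1, n1, passes2, n2):
    """Return (z, two-sided p) for H0: the two pass rates are equal."""
    p1, p2 = passes1 / n1, passes2 / n2
    pooled = (passes1 + passes2) / (n1 + n2)          # pooled pass rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical cohort: 114 sport science students (48 pass, ~42%)
# vs. 86 other students (60 pass, ~70%)
z, p = two_proportion_z(48, 114, 60, 86)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed group sizes the test reproduces a difference significant at P < 0.001, consistent with the result the abstract reports.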
In a context where applicants to higher education study vary widely in terms of their prior educational, linguistic and socio-economic backgrounds, it becomes extremely important to assess the extent to which these applicants might be said to be ready to cope with the typical academic reading and writing demands of higher education study. This assessment becomes even more crucial in a country like South Africa, where issues of equity of access, selection and redress remain a central challenge. Put simply, the challenge is to identify academically talented students from educationally diverse backgrounds, especially in cases where the educational backgrounds of these applicants may have militated against them fully demonstrating their talent in conventional (e.g. school-leaving) examinations. This article describes the theoretical basis for the development of tests of academic literacy that downplay the role of prior learning in the assessment of academic readiness. The uses of these tests as selection mechanisms complementary to conventional academic assessments are also outlined. Empirical data are presented that demonstrate associations between these tests and academic performance in higher education. Issues and challenges regarding the validity and reliability of these tests are presented, and the implications of major research findings on the tests debated and deliberated upon.
High School, Aptitude and Achievement Tests have been utilized since 2002 in Saudi Arabia for the purpose of student selection to health sciences and medical colleges. However, longitudinal studies determining the predictive validity of these so-called cognitive tests for in-course performance are lacking. Our aim was to assess the predictive validity of Saudi health sciences and medical school pre-admission selection tools for in-course performance over a three-year period; we therefore conducted a retrospective review of pre-admission (High School Test, Saudi Aptitude and Achievement tests) and in-course academic performance data (Grade Point Average, GPA) for all students enrolled in undergraduate Health Sciences Colleges and the College of Medicine, 2007–2010. Correlation and linear regression analyses were performed for the whole cohort. Data are reported on 87 of 1,905 (4.6%) students who applied to Health Sciences and Medical Colleges. The results indicate that in-course GPA scores in year three were significantly positively correlated with High School (r = 0.65; p < 0.05), Aptitude (r = 0.65; p < 0.05) and Achievement (r = 0.66; p < 0.05) selection test scores. Furthermore, the High School Exam was the best predictor of achievement in year three. Regression analysis revealed that 54% of the variance in academic performance is explained by the three test scores. The results confirmed our hypothesis that High School, Aptitude and Achievement tests are statistically predictive of academic performance in health sciences and medical colleges. Further longitudinal, national work is nevertheless required to determine the extent to which pre-admission cognitive and non-cognitive tests, socio-demographic and educational process variables predict undergraduate and postgraduate achievement and performance.
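The correlation-and-regression analysis described above can be sketched as follows. The data here are synthetic (generated to loosely resemble the reported cohort of 87 students and effect sizes), not the study's data, and the variable names are illustrative assumptions.

```python
# Sketch of a Pearson-correlation and multiple-linear-regression analysis
# on synthetic admission-test and GPA data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 87  # cohort size, as in the reported study

# Three correlated predictor scores and a year-three GPA driven by them
high_school = rng.normal(80, 5, n)
aptitude    = high_school + rng.normal(0, 4, n)
achievement = high_school + rng.normal(0, 4, n)
gpa = 0.1 * (high_school + aptitude + achievement) + rng.normal(0, 1.5, n)

# Pearson correlation of each predictor with year-three GPA
for name, x in [("High School", high_school),
                ("Aptitude", aptitude),
                ("Achievement", achievement)]:
    r = np.corrcoef(x, gpa)[0, 1]
    print(f"{name}: r = {r:.2f}")

# Multiple linear regression of GPA on all three predictors;
# R^2 is the proportion of variance explained (the study reports 54%)
X = np.column_stack([np.ones(n), high_school, aptitude, achievement])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((gpa - pred) ** 2) / np.sum((gpa - gpa.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

A least-squares fit via `np.linalg.lstsq` is used here for self-containment; the same analysis is commonly run with `scipy.stats.pearsonr` and `statsmodels` OLS, which additionally report p-values.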