Entry-level students’ reading abilities and what these abilities might mean for academic readiness
Alan Cliff
Centre for Innovation in Learning and Teaching (CILT), University of Cape Town
The National Benchmark Tests Project (NBTP) was commissioned by Higher Education
South Africa and became operational in 2009. One of the main aims of the NBTP is to assess
the extent to which entry-level students might be said to be ready to cope with the
conventional demands of academic study in three key areas: academic literacy; quantitative
literacy; and mathematics. This paper presents an analysis of the academic literacy readiness
of a sample of registered students as reflected in their performance on the NBT in Academic
Literacy, a standardised assessment developed in the context of the wider project. The paper
presents a theoretical analysis of the construct of academic literacy as operationalised in the
test. This is followed by a categorised empirical analysis of test-takers’ performance on the
test, in which the levels of academic readiness of these test-takers are presented and
discussed. The argument presented highlights the diverse range of academic literacy levels of
entry-level students, as well as implying the teaching and learning interventions that might be
necessary to improve readiness. Concluding comments argue that some groups of students
may be unable to cope with conventional academic literacy demands in the absence of
explicit intervention.
Keywords: academic literacy, academic readiness, higher education students, language
testing, reading proficiency, standardised assessment
In South Africa, there is increasing concern in the Higher Education sector about the
academic readiness of entry-level students (Scott, Yeld and Hendry 2007). A central
component of this concern relates to the extent to which these entry-level students are able to
cope with the typical academic reading demands that will be placed on them in the context of
their studies. There has been much debate in the sector relating to the extent to which
secondary schooling adequately prepares students to read in an academic manner (see,
for example, Pretorius 2002). This debate has also been extended to include considerations of
the extent to which Higher Education curricula might be said to be responsive to the reading
needs of incoming students and the extent to which these curricula enable students to develop
– sometimes both acquire and develop – the necessary reading abilities that will enable them
to successfully negotiate their studies (Griesel 2006). In part, the debate centres around
questions of whose responsibility it is to ensure that students are academically ‘literate’, i.e.
are able to engage meaningfully with a range of academic texts they will face in the context
of the disciplines they study and be in a position to make meaning of these texts. A fuller
exposition of the meaning of the concept of academic literacy will follow in the next section
of this paper.
There is currently a sense that responsibility for enabling students to be academically literate
lies both within the secondary and the Higher Education sectors (and with the student
him/herself), but the sense of who bears primary responsibility is certainly not uncontested.
Whatever the nature of this contestation – whether students are academically literate on entry
to Higher Education and whose responsibility it is to ensure this academic literacy – there
seems to be an emerging consensus that Higher Education needs, at minimum, to address the
development (or further development) of academic literacy in disciplinary contexts if the
sector is to (a) improve student graduation rates; (b) provide curricula that ensure holistic
development of students; and (c) deal meaningfully with the academic transitions between
secondary and Higher Education.
A key challenge in developing students’ academic literacy is arguably to understand what it is
that students already know and can do with regard to academic reading on entry to Higher
Education. It should follow that an understanding of what students already know and can do
provides a platform for the development of teaching and learning interventions and support
and the ability to address explicitly the disciplinary reading abilities required if students are to
become academically ‘literate’. Clearly, this is no simple task, but having some
understanding of students’ academic literacy (or literacies) represents a starting point for the
development of teaching and learning support.
The intention of the present paper is to present and analyse notions of academic literacy – or
literacies – and to reflect on and analyse the achievement of school-leavers in terms of these
academic literacies. The paper will argue that understanding what it is that school-leavers can
and cannot do in an academic literacy sense provides an important starting-point for the
development of students’ academic literacy in Higher Education. The next section of this
paper outlines the conception and construct of academic literacy.
Academic literacy as a construct
Theoretically, delineation of the construct of academic literacy that is the central
consideration in this paper can be found in the work of Yeld (2001), work which, in turn, has
its theoretical antecedents. In Yeld’s study, the construct of academic literacy is based on a
complex intersection of functional, sociolinguistic, grammatical and textual aspects of
language knowledge – derived from the work of Bachman (1990) and Bachman and Palmer
(1996). The functional aspects of language knowledge relate to a reader’s ability to make
essential meaning from text; understand and interpret communicative purpose; and make
meaning and develop an interpretation of one’s own. The sociolinguistic aspects of language
refer to the ability to penetrate and understand cultural distinctions, non-literal forms or
analogous nuance in language as these find expression in different contexts and linguistic
forms – such as word-based, image or icon-based and diagrammatic representation-based
language. Grammatical language knowledge refers to a reader’s ability to understand the
semantic (word-meanings) and syntactical (structural) basis of words and sentences. Finally,
textual language knowledge relates to the ability to understand and interpret textual cohesion
and organisation, and to be able to ‘see’ beyond immediate text, through making
extrapolations and inferences.
Clearly implied by the discussion in the previous paragraph are wider notions of academic
literacy that relate to the extent to which literacy (and academic literacy) goes beyond reading
and writing to include, for example, epistemological, digital, technological and multi-modal
literacies. Indeed, it has become important in Higher Education to consider and engage with
literacies rather than literacy – the latter conveys impressions of uni-dimensionality or
reductionism in understanding and reflection. As Paxton and Frith (2014) point out,
conceptions of academic literacies have evolved into descriptions of socially situated cultural
practice – thanks substantively to the contributions from the work of Lea and Street (1998;
2000), Lea (2004) and Lillis and Scott (2007), amongst others.
For the purposes of the present paper, however, this wider conception of literacy is applied to
the literacy practice of reading in particular. In essence, then, the construct of academic
literacy which is based on the foregoing describes a reader’s ability to understand and
integrate four dimensions of language knowledge as these apply to academic – specifically
Higher Education – settings. If readers are unable or less able to engage with an academic
context in ways which indicate that they understand and can manipulate these forms of
language knowledge (without necessarily being able to ‘label’ them), their academic literacy
is compromised to a lesser or greater extent. The elegance of the construct of academic
literacy as found in the preceding paragraph lies in its holistic reflection of reading
competence as a set of integrated abilities that require an understanding of the context in
which language operates; the different functional and cultural forms that language assumes;
the underlying syntactical and analytical base of a language; and the particularities of this
language expressed in academic forms and contexts.
A more detailed exposition of the development of the construct of academic literacy is not
attempted in this paper. Such an exposition is to be found in the work of Yeld (2001) and
Cliff and Yeld (2006). The development of a theoretically and conceptually rich construct is
presented there. What is worth emphasising, however, is that this construct has formed the
framework for an assessment of academic literacy in South African Higher Education for
more than 10 years now (see, for example, Cliff and Hanslo 2009; Cliff, Ramaboa and Pearce
2007; Petersen-Waughtal and Van Dyk 2011; Weideman 2009). Academics from across the
Higher Education sector have interacted critically with research that has been grounded in the
assessment of academic literacy and the consequences of this research for teaching and
learning and for student academic achievement.
From this previous research, it has emerged that:
- Generic assessment of academic literacy offers important complementary understandings of academic readiness alongside other forms of assessment, such as school-leaving examination results;
- Ability in generic academic literacy assessments appears associated with subsequent academic performance in a wide variety of contexts;
- The strength of this association depends upon the extent to which academic literacy as defined above is explicitly required in discipline-specific contexts;
- The teaching of courses explicitly designed to heighten students’ awareness of the requirements of academic literacy appears to have had some success in improving these students’ academic readiness.
To date, this research has not attempted a detailed analysis of the components of academic
literacy that entry-level students appear to cope with well and the components that they do
not appear to cope with. Research has also not yet assessed the extent to which there might be
differences amongst sub-groups of entry-level students in terms of what these groups do or do
not cope with in academic literacy. These are arguably important focuses for research, since
they may enable the Higher Education sector to develop prior knowledge of the academic
literacy strengths and weaknesses of entry-level students, and to design teaching and learning
interventions that enable these students to better cope with the disciplinary academic literacy
demands they will face in their studies.
The development of the National Benchmark Test in Academic Literacy (hereafter, the NBT
AL) in the South African Higher Education sector represents an attempt to (a) delineate a
theorised set of understandings about the meaning of academic literacy; (b) operationalise
these understandings in the form of a standardised assessment; and (c) provide information
about students’ academic literacy that will enable the development of teaching and learning
interventions aimed at the improvement or consolidation of this literacy in various
disciplinary contexts (Griesel 2006). It should be emphasised that the NBT AL is deliberately
a generic assessment of academic literacy, developed on the assumption that entry-level
students ought to possess at least a degree of academic reading competence if they are to
successfully negotiate the disciplinary contexts they embark on. In the NBT AL, it is
furthermore assumed that, if students do not provide evidence of this degree of competence,
they are likely to need some form of support/intervention in order to achieve such levels of
competence that will enable them to proceed with their disciplinary studies.
The goal of a standardised test of academic literacy is the development of an artefact that
attempts to make a judgment about the kinds of academic reading entry-level students are
likely to require in Higher Education; to design test items that attempt to assess these
academic reading levels; to determine conceptually and empirically the structure of such a
test; and, finally, to make a judgment about the inferences that can be drawn from test-taker
performance on the test. This information is arguably of importance for the development of
Higher Education curricula that will enable students to engage successfully with the content
and context of their learning, and will allow academics to offer directed, meaningful teaching.
Flowing from attempts to delineate notions of academic literacy and to render these notions
amenable to assessment, the NBT AL is based on the following ‘blueprint’ or set of
specifications. This blueprint has been presented and discussed in a number of research
studies in South African Higher Education in recent years (see, for example, Cliff and Hanslo
2009; Cliff and Yeld 2006; Cliff, Ramaboa and Pearce 2007; Scholtz 2012; Van Dyk and
Weideman 2004; Yeld 2001), but it is worth re-presenting in the current context since it
forms the basis for the subsequent discussion of entry-level students’ academic literacy
reading abilities.
Table 1: Academic literacy skills assessed in the NBT AL (adapted from Bachman and
Palmer 1996 and Yeld 2001)

Separating the essential from the non-essential: Readers’ capacities to ‘see’ main ideas and
supporting detail; statements and examples; facts and opinions; propositions and their
arguments; being able to classify, categorise and ‘label’.

Extrapolation, application and inferencing: Readers’ capacities to draw conclusions and
apply insights, either on the basis of what is stated in texts or of what is implied by these
texts.

Understanding discourse relations between parts of text: Readers’ capacities to ‘see’ the
structure and organisation of discourse and argument, by paying attention – within and
between paragraphs in text – to transitions in argument; superordinate and subordinate ideas;
introductions and conclusions; logical development.

Understanding vocabulary in context: Readers’ abilities to derive/work out word meanings
from their context.

Metaphorical expression: Readers’ abilities to understand and work with metaphor in
language. This includes their capacity to perceive language connotation, word play,
ambiguity, idiomatic expressions, and so on.

Perceiving and understanding cohesion in text: Readers’ abilities to ‘see’ anaphoric and
cataphoric links in text, as well as other mechanisms that connect parts of text to their
antecedents or to what follows.

Understanding the communicative function of sentences: Readers’ abilities to ‘see’ how
parts of sentences / discourse define other parts; or are examples of ideas; or are supports for
arguments; or attempts to persuade.

Understanding text genre: Readers’ abilities to perceive ‘audience’ in text and purpose in
writing, including an ability to understand text register (formality / informality) and tone
(didactic / informative / persuasive / etc.).

Understanding basic grammar and syntax: Readers’ understanding of how the syntactical,
lexical and punctuation features of basic language structures affect academic text meaning.
From Table 1, it can be seen that academic literacy is here conceptualised as comprising
textual and contextual meaning-making processes at word, sentence, paragraph and whole-
text levels. Additionally, meaning-making might be described as including the reader’s
ability to understand or ‘penetrate’ analogous or non-literal language (metaphor); perceive
implications within (extrapolation) and beyond (inferencing) text; understand underlying
purpose within and beyond text (communicative function); understand and interpret text-type
and form (genre); and perceive text structure and development (text relations and separating
essential from non-essential components of text). As has been argued elsewhere (cf.
references above), academic literacy is operationalised here as a holistic and integrated set of
reading abilities that relate to reading in and for Higher Education contexts. These abilities
are seen as somewhat different from (although also related to) ‘pure’ language proficiency. It
is important to note that the approach to and use of language referred to here is that which is
applied to academic reading in the Higher Education language of instruction and not to the
use of ‘everyday’ language. Whilst it is assumed that ‘everyday’ language proficiency will
likely impact on students’ ability to be academically literate, it is the ability to make
academic meaning that is of most importance in Higher Education contexts.
Entry-level students’ performance on an Academic Literacy test
What follows in this section of the paper are the presentation and analysis of data from an
empirical study of entry-level students’ performance on the NBT AL. These data present the
operationalisation of the construct of academic literacy discussed above and provide an
illumination of the argument about the academic readiness and reading levels of entry-level
students.
Study Sample
Data in Figure 1 below depict the performance of registered students (n = 1301) at one
Higher Education institution. These students took the NBT AL as applicants to Higher
Education in 2009 and were registered in the 2010 intake of students. Of the sample, n = 617
are female and n = 684 are male. The majority of students (n = 909) self-report English as
Home Language; the rest self-report a variety of Home Languages (Afrikaans; isiXhosa and
isiZulu predominantly). The mean age of the group is 18 years.
The NBT AL as test artefact
The NBT AL is a standardised test with typical alpha reliability coefficients of 0.90.
Factor analyses show the test construct to be coherent and to have uni-dimensionality, with
moderate correlations amongst sub-constructs or specifications (see Cliff, Ramaboa and
Pearce 2007).
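The reliability coefficient reported above can be illustrated with a minimal sketch. The data and function below are hypothetical – they are not NBT AL item data – but they show how an internal-consistency (Cronbach’s alpha) coefficient of this kind is computed from a matrix of item scores.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_test_takers, n_items) matrix of item scores."""
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative (hypothetical) dichotomous item scores for five test-takers
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
], dtype=float)
alpha = cronbach_alpha(scores)
```

A full-length standardised test with many items would typically be needed before an alpha of around 0.90, as reported for the NBT AL, could be expected.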
Test score classification procedure
Test-takers’ overall test performance on the NBT AL is classified according to four
benchmark performance levels: proficient; top intermediate; bottom intermediate; basic.
These benchmark levels were derived psychometrically from a standard-setting process
conducted in 2009 and involving Higher Education academics across a range of disciplines
and institutional contexts. The split between ‘bottom intermediate’ and ‘top intermediate’ was
derived arithmetically rather than from the standard-setting process and is presented here
because it enables the development of the educational argument about the readiness of entry-
level students. For more details of this process, see Pitoniak, Cliff and Yeld (2008).
According to these benchmarks, students whose performance is classified as ‘proficient’
ought to be able to cope with the typical entry-level academic literacy demands they will face
in conventional academic settings. Students whose performance is classified as ‘intermediate’
will experience some difficulties with the academic literacy demands they will face and ought
to be provided with forms of academic support additional to conventional curriculum
provision, for example, extra academic literacy tutorial support or placement in an extended
or foundation programme. Students whose performance is classified as ‘basic’ will
experience significant difficulties with the academic literacy demands they will face and will
require explicit and ongoing curriculum support and intervention – perhaps best provided by
bridging-type programmes – if they are to cope with these demands.
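The classification procedure described above can be sketched as a small function. The cut scores used here are illustrative placeholders, not the official values derived in the 2009 standard-setting process; only the arithmetic split of the intermediate band follows the procedure described in the text.

```python
def classify_nbt_al(score: float,
                    basic_cut: float = 38.0,
                    proficient_cut: float = 64.0) -> str:
    """Map an overall NBT AL percentage score to a benchmark band.

    The cut scores are hypothetical stand-ins for the psychometrically
    derived benchmarks; the split between 'bottom intermediate' and
    'top intermediate' is arithmetic (the midpoint of the band).
    """
    if score >= proficient_cut:
        return "proficient"
    if score < basic_cut:
        return "basic"
    midpoint = (basic_cut + proficient_cut) / 2.0
    return "top intermediate" if score >= midpoint else "bottom intermediate"
```

With these placeholder cuts, a score of 70 would be classed as proficient and a score of 45 as bottom intermediate; the real bands depend on the standard-setting outcomes reported in Pitoniak, Cliff and Yeld (2008).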
Data analysis
Figure 1 depicts the mean levels of performance of test-takers for each of the sub-constructs.
So, for example, test-takers whose overall test performance was classified as ‘proficient’
scored an average of approximately 70% on the ‘essential’ (see Table 1) test sub-construct.
Figure 1: Entry-level students’ performance on the NBT AL
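The kind of summary depicted in Figure 1 can be sketched as follows. The data frame below uses hypothetical scores for two sub-constructs only; it simply shows how mean sub-construct performance per overall benchmark band is derived from individual test-taker scores.

```python
import pandas as pd

# Hypothetical per-test-taker sub-construct percentages, each tagged with the
# test-taker's overall benchmark band; real NBT AL data are not reproduced here.
df = pd.DataFrame({
    "band":        ["proficient", "proficient", "basic", "basic"],
    "essential":   [72.0, 68.0, 31.0, 29.0],
    "inferencing": [70.0, 66.0, 35.0, 33.0],
})

# Mean sub-construct performance per overall band, as summarised in Figure 1
band_means = df.groupby("band").mean(numeric_only=True)
```

Each cell of `band_means` corresponds to one plotted point in Figure 1: the average score of a benchmark sub-group on one sub-construct.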
The data in Figure 1 appear to provide support for the uni-dimensionality of the construct of
the NBT AL: sub-groups of test-takers (i.e. proficient, intermediate, basic) who do well in
one sub-construct of the test (i.e. essential, inferencing, and so on) appear also to do well in
other sub-constructs. However, there is also evidence of variation by sub-groups in
performance across the sub-constructs and there is evidence that what is a relative strength
for one sub-group may not be so for other sub-groups.
As a sub-group, ‘proficient’ test-takers appear to be proficient in all sub-constructs of the
NBT AL. They perform particularly well on the ‘discourse’ and ‘communicative function’
sub-clusters of the test and perform weakest on the ‘metaphor’ and ‘cohesion’ sub-clusters –
although these areas of performance are still classified as proficient. Test scores for
‘proficient’ test-takers suggest that – as a group – they ought to be able to cope with the
conventional generic academic literacy demands they will face in Higher Education.
‘Top intermediate’ test-takers appear to be proficient on average on the following sub-
clusters: ‘discourse’, ‘vocabulary’, ‘cohesion’ and ‘communicative function’. On all other
sub-clusters, they appear on average to be in the ‘intermediate’ category of overall test
performance. For ‘top intermediate’ test-takers, ‘discourse’ and ‘communicative function’
sub-clusters are areas of strongest performance on average (as for ‘proficient’ test-takers) and
areas of weakest average performance are ‘essential’ and ‘genre’ sub-clusters. In some areas
of academic literacy, ‘top intermediate’ test-takers as a group ought to be able to cope with
the conventional generic academic literacy demands they will face; in others, they may need
additional forms of teaching support if they are to cope.
‘Bottom intermediate’ test-takers are not proficient on average on any of the sub-clusters of
the NBT AL. Their performance on average is classified as ‘intermediate’ except for the
‘essential’ sub-cluster where their performance is classified as ‘basic’. For ‘bottom
intermediate’ test-takers, ‘vocabulary’ and ‘communicative function’ sub-clusters are areas of
strongest average performance (although still ‘intermediate’) and ‘essential’ and ‘grammar’
sub-clusters are on average areas of weakest performance. For the most part, ‘bottom
intermediate’ test-takers are going to require additional forms of teaching support if they are
to cope with the academic literacy demands they will face; in one particular area, i.e.
‘essential’, they will require ongoing explicit support if they are to cope.
For test-takers whose overall test performance is classified as ‘basic’, average performance
on all sub-clusters of the test – with the possible exception of ‘cohesion’ and ‘communicative
function’ – is classified as ‘basic’. For ‘basic’ test-takers, areas of relative strongest
performance on average are ‘cohesion’ and ‘communicative function’ and areas of weakest
performance on average are ‘essential’ and ‘genre’. ‘Basic’ test-takers will require ongoing
and explicit forms of teaching support if they are to cope with the academic literacy demands
they will face in Higher Education.
Of further interest in the data from Figure 1 are the differences on average sub-cluster
performance between ‘proficient’ and ‘basic’ test-takers, as these differences might suggest
areas that most separate test-takers who ought to cope with the academic literacy demands
they will face from those who might not. Differences between average performance for
‘proficient’ and ‘basic’ test-takers are greatest for the following sub-clusters of test
performance: ‘discourse’, ‘metaphor’ and ‘communicative function’. It is worth noting that
each of these three test sub-clusters is associated in Higher Education with academic
argument, with analogous or non-literal reasoning and with academic communicative
purpose. Accordingly, it might reasonably be expected that test-takers whose overall
test performance is classified ‘proficient’ are much better prepared to cope with these kinds
of academic engagements than those whose test performance is classified ‘basic’. Having
said this, it must be noted that there are significant differences on average between
‘proficient’ and ‘basic’ test-takers across all sub-clusters of the construct, differences which
underline the differential levels of academic readiness of these sub-groups of test-takers.
Concluding discussion
This paper has attempted an analysis of the academic readiness of entry-level Higher
Education students in terms of their performance on a test explicitly designed to assess this
readiness. The paper has focused on differential levels of readiness across sub-groups of
students whose overall test performance was classified against pre-determined benchmarks.
The paper has not focused in a direct manner on the extent of academic readiness across the
sector, i.e. on the proportions of test-takers whose performance was classified as ‘proficient’,
‘intermediate’ or ‘basic’. The national report to Higher Education South Africa (HESA) on
NBT performance (Yeld, Prince, Cliff and Bohlmann 2012) provides these data, which show
that the test performance of approximately one-fourth of all applicants is classified
‘proficient’, about one-half is classified ‘intermediate’ and a further one-fourth is classified
‘basic’. In other words, only approximately one-fourth of applicants to Higher Education
might be said to be sufficiently academically ready to deal with the generic reading and
reasoning demands they will face in their studies. These data about the academic readiness of
Higher Education applicants become particularly important when it is noted that many of
these applicants produce school-leaving examination results in the language of teaching and
learning that render the applicants eligible to enter Higher Education. In other words, while
applicants appear to be ready to cope with the language of teaching and learning – on the
basis of their cognate school-leaving language examination results – many of them do not
appear to be so in terms of their NBT AL results. At least part of the resolution of this
apparent contradiction lies in the NBT AL’s focus on academic readiness in the specific
context of Higher Education. Against this focus, the cognate school-leaving examination
results do not appear to be directly equatable with the NBT AL results; hence, the usefulness
of such a test as a targeted, complementary assessment.
This paper has attempted an analysis of how academic literacy readiness – or lack thereof –
might be described; what (if any) differences there might be between different groups of test-
takers; and what the general implications might be for teaching and learning in Higher
Education when applicants become registered students. Clearly, there are many dimensions
of the construct of academic literacy on which test-takers are under-prepared to a lesser or
greater extent. The general view of academics across the sector (cf. the standard setting
process referred to earlier) is that this under-preparedness means that these test-takers will
fail to cope with the academic literacy demands they will face in Higher Education.
The operationalisation of the construct of academic literacy in terms of a set of specifications
on a test means that the sector has an opportunity to illuminate the components of academic
readiness which test-takers do not cope with well. Perhaps more importantly, data of the
kinds presented in this paper suggest specific and tangible lines of teaching and learning
intervention that can be developed, as well as the extent to which this intervention is
necessary. For example, if it is known that entry-level students whose test performance is
classified as ‘intermediate’ have particular weaknesses in ‘essential’ and ‘genre’ areas of
academic literacy, particular kinds of interventions can be designed that address these areas.
And these interventions might be more intense or explicit depending upon the extent of the
vulnerability identified.
Students whose test performance is classified as ‘basic’ or ‘bottom intermediate’ are likely to
be unable to cope with fundamental academic literacy reading demands, such as separating
core textual points from their supporting detail – or, perhaps more disturbingly,
misinterpreting supporting detail as the core point. They may also be unable to extrapolate textual meaning
beyond the immediate context; struggle to distinguish amongst key discourse features and
signals in academic argument; misinterpret analogous and non-literal language and its
connotations and socially-situated nuances; and be unable to discern the cohesive features of
text and argument. Arguably, these are fundamental components of academic contexts,
which will need to be addressed directly in teaching and learning if such students are to
negotiate meaning and be successful in their studies.
The data presented in this paper also suggest lines for continuing exploration and research.
For example, average performance on a cluster clearly does not capture the distribution of
test-takers’ scores within the cluster, and this kind of distribution could provide a ‘snapshot’
of the diversity of individual test-taker performance on the cluster. What the average
performance of sub-groups within a cluster does suggest, however, are possibilities for
further research into the different kinds of teaching intervention that might be necessary for
each sub-group – both at a generic and at a discipline-specific level. The generic assessment
of academic literacy holds relevance for the discipline-specific ways in which academic
literacy plays itself out: the construct of academic literacy here discussed assesses fundamental
reading abilities at a process level as they are manifest across a range of disciplinary
discourses, contexts and text forms. In addition, this generic assessment points to the need to
grapple with the nuances of discourse, context and form such that teaching interventions can
be designed in ways that apply and adapt the generic to the particular.
The author acknowledges the assistance of Andrew Deacon (CILT) in the production of the
statistical data.
Bachman, L.F. 1990. Fundamental considerations in language testing. Oxford: Oxford
University Press.
Bachman, L.F. and A.S. Palmer. 1996. Language testing in practice. Hong Kong: Oxford
University Press.
Cliff, A. and M. Hanslo. 2009. The design and use of ‘alternate’ assessments of academic
literacy as selection mechanisms in higher education. Southern African Linguistics and
Applied Language Studies 27(3): 265-276.
Cliff, A. and N. Yeld. 2006. Domain 1 – Academic Literacy. In Access and entry-level
benchmarks: The National Benchmark Tests Project, ed. H. Griesel, 19-27. Pretoria: Higher
Education South Africa.
Cliff, A., K. Ramaboa and C. Pearce. 2007. The assessment of entry-level students’ academic
literacy: does it matter? Ensovoort 11(2): 33-48.
Griesel, H. 2006. Access and entry-level benchmarks: The National Benchmark Tests Project.
Pretoria: Higher Education South Africa.
Lea, M. 2004. Academic literacies: A pedagogy for course design. Studies in Higher
Education 29(6): 739–756.
Lea, M. and B. Street. 1998. Student writing in higher education: An academic literacies
approach. Studies in Higher Education 23(2): 157-172.
Lea, M. and B. Street. 2000. Student writing and staff feedback in higher education: An
academic literacies approach. In Student writing in higher education: New contexts, ed. M.
Lea and B. Stierer, 32–46. Buckingham: Open University Press.
Lillis, T. and M. Scott. 2007. Defining academic literacies research: Issues of epistemology,
ideology and strategy. Journal of Applied Linguistics 4(1): 5–32.
Paxton, M. and V. Frith. 2014. Implications of academic literacies research for knowledge
making and curriculum design. Higher Education 67(2): 171-182.
Petersen-Waughtal, M. and T. Van Dyk. 2011. Towards informed decision making: the
importance of baseline academic literacy assessment in promoting responsible university
access and support. Journal for Language Teaching 45(1): 99-114.
Pitoniak, M., A. Cliff and N. Yeld. 2008. Technical report on the standard setting process for
Higher Education South Africa’s National Benchmark Tests – Academic Literacy Test.
Centre for Higher Education, University of Cape Town.
Pretorius, E.J. 2002. Reading ability and academic performance in South Africa: are we
fiddling while Rome is burning? Language Matters 33(1): 169-196.
Scott, I., N. Yeld and J. Hendry. 2007. Higher Education Monitor 6: A case for improving
teaching and learning in South African Higher Education. Pretoria: Council on Higher
Education.
Scholtz, D. 2012. Using the National Benchmark Tests in Engineering diplomas: revisiting
generic academic literacy. Journal for Language Teaching 46(1): 46-58.
Van Dyk, T. and A. Weideman. 2004. Switching constructs: on the selection of an
appropriate blueprint for academic literacy assessment. Journal for Language Teaching
38(1): 1-13.
Weideman, A. 2009. Constitutive and regulative conditions for the assessment of academic
literacy. Southern African Linguistics and Applied Language Studies 27(3): 235-251.
Yeld, N. 2001. Equity, assessment and language of learning: key issues for higher education
selection and access in South Africa. Unpublished PhD thesis, University of Cape Town.
Yeld, N., R. Prince, A. Cliff and C. Bohlmann. 2012. The National Benchmark Tests Project
report. Centre for Higher Education, University of Cape Town.