Entry-level students’ reading abilities and what these abilities might mean for academic
readiness
Alan Cliff
Centre for Innovation in Learning and Teaching (CILT), University of Cape Town
E-mail: alan.cliff@uct.ac.za
Abstract
The National Benchmark Tests Project (NBTP) was commissioned by Higher Education
South Africa and became operational in 2009. One of the main aims of the NBTP is to assess
the extent to which entry-level students might be said to be ready to cope with the
conventional demands of academic study in three key areas: academic literacy; quantitative
literacy; and mathematics. This paper presents an analysis of the academic literacy readiness
of a sample of registered students as reflected in their performance on the NBT in Academic
Literacy, a standardised assessment developed in the context of the wider project. The paper
presents a theoretical analysis of the construct of academic literacy as operationalised in the
test. This is followed by a categorised empirical analysis of test-takers’ performance on the
test, in which the levels of academic readiness of these test-takers are presented and
discussed. The argument presented highlights the diverse range of academic literacy levels of
entry-level students, as well as implying the teaching and learning interventions that might be
necessary to improve readiness. Concluding comments argue that some groups of students
may be unable to cope with conventional academic literacy demands in the absence of
explicit intervention.
Keywords: academic literacy, academic readiness, higher education students, language
testing, reading proficiency, standardised assessment
Introduction
In South Africa, there is increasing concern in the Higher Education sector about the
academic readiness of entry-level students (Scott, Yeld and Hendry 2007). A central
component of this concern relates to the extent to which these entry-level students are able to
cope with the typical academic reading demands that will be placed on them in the context of
their studies. There has been much debate in the sector relating to the extent to which
secondary schooling adequately prepares students to read in an academic manner (see,
for example, Pretorius 2002). This debate has also been extended to include considerations of
the extent to which Higher Education curricula might be said to be responsive to the reading
needs of incoming students and the extent to which these curricula enable students to develop
– sometimes both acquire and develop – the necessary reading abilities that will enable them
to successfully negotiate their studies (Griesel 2006). In part, the debate centres around
questions of whose responsibility it is to ensure that students are academically ‘literate’, i.e.
are able to engage meaningfully with a range of academic texts they will face in the context
of the disciplines they study and be in a position to make meaning of these texts. A fuller
exposition of the meaning of the concept of academic literacy will follow in the next section
of this paper.
There is currently a sense that responsibility for enabling students to be academically literate
lies both within the secondary and the Higher Education sectors (and with the student
him/herself), but the sense of who bears primary responsibility is certainly not uncontested.
Whatever the nature of this contestation – whether students are academically literate on entry
to Higher Education and whose responsibility it is to ensure this academic literacy – there
seems to be an emerging consensus that Higher Education needs, at minimum, to address the
development (or further development) of academic literacy in disciplinary contexts if the
sector is to (a) improve student graduation rates; (b) provide curricula that ensure holistic
development of students; and (c) deal meaningfully with the academic transitions between
secondary and Higher Education.
A key challenge in developing students’ academic literacy is arguably to understand what it is
that students already know and can do with regard to academic reading on entry to Higher
Education. It should follow that an understanding of what students already know and can do
provides a platform for the development of teaching and learning interventions and support
and the ability to address explicitly the disciplinary reading abilities required if students are to
become academically ‘literate’. Clearly, this is no simple task, but having some
understanding of students’ academic literacy (or literacies) represents a starting point for the
development of teaching and learning support.
The intention of the present paper is to present and analyse notions of academic literacy – or
literacies – and to reflect on and analyse the achievement of school-leavers in terms of these
academic literacies. The paper will argue that understanding what it is that school-leavers can
and cannot do in an academic literacy sense provides an important starting-point for the
development of students’ academic literacy in Higher Education. The next section of this
paper outlines the conception and construct of academic literacy.
Academic literacy as a construct
Theoretically, delineation of the construct of academic literacy that is the central
consideration in this paper can be found in the work of Yeld (2001), work which, in turn, has
its theoretical antecedents. In Yeld’s study, the construct of academic literacy is based on a
complex intersection of functional, sociolinguistic, grammatical and textual aspects of
language knowledge – derived from the work of Bachman (1990) and Bachman and Palmer
(1996). The functional aspects of language knowledge relate to a reader’s ability to make
essential meaning from text; understand and interpret communicative purpose; and make
meaning and develop an interpretation of one’s own. The sociolinguistic aspects of language
refer to the ability to penetrate and understand cultural distinctions, non-literal forms or
analogous nuance in language as these find expression in different contexts and linguistic
forms – such as word-based, image or icon-based and diagrammatic representation-based
language. Grammatical language knowledge refers to a reader’s ability to understand the
semantic (word-meanings) and syntactical (structural) basis of words and sentences. Finally,
textual language knowledge relates to the ability to understand and interpret textual cohesion
and organisation, and to be able to ‘see’ beyond immediate text, through making
extrapolations and inferences.
Clearly implied in the discussion in the previous paragraph are wider notions of academic
literacy that relate to the extent to which literacy (and academic literacy) goes beyond reading
and writing to include, for example, epistemological, digital, technological and multi-modal
literacies. Indeed, it has become important in Higher Education to consider and engage with
literacies rather than literacy – the latter conveys impressions of uni-dimensionality or
reductionism in understanding and reflection. As Paxton and Frith (2014) point out,
conceptions of academic literacies have evolved into descriptions of socially situated cultural
practice – thanks substantively to the contributions from the work of Lea and Street (1998;
2000), Lea (2004) and Lillis and Scott (2007), amongst others.
For the purposes of the present paper, however, this wider conception of literacy is applied to
the literacy practice of reading in particular. In essence, then, the construct of academic
literacy which is based on the foregoing describes a reader’s ability to understand and
integrate four dimensions of language knowledge as these apply to academic – specifically
Higher Education – settings. If readers are unable or less able to engage with an academic
context in ways which indicate that they understand and can manipulate these forms of
language knowledge (without necessarily being able to ‘label’ them), their academic literacy
is compromised to a lesser or greater extent. The elegance of the construct of academic
literacy as found in the preceding paragraph lies in its holistic reflection of reading
competence as a set of integrated abilities that require an understanding of the context in
which language operates; the different functional and cultural forms that language assumes;
the underlying syntactical and analytical base of a language; and the particularities of this
language expressed in academic forms and contexts.
A more detailed exposition of the development of the construct of academic literacy is not
attempted in this paper. Such an exposition is to be found in the work of Yeld (2001) and
Cliff and Yeld (2006). The development of a theoretically and conceptually rich construct is
presented there. What is worth emphasising, however, is that this construct has formed the
framework for an assessment of academic literacy in South African Higher Education for
more than 10 years now (see, for example, Cliff and Hanslo 2009; Cliff, Ramaboa and Pearce
2007; Petersen-Waughtal and Van Dyk 2011; Weideman 2009). Academics from across the
Higher Education sector have interacted critically with research that has been grounded in the
assessment of academic literacy and the consequences of this research for teaching and
learning and for student academic achievement.
From this previous research, it has emerged that:
• Generic assessment of academic literacy offers important complementary
understandings of academic readiness alongside other forms of assessment, such as
school-leaving examination results;
• Ability in generic academic literacy assessments appears associated with subsequent
academic performance in a wide variety of contexts;
• The strength of this association depends upon the extent to which academic literacy as
defined above is explicitly required in discipline-specific contexts;
• The teaching of courses explicitly designed to heighten students’ awareness of the
requirements of academic literacy appears to have had some success in improving
these students’ academic readiness.
To date, this research has not attempted a detailed analysis of the components of academic
literacy that entry-level students appear to cope with well and the components that they do
not appear to cope with. Research has also not yet assessed the extent to which there might be
differences amongst sub-groups of entry-level students in terms of what these groups do or do
not cope with in academic literacy. These are arguably important focuses for research, since
they may enable the Higher Education sector to develop prior knowledge of the academic
literacy strengths and weaknesses of entry-level students, and to design teaching and learning
interventions that enable these students to better cope with the disciplinary academic literacy
demands they will face in their studies.
The development of the National Benchmark Test in Academic Literacy (hereafter, the NBT
AL) in the South African Higher Education sector represents an attempt to (a) delineate a
theorised set of understandings about the meaning of academic literacy; (b) operationalise
these understandings in the form of a standardised assessment; and (c) provide information
about students’ academic literacy that will enable the development of teaching and learning
interventions aimed at the improvement or consolidation of this literacy in various
disciplinary contexts (Griesel 2006). It should be emphasised that the NBT AL is deliberately
a generic assessment of academic literacy, developed on the assumption that entry-level
students ought to possess at least a degree of academic reading competence if they are to
successfully negotiate the disciplinary contexts they embark on. In the NBT AL, it is
furthermore assumed that, if students do not provide evidence of this degree of competence,
they are likely to need some form of support/intervention in order to achieve such levels of
competence that will enable them to proceed with their disciplinary studies.
The goal of a standardised test of academic literacy is the development of an artefact that
attempts to make a judgment about the kinds of academic reading entry-level students are
likely to require in Higher Education; to design test items that attempt to assess these
academic reading levels; to determine conceptually and empirically the structure of such a
test; and, finally, to make a judgment about the inferences that can be drawn from test-taker
performance on the test. This information is arguably of importance for the development of
Higher Education curricula that will enable students to successfully engage the content and
context of their learning and will allow academics to offer directed, meaningful teaching
support.
Flowing from attempts to delineate notions of academic literacy and to render these notions
amenable to assessment, the NBT AL is based on the following ‘blueprint’ or set of
specifications. This blueprint has been presented and discussed in a number of research
studies in South African Higher Education in recent years (see, for example, Cliff and Hanslo
2009; Cliff and Yeld 2006; Cliff, Ramaboa and Pearce 2007; Scholtz 2012; Van Dyk and
Weideman 2004; Yeld 2001), but it is worth re-presenting in the current context since it
forms the basis for the subsequent discussion of entry-level students’ academic literacy
reading abilities.
Table 1: Academic literacy skills assessed in the NBT AL, with an explanation of each skill area (Adapted from Bachman and Palmer 1996 and Yeld 2001)

Separating the essential from the non-essential: Readers' capacities to 'see' main ideas and supporting detail; statements and examples; facts and opinions; propositions and their arguments; being able to classify, categorise and 'label'.

Extrapolation, application and inferencing: Readers' capacities to draw conclusions and apply insights, either on the basis of what is stated in texts or is implied by these texts.

Understanding discourse relations between parts of text: Readers' capacities to 'see' the structure and organisation of discourse and argument, by paying attention – within and between paragraphs in text – to transitions in argument; superordinate and subordinate ideas; introductions and conclusions; logical development.

Vocabulary: Readers' abilities to derive/work out word meanings from their context.

Metaphorical expression: Readers' abilities to understand and work with metaphor in language. This includes their capacity to perceive language connotation, word play, ambiguity, idiomatic expressions, and so on.

Perceiving and understanding cohesion in text: Readers' abilities to 'see' anaphoric and cataphoric links in text, as well as other mechanisms that connect parts of text to their antecedents or to what follows.

Understanding the communicative function of sentences: Readers' abilities to 'see' how parts of sentences/discourse define other parts; or are examples of ideas; or are supports for arguments; or attempts to persuade.

Understanding text genre: Readers' abilities to perceive 'audience' in text and purpose in writing, including an ability to understand text register (formality/informality) and tone (didactic/informative/persuasive/etc.).

Grammar: Readers' understanding of how the syntactical, lexical and punctuation features of basic language structures affect academic text meaning.
From Table 1, it can be seen that academic literacy is here conceptualised as comprising
textual and contextual meaning-making processes at word, sentence, paragraph and whole-
text levels. Additionally, meaning-making might be described as including the reader’s
ability to understand or ‘penetrate’ analogous or non-literal language (metaphor); perceive
implications within (extrapolation) and beyond (inferencing) text; understand underlying
purpose within and beyond text (communicative function); understand and interpret text-type
and form (genre); and perceive text structure and development (text relations and separating
essential from non-essential components of text). As has been argued elsewhere (cf.
references above), academic literacy is operationalised here as a holistic and integrated set of
reading abilities that relate to reading in and for Higher Education contexts. These abilities
are seen as somewhat different from (although also related to) ‘pure’ language proficiency. It
is important to note that the approach to and use of language referred to here is that which is
applied to academic reading in the Higher Education language of instruction and not to the
use of ‘everyday’ language. Whilst it is assumed that ‘everyday’ language proficiency will
likely impact on students’ ability to be academically literate, it is the ability to make
academic meaning that is of most importance in Higher Education contexts.
Entry-level students’ performance on an Academic Literacy test
What follows in this section of the paper are the presentation and analysis of data from an
empirical study of entry-level students’ performance on the NBT AL. These data present the
operationalisation of the construct of academic literacy discussed above and provide an
illumination of the argument about the academic readiness and reading levels of entry-level
students.
Study Sample
Data in Figure 1 below depict the performance of registered students (n = 1301) at one
Higher Education institution. These students took the NBT AL as applicants to Higher
Education in 2009 and were registered in the 2010 intake of students. Of the sample, n = 617
are female and n = 684 are male. The majority of students (n = 909) self-report English as
Home Language; the rest self-report a variety of Home Languages (Afrikaans; isiXhosa and
isiZulu predominantly). The mean age of the group is 18 years.
The NBT AL as test artefact
The NBT AL is a standardised test with typical alpha reliability coefficients of 0.90.
Factor analyses show the test construct to be coherent and to have uni-dimensionality, with
moderate correlations amongst sub-constructs or specifications (see Cliff, Ramaboa and
Pearce 2007).
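For readers interested in how a reliability figure of this kind is obtained, Cronbach's alpha can be computed directly from a matrix of item scores. The sketch below is illustrative only: the simulated responses, the number of items (60) and the shared-ability data-generating assumption are hypothetical, not NBTP data or analysis code.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (test-takers x items) matrix of item scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # per-item variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical dichotomous (right/wrong) responses: 1301 test-takers, 60 items,
# driven by a shared 'ability' factor so that items correlate with one another.
rng = np.random.default_rng(0)
ability = rng.normal(size=(1301, 1))
responses = (ability + rng.normal(scale=0.8, size=(1301, 60)) > 0).astype(int)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```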
Test score classification procedure
Test-takers’ overall test performance on the NBT AL is classified according to four
benchmark performance levels: proficient; top intermediate; bottom intermediate; basic.
These benchmark levels were derived psychometrically from a standard-setting process
conducted in 2009 and involving Higher Education academics across a range of disciplines
and institutional contexts. The split between ‘bottom intermediate’ and ‘top intermediate’ was
derived arithmetically rather than from the standard-setting process and is presented here
because it enables the development of the educational argument about the readiness of entry-
level students. For more details of this process, see Pitoniak, Cliff and Yeld (2008).
According to these benchmarks, students whose performance is classified as ‘proficient’
ought to be able to cope with the typical entry-level academic literacy demands they will face
in conventional academic settings. Students whose performance is classified as ‘intermediate’
will experience some difficulties with the academic literacy demands they will face and ought
to be provided with forms of academic support additional to conventional curriculum
provision, for example, extra academic literacy tutorial support or placement in an extended
or foundation programme. Students whose performance is classified as ‘basic’ will
experience significant difficulties with the academic literacy demands they will face and will
require explicit and ongoing curriculum support and intervention – perhaps best provided by
bridging-type programmes – if they are to cope with these demands.
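As an illustration of the classification logic described above, the sketch below maps an overall percentage score onto the four benchmark levels. The cut scores are hypothetical placeholders (the NBTP's actual cuts were derived from the 2009 standard-setting process), and the use of the band midpoint for the bottom/top intermediate split is an assumption consistent with the split being "derived arithmetically".

```python
def classify(score: float) -> str:
    """Map an overall NBT AL percentage score onto a benchmark level.

    The cut scores below are hypothetical placeholders: in the NBTP the
    basic/intermediate and intermediate/proficient cuts come from the 2009
    standard-setting process, and the bottom/top intermediate split is
    derived arithmetically (assumed here to be the midpoint of the band).
    """
    basic_cut, proficient_cut = 38.0, 64.0                 # placeholder cuts
    intermediate_split = (basic_cut + proficient_cut) / 2  # arithmetic midpoint
    if score < basic_cut:
        return "basic"
    if score < intermediate_split:
        return "bottom intermediate"
    if score < proficient_cut:
        return "top intermediate"
    return "proficient"

print(classify(70.0))  # -> proficient
print(classify(45.0))  # -> bottom intermediate
```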
Data analysis
Figure 1 depicts the mean levels of performance of test-takers for each of the sub-constructs.
So, for example, test-takers whose overall test performance was classified as ‘proficient’
scored an average of approximately 70% on the ‘essential’ (see Table 1) test sub-construct.
Figure 1: Entry-level students’ performance on the NBT AL
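The kind of summary depicted in Figure 1 could be computed from per-test-taker sub-construct scores with a simple group-by. The sketch below uses hypothetical column names and toy values (not NBTP data); it also computes the 'proficient' minus 'basic' gap per sub-construct, the comparison drawn later in this section.

```python
import pandas as pd

# Hypothetical per-test-taker records: overall benchmark level plus percentage
# scores on each sub-construct (column names and values are illustrative).
df = pd.DataFrame({
    "level":     ["proficient", "proficient", "basic", "basic"],
    "essential": [72, 68, 31, 35],
    "discourse": [78, 74, 40, 38],
    "metaphor":  [66, 62, 33, 31],
})

# Mean sub-construct score per benchmark level: the quantities shown in Figure 1.
means = df.groupby("level").mean()
print(means)

# Gap between 'proficient' and 'basic' averages on each sub-construct.
print(means.loc["proficient"] - means.loc["basic"])
```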
The data in Figure 1 appear to provide support for the uni-dimensionality of the construct of
the NBT AL: sub-groups of test-takers (i.e. proficient, intermediate, basic) who do well in
one sub-construct of the test (i.e. essential, inferencing, and so on) appear also to do well in
other sub-constructs. However, there is also evidence of variation by sub-groups in
performance across the sub-constructs and there is evidence that what is a relative strength
for one sub-group may not be so for other sub-groups.
As a sub-group, ‘proficient’ test-takers appear to be proficient in all sub-constructs of the
NBT AL. They perform particularly well on the ‘discourse’ and ‘communicative function’
sub-clusters of the test and perform weakest on the ‘metaphor’ and ‘cohesion’ sub-clusters –
although these areas of performance are still classified as proficient. Test scores for
‘proficient’ test-takers suggest that – as a group – they ought to be able to cope with the
conventional generic academic literacy demands they will face in Higher Education.
‘Top intermediate’ test-takers appear to be proficient on average on the following sub-
clusters: ‘discourse’, ‘vocabulary’, ‘cohesion’ and ‘communicative function’. On all other
sub-clusters, they appear on average to be in the ‘intermediate’ category of overall test
performance. For ‘top intermediate’ test-takers, ‘discourse’ and ‘communicative function’
sub-clusters are areas of strongest performance on average (as for ‘proficient’ test-takers) and
areas of weakest average performance are ‘essential’ and ‘genre’ sub-clusters. In some areas
of academic literacy, ‘top intermediate’ test-takers as a group ought to be able to cope with
the conventional generic academic literacy demands they will face; in others, they may need
additional forms of teaching support if they are to cope.
‘Bottom intermediate’ test-takers are not proficient on average on any of the sub-clusters of
the NBT AL. Their performance on average is classified as ‘intermediate’ except for the
‘essential’ sub-cluster where their performance is classified as ‘basic’. For ‘bottom
intermediate’ test-takers, ‘vocabulary’ and ‘communicative function’ sub-clusters are areas of
strongest average performance (although still ‘intermediate’) and ‘essential’ and ‘grammar’
sub-clusters are on average areas of weakest performance. For the most part, ‘bottom
intermediate’ test-takers are going to require additional forms of teaching support if they are
to cope with the academic literacy demands they will face; in one particular area, i.e.
‘essential’, they will require ongoing explicit support if they are to cope.
For test-takers whose overall test performance is classified as ‘basic’, average performance
on all sub-clusters of the test – with the possible exception of ‘cohesion’ and ‘communicative
function’ – is classified as ‘basic’. For ‘basic’ test-takers, areas of relative strongest
performance on average are ‘cohesion’ and ‘communicative function’ and areas of weakest
performance on average are ‘essential’ and ‘genre’. ‘Basic’ test-takers will require ongoing
and explicit forms of teaching support if they are to cope with the academic literacy demands
they will face in Higher Education.
Of further interest in the data from Figure 1 are the differences on average sub-cluster
performance between ‘proficient’ and ‘basic’ test-takers, as these differences might suggest
areas that most separate test-takers who ought to cope with the academic literacy demands
they will face from those who might not. Differences between average performance for
‘proficient’ and ‘basic’ test-takers are greatest for the following sub-clusters of test
performance: ‘discourse’, ‘metaphor’ and ‘communicative function’. It is worth noting that
each of these three test sub-clusters is characterised by particularities in Higher Education
that associate with academic argument, analogous or non-literal reasoning and academic
communicative purpose. Accordingly, it is reasonable to expect that test-takers whose overall
test performance is classified ‘proficient’ are much better prepared to cope with these kinds
of academic engagements than those whose test performance is classified ‘basic’. Having
said this, it must be noted that there are significant differences on average between
‘proficient’ and ‘basic’ test-takers across all sub-clusters of the construct, differences which
underline the differential levels of academic readiness of these sub-groups of test-takers.
Concluding discussion
This paper has attempted an analysis of the academic readiness of entry-level Higher
Education students in terms of their performance on a test explicitly designed to assess this
readiness. The paper has focused on differential levels of readiness across sub-groups of
students whose overall test performance was classified against pre-determined benchmarks.
The paper has not focused in a direct manner on the extent of academic readiness across the
sector, i.e. on the proportions of test-takers whose performance was classified as ‘proficient’,
‘intermediate’ or ‘basic’. The national report to Higher Education South Africa (HESA) on
NBT performance (Yeld, Prince, Cliff and Bohlmann 2012) provides these data, which show
that the test performance of approximately one-fourth of all applicants is classified
'proficient', one-half is classified 'intermediate' and a further one-fourth is classified
'basic'. In other words, only approximately one-fourth of applicants to Higher Education
might be said to be sufficiently academically ready to deal with the generic reading and
reasoning demands they will face in their studies. These data about the academic readiness of
Higher Education applicants become particularly important when it is noted that many of
these applicants produce school-leaving examination results in the language of teaching and
learning that render the applicants eligible to enter Higher Education. In other words, while
applicants appear to be ready to cope with the language of teaching and learning – on the
basis of their cognate school-leaving language examination results – many of them do not
appear to be so in terms of their NBT AL results. At least part of the resolution of this
apparent contradiction lies in the NBT AL's focus on academic readiness in a Higher
Education-specific context. Against this focus, the cognate school-leaving examination
results do not appear to be directly equatable with the NBT AL results; hence, the usefulness
of such a test as a targeted, complementary assessment.
This paper has attempted an analysis of how academic literacy readiness – or lack thereof –
might be described; what (if any) differences there might be between different groups of test-
takers; and what the general implications might be for teaching and learning in Higher
Education when applicants become registered students. Clearly, there are many dimensions
of the construct of academic literacy on which test-takers are under-prepared to a lesser or
greater extent. The general view of academics across the sector (cf. the standard setting
process referred to earlier) is that this under-preparedness means that these test-takers will
fail to cope with the academic literacy demands they will face in Higher Education.
The operationalisation of the construct of academic literacy in terms of a set of specifications
on a test means that the sector has an opportunity to illuminate the components of academic
readiness which test-takers do not cope with well. Perhaps more importantly, data of the
kinds presented in this paper suggest specific and tangible lines of teaching and learning
intervention that can be developed, as well as the extent to which this intervention is
necessary. For example, if it is known that entry-level students whose test performance is
classified as ‘intermediate’ have particular weaknesses in ‘essential’ and ‘genre’ areas of
academic literacy, particular kinds of interventions can be designed that address these areas.
And these interventions might be more intense or explicit depending upon the extent of the
vulnerability identified.
Students whose test performance is classified as ‘basic’ or ‘lower intermediate’ are likely to
be unable to cope with fundamental academic literacy reading demands, such as separating
core textual points from their supporting detail – or, perhaps more disturbingly, misinterpreting
supporting detail to be the core point. They may also be unable to extrapolate textual meaning
beyond the immediate context; struggle to distinguish amongst key discourse features and
signals in academic argument; misinterpret analogous and non-literal language and its
connotations and socially-situated nuances; and be unable to discern the cohesive features of
text and argument. Arguably, these are fundamental components of academic contexts,
which will need to be addressed directly in teaching and learning if such students are to
negotiate meaning and be successful in their studies.
The data presented in this paper also suggest lines for continuing exploration and research.
For example, average performance on a cluster clearly does not capture the distribution of
test-takers’ scores within the cluster, and this kind of distribution could provide a ‘snapshot’
of the diversity of individual test-taker performance on the cluster. What the average
performance of sub-groups within a cluster does suggest, however, are possibilities for
further research into the different kinds of teaching intervention that might be necessary for
each sub-group – both at a generic and at a discipline-specific level. The generic assessment
of academic literacy holds relevance for the discipline-specific ways in which academic
literacy plays itself out: the construct of academic literacy discussed here assesses fundamental
reading abilities at a process level as they are manifest across a range of disciplinary
discourses, contexts and text forms. In addition, this generic assessment points to the need to
grapple with the nuances of discourse, context and form such that teaching interventions can
be designed in ways that apply and adapt the generic to the particular.
Acknowledgement
The author acknowledges the assistance of Andrew Deacon (CILT) in the production of the
statistical data.
References
Bachman, L.F. 1990. Fundamental considerations in language testing. Oxford: Oxford
University Press.
Bachman, L.F. and A.S. Palmer. 1996. Language testing in practice. Hong Kong: Oxford
University Press.
Cliff, A. and M. Hanslo. 2009. The design and use of ‘alternate’ assessments of academic
literacy as selection mechanisms in higher education. Southern African Linguistics and
Applied Language Studies 27(3): 265-276.
Cliff, A. and N. Yeld. 2006. Domain 1 – Academic Literacy. In Access and entry-level
benchmarks: The National Benchmark Tests Project, ed. H. Griesel, 19-27. Pretoria: Higher
Education South Africa.
Cliff, A., K. Ramaboa and C. Pearce. 2007. The assessment of entry-level students’ academic
literacy: does it matter? Ensovoort 11(2): 33-48.
Griesel, H. 2006. Access and entry-level benchmarks: The National Benchmark Tests Project.
Pretoria: Higher Education South Africa.
Lea, M. 2004. Academic literacies: A pedagogy for course design. Studies in Higher
Education 29(6): 739–756.
Lea, M. and B. Street. 1998. Student writing in higher education: An academic literacies
approach. Studies in Higher Education 23(2): 157–172.
Lea, M. and B. Street. 2000. Student writing and staff feedback in higher education: An
academic literacies approach. In Student writing in higher education: New contexts, ed. M.
Lea and B. Stierer, 32–46. Buckingham: Open University Press.
Lillis, T. and M. Scott. 2007. Defining academic literacies research: Issues of epistemology,
ideology and strategy. Journal of Applied Linguistics 4(1): 5–32.
Paxton, M. and V. Frith. 2014. Implications of academic literacies research for knowledge
making and curriculum design. Higher Education 67(2): 171-182.
Petersen-Waughtal, M. and T. Van Dyk. 2011. Towards informed decision making: the
importance of baseline academic literacy assessment in promoting responsible university
access and support. Journal for Language Teaching 45(1): 99-114.
Pitoniak, M., A. Cliff and N. Yeld. 2008. Technical report on the standard setting process for
Higher Education South Africa’s National Benchmark Tests – Academic Literacy Test.
Centre for Higher Education, University of Cape Town.
Pretorius, E.J. 2002. Reading ability and academic performance in South Africa: are we
fiddling while Rome is burning? Language Matters 33(1): 169-196.
Scott, I., N. Yeld and J. Hendry. 2007. Higher Education Monitor 6: A case for improving
teaching and learning in South African Higher Education. Pretoria: Council on Higher
Education.
Scholtz, D. 2012. Using the National Benchmark Tests in Engineering diplomas: revisiting
generic academic literacy. Journal for Language Teaching 46(1): 46-58.
Van Dyk, T. and A. Weideman. 2004. Switching constructs: on the selection of an
appropriate blueprint for academic literacy assessment. Journal for Language Teaching
38(1): 1-13.
Weideman, A. 2009. Constitutive and regulative conditions for the assessment of academic
literacy. Southern African Linguistics and Applied Language Studies 27(3): 235-251.
Yeld, N. 2001. Equity, assessment and language of learning: key issues for higher education
selection and access in South Africa. Unpublished PhD thesis, University of Cape Town.
Yeld, N., R. Prince, A. Cliff and C. Bohlmann. 2012. The National Benchmark Tests Project
report. Centre for Higher Education, University of Cape Town.