Access and Entry Level Benchmarks
The National Benchmark Tests Project

edited by Hanlie Griesel
Higher Education South Africa – HESA
May 2006

Higher Education South Africa – HESA
PO Box 27392, Sunnyside 0132, Pretoria, SOUTH AFRICA
TEL +27 (12) 481 2842  FAX +27 (12) 481 2843
EMAIL admin@hesa.org.za  WEBSITE www.hesa.org.za
ISBN: 0-9585086-5-8
The copyright of this report is retained by HESA.
LAYOUT & DESIGN: JAM STREET Design
PRINTING: LLOYD GRAY Lithographers
Acknowledgements
This publication was made possible through a grant received from the Carnegie Corporation. The
operations of the National Benchmark Tests Project are supported by HESA’s Matriculation Board.
This support is gratefully acknowledged.
CONTENTS
Acronyms and Abbreviations
Preface
Section 1: The Context of the National Benchmark Tests Project
Section 2: The Nature of Benchmark Tests
Section 3: Test Domains and Constructs
   Overview
   Domain 1 – Academic Literacy & Quantitative Literacy
   Domain 2 – Cognitive Academic Mathematical Proficiency
Appendix 1: Test Development Teams and Reference Group
Appendix 2: Elaboration on the Elements of the Definition of Quantitative Literacy
References
Acronyms and Abbreviations
AARP Alternative Admissions Research Project
AL Academic literacy
ALL Adult Literacy and Lifeskills survey
APA American Psychological Association
BICS Basic Interpersonal Communicative Skills
CALP Cognitive Academic Language Proficiency
CAMP Cognitive Academic Mathematical Proficiency
CAO Central Applications Office
CHED Centre for Higher Education Development
CRTs Criterion-referenced tests
CTP Committee of Technikon Principals
DIF Differential item functioning
DoE (National) Department of Education
EFPA European Federation of Psychologists’ Associations
ETS Educational Testing Service
FET Further Education and Training
HDIs Historically Disadvantaged Institutions
HE Higher Education
HEAIDS Higher Education HIV and AIDS programme
HELM Higher Education Leadership and Management programme
HESA Higher Education South Africa
IELTS International English Language Testing System
ITC International Test Commission
LO Learning Outcome
MB Matriculation Board
MSEB Mathematical Sciences Education Board
NAEP (US) National Assessment of Educational Progress
NBTs National benchmark tests
NBTP National Benchmark Tests Project
NCHE National Commission on Higher Education
NiSHE National Information Service for Higher Education
NLS New Literacy Studies
NQF National Qualifications Framework
NRTs Norm-referenced tests
NSC National Senior Certificate
PISA Programme for International Student Assessment
QA Quality Assurance
QL Quantitative literacy
SADC Southern African Development Community
SAT Scholastic Assessment Test
SATAP Standardised Assessment Tests for Access and Placement
SAUVCA South African Universities Vice-Chancellors Association
SC (current) Senior Certificate
TEEP Test in English for Educational Purposes
TELP Tertiary Education Linkages Project
TIMSS Trends in International Mathematics and Science Study
TOEFL Test of English as a Foreign Language
Preface
The idea of national benchmark tests reaches back to mid-2004. A set of proposals was
discussed and agreed upon by the leadership of public higher education within the broad
vision of building a responsive enrolment system.
Yet the reasons for “benchmarking” entry level proficiencies are often misconstrued or
misunderstood. The purpose of this publication is therefore to describe in some detail
the reasons for and context of the National Benchmark Tests Project, what benchmark tests
entail, and the frameworks that will shape the development of tests in academic and
quantitative literacies, and mathematical proficiency.
As authors observe in this publication, the development of benchmark tests is complex and
will continue to require the support and goodwill of experts and institutions. But the potential
value is undoubtedly enormous and there is every reason to believe that high quality national
benchmark tests can realistically be achieved by 2008.
In the South African context the changing interface between schooling and higher education
demands that both sectors re-examine curricula, approaches to assessment and, indeed, the
degree of fit between the outcomes of schooling and the entry requirements of higher
education (or, for that matter, the world of work).
The project represents an attempt to provide both schooling and higher education with
important information on the competencies of their exiting (in the case of schools) and
entering (in the case of universities) students; information that does not duplicate the essential
information delivered by the school-leaving examination, but that provides an important extra
dimension.
Who gains access and to which levels of study will remain high on the agenda of this society
and its diverse range of higher education institutions – universities, comprehensive universities
and universities of technology. At present approximately one in five school-leavers gains entry;
yet we simultaneously recognise that access must reach beyond mere entry and must also
entail students’ successful engagement with the demands of “higher” studies.
Professor Brian O’Connell
Vice-Chancellor and Principal, University of the Western Cape &
Chair of the HESA Enrolment Steering Committee
Section 1
THE CONTEXT OF THE NATIONAL BENCHMARK TESTS PROJECT
Hanlie Griesel
Higher Education South Africa (HESA)

Overview
The impetus for the development of national
benchmark tests is located in a complex
of policies, a changing schooling-higher
education interface and the realities of a
restructured higher education (HE) landscape
and changing institutional profiles – all factors
which in one way or another impact upon
access and the practices associated with
higher education enrolment.
By mid-2004 the vision for distinct yet
interrelated sector-level enrolment services
was well established. These services entail:
• An information service that sends a clear message to schools, FET colleges, parents and the public on the role of higher education in preparing students for future careers, what is on offer and what is expected at entry levels;
• An admissions service that regulates minimum thresholds for entry into higher education study; and
• An assessment service that benchmarks entry levels in order to inform both admission and placement practices and curriculum responsiveness.
In 2005 a further project was initiated: monitoring systems flow in terms of enrolment,
throughput/retention and graduation, in order to provide accurate information and analyses
on system flow and efficiencies, and to inform whatever strategic sector action may be
deemed necessary.
[Diagram: HESA’s Enrolment Services Programme (Phase 1 = 2005 to 2008). Within the HE Enrolment programme sit four projects and activities: the National Information Service for HE (NiSHE); the current Matriculation Board (MB) and future Minimum Admissions Service; the National Benchmark Tests Project and future Assessment Service (2006 = baseline year); and Monitoring Systems Flow (2005 = baseline year). HELM and HEAIDS are related HESA programmes.]
It is in the context of this broad vision and the specific consequences of a changing schooling-
higher education interface that Vice-Chancellors approved the proposal for the establishment
of the National Benchmark Tests Project (NBTP) in the General Meetings of the then South
African Universities Vice-Chancellors Association (SAUVCA) and the Committee of Technikon
Principals (CTP). Institutions were subsequently invited to nominate assessment specialists in
the areas of academic literacy and numeracy/mathematics to participate in the project. The
first national meeting with institutional representatives was held on 25 November 2004, its
central purpose being to lay the foundations of the project and to build a common understanding
of the task ahead.
An initial core team or strategy group was appointed and the next phase of development
became focused on the development of pilot tests by specialist test development teams (see
Section 2 for a more detailed description). In parallel, the two related sector-level
enrolment services have continued their development in line with what is now known as
HESA’s Enrolment Services Programme. The diagram above illustrates this programme.
The idea of a responsive higher education enrolment system offering a range of services to
students, institutions and the sector reaches back to the access imperatives of the late 1980s
and early 1990s, and the concern with equity and quality of opportunity. Subsequently these
imperatives have found expression in a range of policy documents and planning frameworks,
most notably the National Commission on Higher Education (NCHE, 1996); White Paper 3,
A Programme for the Transformation of Higher Education (July 1997); and the more recent
National Plan for Higher Education (2001).
Two particular policies in the past year have created further impetus for the kinds of services
that make up HESA’s enrolment programme:
• The policy on Minimum Admission Requirements for Higher Certificate, Diploma and Degree Programmes requiring the National Senior Certificate (August 2005); and
• The amended policy on the new schools curriculum for Grades 10-12 and the National Senior Certificate: A Qualification at Level 4 on the NQF (July 2005).
Further, and at a macro level, the enrolment management planning framework
developed by the Department of Education in early 2005 compels higher education
to develop a responsive enrolment system for the sector.
The services
In outline, the nature of the distinct yet interrelated HESA enrolment services can
be captured as follows:
National Information Service for Higher Education – NiSHE
In addressing the dearth of information and career counselling available in the majority
of schools and broader society, NiSHE was established in 2005 with a two-fold strategic
objective:
• To provide information and guidance on the role and requirements of higher education in South Africa to learners at schools, teachers, parents, FET colleges and prospective higher education students; and
• To link higher education programme and qualification pathways to future career directions and possibilities.
In its first year of operation the focus was on materials for Grade 9 as well as Grades 11-12
learners; in 2006 the focus has shifted to the development of a vibrant data-driven website
that will become the source for future materials development.
In addition, a data system will be developed to monitor systems flow in terms of enrolment,
retention and graduation rates.
The Matriculation Board and future Minimum Admissions Service
The Matriculation Board and future Minimum Admissions Service will continue to fulfil the
statutory function of regulating minimum thresholds to first degree study, in terms of current
regulations; and to certificate, diploma and degree study, in terms of the interim transition
policy under development and the new policy on minimum admission requirements that will
come into effect in 2008/9.
Its current services entail:
• Certifying applications for exemption from the matriculation endorsement requirements;
• Benchmarking foreign and SADC qualifications and maintaining international country profiles; and
• Providing advisory services to schools, parents and HE institutions.
It is envisaged that these services will continue with the phasing in of the new school-leaving
exit qualification from 2008 onwards, and in future will also be available to private HE
institutions.
The National Benchmark Tests Project (NBTP) and future
Assessment Service
The purpose of the NBTP is four-fold:
• To assess the entry-level academic and quantitative literacy and mathematics proficiency of students;
• To assess the relationship between entry-level proficiencies and school-level exit outcomes;
• To provide a service to HE institutions requiring additional information in the admission and placement of students; and
• To inform the nature of foundation courses and curriculum responsiveness.
The pilot phase of the project is underway and its management and development have been
outsourced to the Centre for Higher Education Development (CHED) at the University of Cape
Town (see Section 2).
It is anticipated that by early 2007 the project will be able to assess entry level proficiencies
across the HE system, and that by 2008 the test development phase will be completed and
an assessment service can be offered to individuals and institutions to aid the process of fair
and accurate admission and placement decisions.
In summary, the overall goal of the distinct yet interrelated services is clear: to develop
responsiveness to the different challenges entailed in access and the consequences of a
changing schooling-HE interface.
It seems necessary, however, briefly to take stock of the challenges related to access and
entry-level testing.
The challenges
There are at least three assumptions and conditions which underpin the need for
benchmarking entry-level proficiencies for higher education study:
• Where school-leaving results are an inaccurate reflection of the knowledge, skills and applied competencies of students – or an inadequate reflection of their “potential” (i.e. future) intellectual ability – it is necessary to develop additional forms of assessment in order to select and place students accurately and fairly. And in order to be both accurate and fair, test development and the interpretation of results need to be informed by specific constructs, psychometric qualities and standards or benchmarks (see Sections 2 and 3).
• If HE curricula are to be responsive to the needs of a changing profile of students, higher education ought to have a full grasp of the nature of preparedness – and the varying levels of under-preparedness – of entry cohorts, and what this diversity means in terms of the educational tasks of individual institutions and the HE system as a whole.
• Given the variability of school-leaving results and the reality of a new schools curriculum and exit qualification which have as yet not been “benchmarked” against comparable (international and local) qualifications, it seems necessary for higher education to set minimum entry thresholds and to assess levels of proficiency, at least until such time as the implementation of the new curriculum and National Senior Certificate has stabilised.
The first condition has resulted in the development of a range of assessment protocols in the
South African higher education context, specifically over the past two decades. The National
Benchmark Tests Project deliberately sets out to pool this expertise, even though it is recognised
that expertise is by no means evenly spread across the sector. Further, while different
theoretical frameworks and assessment practices have shaped the expertise developed within
the sector, a similar discourse seems to be in use, especially with regard to assessment being
linked to the policy goal of increased participation and broadened access. This apparent
“sameness” obscures important differences, however.
The challenge is therefore both to de-construct our taken-for-granted assumptions about
assessment and access to higher education studies, and to re-construct a common
understanding that will inform the National Benchmark Tests Project.
The second assumption is that we need an accurate assessment of entry levels in order to
inform institutions’ understanding of and response to the nature of entry cohorts, including
the varying levels of “preparedness” that must responsibly be addressed in first year curricula
and foundation courses, in particular. Too often we underestimate the reality of higher
education’s limited window of opportunity to develop the kinds of graduates required by a
21st century world of work. The fact remains – and needs continually to be restated – that
higher education must build on the foundation created by the education and training
opportunities which precede students’ progression into higher education.
The third condition is perhaps the crux of our current problem: if the current Senior
Certificate were a stable index of levels of achievement and proficiency, higher education might
not have needed to develop “alternative” forms of assessment. And if the proposed National
Senior Certificate (NSC) had been “benchmarked” against comparable qualifications, there
might also have been no need for higher education to develop entry-level benchmarks.
While the NSC shows much promise in addressing many of the deficiencies in the
current Senior Certificate (SC), the fact remains that this promise needs to be translated into
practice. Until the first cohort of NSC learners has completed higher education studies, the
predictive validity of this exit qualification remains a promise on paper and not an empirical
reality.
Concluding comment
The task at hand is certainly challenging and its success will continue to require the support
and goodwill of all the experts and institutions involved. Section 2 describes in some detail
the nature of benchmark tests and the steps entailed in test development, while Section 3
focuses on the test domains and constructs which guide item development.
There is no doubt that the end product of the NBTP will be a valuable addition to
the tools currently available to guide admissions decision-making and programme placement
at HE institutions in South Africa. In addition – and given that access to HE studies will remain
high on the agenda of HE institutions – it is anticipated that the NBTP will both inform
HE curricula and teaching and learning practices, as well as provide “second-chance” entry
opportunities to those whose school-leaving results prevent them from gaining access to
higher education study.
There is also no doubt that the NBTP will strengthen HESA’s enrolment services and, at a
national systems level, increase the sector’s responsiveness to the different challenges
entailed in access and the consequences of a changing schooling-HE interface.
Section 2
THE NATURE OF BENCHMARK TESTS
Cheryl Foxcroft
Nelson Mandela Metropolitan University

What is a benchmark test?
In trying to define the concept “benchmark tests”, it is important that the terms “benchmark” and “test” be defined separately before contemplating their joint meaning.
The first question to tackle is, “What is a benchmark?” In educational settings, a benchmark is a point of reference for evaluating and monitoring the adequacy of the achievement and educational development of learners. The following illustrates:
Benchmarks should be thought of as a
collection of references for evaluating the
growth of individual students. Benchmarks
do not put a ceiling on that growth, limit the
growth to a narrow band of intellectual
activities, or suggest that performance at a
lower level means failure. Benchmarks
represent a growth model of learning.
(Larter, 1991: 5)
It is important to note that sets of desired
learning outcomes, such as those found in
the subject statements of the new schools
curriculum for Grades 10-12, are sometimes
referred to as “content” standards. This is
because the outcomes are related to the
typical educational progress through a
content domain (learning area or subject).
However, such outcomes do not provide
performance standards (benchmarks) as
they do not specify what the minimum
expected levels of achievement should be by
the time that the learner reaches a particular
grade level. The subject statements provide a range of learning outcomes that could be
achieved at differing levels of complexity. Some learners could
achieve the first level of complexity of an outcome, while others could achieve more advanced
levels of an outcome.
In contrast, performance standards (benchmarks) provide us with the expected level of
attainment of the learning outcomes that all learners should reach by certain grade
levels. An example of the benchmarks set for Science Literacy (with respect to the Structure
of Matter) is provided in Table 1 below.
Table 1
The Benchmarks for Science Literacy (Structure of Matter) Outcomes
Expected at Particular Grade Levels [1]
End of 12th grade (all students should know that):
• Atoms are made of a positive nucleus surrounded by negative electrons. An atom’s electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
• The nucleus, a tiny fraction of the volume of an atom, is composed of protons and neutrons, each almost two thousand times heavier than an electron. The number of positive protons in the nucleus determines what an atom’s electron configuration can be and so defines the element. In a neutral atom, the number of electrons equals the number of protons. But an atom may acquire an unbalanced charge by gaining or losing electrons.
• Neutrons have a mass that is nearly identical to that of protons, but neutrons have no electric charge. Although neutrons have little effect on how an atom interacts with others, they do affect the mass and stability of the nucleus. Isotopes of the same element have the same number of protons (and therefore of electrons) but differ in the number of neutrons.

End of 8th grade (all students should know that):
• All matter is made up of atoms, which are far too small to see directly through a microscope. The atoms of any element are alike but are different from atoms of other elements. Atoms may stick together in well-defined molecules or may be packed together in large arrays. Different arrangements of atoms into groups compose all substances.
• Equal volumes of different substances usually have different weights.
• Atoms and molecules are perpetually in motion. Increased temperature means greater average energy of motion, so most substances expand when heated. In solids, the atoms are closely locked in position and can only vibrate. In liquids, the atoms or molecules have higher energy, are more loosely connected, and can slide past one another; some molecules may get enough energy to escape into a gas. In gases, the atoms or molecules have still more energy and are free of one another except during occasional collisions.

End of 5th grade (all students should know that):
• Heating and cooling cause changes in the properties of materials. Many kinds of changes occur faster under hotter conditions.
• No matter how parts of an object are assembled, the weight of the whole object made is always the sum of the parts; and when a thing is broken into parts, the parts have the same total weight as the original thing.
• Materials may be composed of parts that are too small to be seen without magnification.
• When a new material is made by combining two or more materials, it has properties that are different from the original materials. For that reason, a lot of different materials can be made from a small number of basic kinds of materials.

End of 2nd grade (all students should know that):
• Objects can be described in terms of the materials they are made of (clay, cloth, paper, etc.) and their physical properties (color, size, shape, weight, texture, flexibility, etc.).
• Things can be done to materials to change some of their properties, but not all materials respond in the same way to what is done to them.

[1] The benchmarks were developed by the American Association for the Advancement of Science (1995: XI) and represent statements of what all learners should know and be able to do in various aspects of science by the end of Grades 2, 5, 8, and 12.
Masters and Forster note that benchmarks can be either comparative or absolute (1996a:
49):
• Comparative benchmarks (or norm-referenced benchmarks) are set with reference to the achievements of others (e.g., to the performance of learners in previous years, in different provinces, or in other countries).
• Absolute benchmarks, or performance standards, are set at a desired level of performance on a criterion (or content domain) for a specific purpose. For example, in educational settings, absolute benchmarks or standards are set in terms of the minimum level of knowledge and skill required of learners in Grade 9, Grade 12, or at entry to higher education studies.
With the above delineation of “benchmark” in place, the second concept that needs defining
is that of a “test”. Broadly speaking, in educational assessment a test provides a sample
of behaviour or of a content domain (cf. Foxcroft & Roodt, 2005). From this sample, inferences
are made regarding the level of performance of an individual or a group. A test is usually
administered under standardised (controlled) conditions and systematic procedures are used
to score it and to interpret test performance.
There are various types of tests. Tests inter alia vary according to what they assess
(e.g., intelligence, personality, achievement in mathematics), the assessment mode used
(e.g., paper-based, computer-based, performance-based), whether they are administered
individually or in a group context, and how they are interpreted (e.g., scores are compared
to a norm group or are interpreted with respect to the level achieved on the criterion of
interest). With reference to the latter aspect, a discussion on the use of norm-referenced and
criterion-referenced tests (CRTs) when assessing educational achievement is pertinent when
trying to conceptualise benchmark tests.
In norm-referenced tests (NRTs), test scores are compared to those of
a reference or norm group (e.g., age, grade or gender groups) and performance is interpreted
as being below average, average, above average, etc. with respect to the norm group. The
use of norm-referenced tests in achievement testing has been increasingly criticised on the
basis that they are biased against culturally different and English Second Language or
bilingual learners and that they provide little information that can guide the facilitation of
learning and educational programme planning and development (Stratton & Grindler, 1990).
In educational testing, criterion-referenced tests (CRTs) are constructed to
provide information about the level of a test-taker’s performance in relation to a clearly
defined domain of content and/or behaviours (e.g., reading, writing, mathematics) that requires
mastery. These criteria are usually stated as performance objectives (learning outcomes).
Test scores are associated with performance categories such as “developing”,
“expanding”, and “advanced”, each containing a thick (detailed) description of what learners
whose performance falls in the category know and can do. Such performance information can
be used to individually-tailor learning programmes, for example. Table 2 contains the
performance categories and the corresponding scores on the reading achievement test of the
US National Assessment of Educational Progress (NAEP). Positions along the performance
continuum are indicated by numbers that range from 0 to 500, which are divided into five
levels (performance categories), namely, Rudimentary (0-150), Basic (151-200),
Intermediate (201-250), Adept (251-300), and Advanced (301-500).
Table 2
Performance Continuum and Categories for the NAEP Reading
Achievement Test [2]
Advanced (test score range: 301-500)
Readers who use advanced reading skills and strategies can extend and restructure the ideas
presented in specialized and complex texts. Examples include scientific materials, literary
essays, historical documents, and materials similar to those found in professional and technical
working environments. They are also able to understand the links between ideas even when
those links are not explicitly stated and to make appropriate generalizations even when the texts
lack clear introductions or explanations. Performance at this level suggests the
ability to synthesize and learn from specialized reading materials.
Adept (test score range: 251-300)
Readers with adept reading comprehension skills and strategies can understand complicated
literary and informational passages, including materials about topics they study at school. They
can also analyze and integrate less familiar material and provide reactions to and explanations
of the text as a whole. Performance at this level suggests the ability to find,
understand, summarize, and explain relatively complicated information.
Intermediate (test score range: 201-250)
Readers with the ability to use intermediate skills and strategies can search for, locate and
organize the information that they find in relatively lengthy passages and can
recognize paraphrases of what they have read. They can also make inferences and
reach generalizations about main ideas and author’s purpose from passages dealing with
literature, science, and social studies. Performance at this level suggests the
ability to search for specific information, interrelate ideas, and make
generalizations.
Basic (test score range: 151-200)
Readers who have learned basic comprehension skills and strategies can locate and
identify facts from simple informational paragraphs, stories, and news articles. In
addition, they can combine ideas and make inferences based on short, uncomplicated passages.
Performance at this level suggests the ability to understand specific or
sequentially related information.
Rudimentary (test score range: 0-150)
Readers who have acquired rudimentary reading skills and strategies can follow brief written
directions. They can also select words, phrases, or sentences to describe a simple picture and
can interpret simple written clues to identify a common object. Performance at this level
suggests the ability to carry out simple, discrete reading tasks.

[2] Adapted from Masters & Forster, 1996b: 66.
In addition to performance categories (levels) along a continuum, minimum
performance standards (benchmarks) can be set on the performance
continuum and the test score scale, so that test-takers who obtain scores below this standard
are considered to have a definite weakness requiring intervention, as they have not
achieved the expected minimum level of proficiency.
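By way of illustration only – this is not an NBTP procedure or scale – the following Python sketch shows how scores on a 0-500 performance continuum of the kind described above can be mapped to performance categories and checked against a minimum performance standard. The category boundaries are those of the NAEP example in Table 2; the benchmark value of 201 is a purely hypothetical choice.

# A minimal sketch: classify a score on a 0-500 continuum into NAEP-style
# performance categories and flag scores below a minimum performance standard.
# The benchmark of 201 is illustrative, not an NBTP or NAEP figure.

NAEP_CATEGORIES = [            # (lower bound, upper bound, label)
    (0, 150, "Rudimentary"),
    (151, 200, "Basic"),
    (201, 250, "Intermediate"),
    (251, 300, "Adept"),
    (301, 500, "Advanced"),
]

ILLUSTRATIVE_BENCHMARK = 201   # hypothetical minimum performance standard


def classify(score: int) -> tuple[str, bool]:
    """Return (performance category, meets-benchmark flag) for a score."""
    for low, high, label in NAEP_CATEGORIES:
        if low <= score <= high:
            return label, score >= ILLUSTRATIVE_BENCHMARK
    raise ValueError(f"score {score} lies outside the 0-500 continuum")


print(classify(187))  # ('Basic', False): below the illustrative benchmark
print(classify(263))  # ('Adept', True): meets the illustrative benchmark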
When the aim is to benchmark test performance in a content domain against a point along a
performance continuum, CRTs are considered to be more appropriate than norm-referenced
tests.
By synthesising what a benchmark and a criterion-referenced test are, the following
working definition of a benchmark test can be derived:
Benchmark tests assess performance with respect to learning outcomes (content
standards) in a specific content domain (subject, learning area) along a continuum on which the
expected level of minimum proficiency (benchmarks/performance standards) has been set for a
specific purpose (e.g., entry into higher education).
Steps in developing benchmark tests
As criterion-referenced tests (CRTs) are more appropriate to use when setting benchmarks,
the description of the steps in developing a benchmark test is closely aligned with the
development of CRTs. [3]

[3] The brief description provided draws extensively on the work of Hambleton & Zenisky (2003: 377-404). For a more comprehensive discussion, readers are referred to these authors.
The steps in developing and researching benchmark tests can be divided into three broad
phases, namely:
1. Planning;
2. Field testing, psychometric evaluation and standard setting; and
3. Finalisation of the test and ongoing evaluation.
Below follows a brief outline of specific steps in each phase with attention to how each step
will be implemented in developing and researching the NBTs.
1. Planning Phase
Test development – Initial planning:
1. Specify the purpose of the test and the domains and/or behaviours of interest. (See Section 3 for the purpose of the NBTs and the domains to be tapped.)
2. Target group: the NBTs will be administered to learners entering HE.
3. Estimate of test length: approximately 3 hours.
4. Available expertise: has been and continues to be scoped.
5. Financial resources: a budget has been drawn up for the NBTs and some funding has been secured.
6. Schedule: a schedule has been compiled to complete the development and validation of the NBTs by 2008.
Research: Review research related to the development of CRTs to guide the NBT test development process.

Test development – Review each content domain:
This entails reviewing and finalising the description of the content domain and performance categories (objectives/outcomes) to be included in the NBTs (see Section 3), as well as preparing test specifications for each content domain and reviewing them for completeness, accuracy, clarity and practicality.
Research: Undertake literature scoping of the content domains, performance categories and test specifications. Obtain input from experts regarding the content domains, performance categories and test specifications.
2. Field Testing, Psychometric Evaluation and Standard Setting Phase
Test development – Item writing and development of scoring rubrics:
A sufficient number of items need to be drafted for field testing; draft items must be edited and scoring rubrics need to be developed.
Research: Review various item types that could be used. Identify potential item banks.

Test development – Assessment of content validity:
Identify a pool of content and measurement experts. The experts review the items to determine whether they match the content standards, their representativeness, their cultural appropriateness and freedom from potential bias, and their technical adequacy.
Research: Expert panels review each item to see if it meets the content standard and to classify it into a performance category. Expert panels review each item for appropriateness of language level and for potential gender and cultural bias.

Test development – Revise test items:
Based on the input of the experts, revise the test items (or delete them) and write additional items, if necessary.
Research: Review and evaluate each revised item for content, linguistic, cultural and gender appropriateness.

Test development – Field test the experimental items:
Administer items to appropriately selected groups of test-takers. Conduct item analysis and perform item bias studies to identify differential item functioning (DIF).
Research: Item analysis; DIF analysis.

Test development – Revise test items and establish item bank:
Using the results from the field testing, revise or delete items where necessary. A final item bank should then be established. Testlets should be identified in the item bank.
Research: Review and evaluate each revised item for content, linguistic, cultural and gender appropriateness. Compile information on item characteristics for each item in the item bank.
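To make the item analysis and DIF analysis steps more concrete, here is a minimal Python/NumPy sketch of the classical statistics typically computed at this point: an item difficulty index (proportion correct), a discrimination index (corrected item-total correlation), and a Mantel-Haenszel common odds ratio as a simple DIF indicator. The simulated data, the group split and the flagging thresholds are illustrative assumptions, not NBTP specifications.

import numpy as np

def item_analysis(responses: np.ndarray):
    """Classical item statistics for a 0/1 response matrix (rows = test-takers).

    Difficulty is the proportion correct; discrimination is the correlation
    between an item and the total score on the remaining items."""
    n_items = responses.shape[1]
    difficulty = responses.mean(axis=0)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest_score = responses.sum(axis=1) - responses[:, j]
        discrimination[j] = np.corrcoef(responses[:, j], rest_score)[0, 1]
    return difficulty, discrimination

def mantel_haenszel_odds_ratio(item, group, total):
    """Mantel-Haenszel common odds ratio for one item, stratified by total score.

    group: 1 = reference group, 0 = focal group. Values far from 1.0 suggest
    differential item functioning (DIF) and mark the item for expert review."""
    num = den = 0.0
    for k in np.unique(total):
        stratum = total == k
        n_k = stratum.sum()
        r_ref = np.sum(stratum & (group == 1) & (item == 1))
        w_ref = np.sum(stratum & (group == 1) & (item == 0))
        r_foc = np.sum(stratum & (group == 0) & (item == 1))
        w_foc = np.sum(stratum & (group == 0) & (item == 0))
        num += r_ref * w_foc / n_k
        den += r_foc * w_ref / n_k
    return num / den if den else float("nan")

# Simulated field test: 600 test-takers, 20 items driven by a common ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=(600, 1))
data = (ability + rng.normal(size=(600, 20)) > rng.normal(scale=0.5, size=20)).astype(int)
group = rng.integers(0, 2, 600)
totals = data.sum(axis=1)
p, r = item_analysis(data)
for j in range(data.shape[1]):
    odds = mantel_haenszel_odds_ratio(data[:, j], group, totals)
    flag = " REVIEW" if p[j] < 0.2 or p[j] > 0.9 or abs(np.log(odds)) > 0.5 else ""
    print(f"item {j:2d}: difficulty={p[j]:.2f} "
          f"discrimination={r[j]:+.2f} MH-odds={odds:.2f}{flag}")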
Test development – Compile and administer a pilot test:
Decide on the number of items per content standard and the length of the test. Select test items and testlets from the item bank. Include anchor test items if the test is to be linked to a previous test or tests. Prepare the test booklet, practice examples, answer sheets, scoring keys, etc. Administer the test to appropriate samples.

Test development – Explore psychometric properties:
The following needs to be established:
• Validity (content and criterion-referenced especially);
• Reliability (internal consistency); and
• Construct equivalence for, inter alia, different language, cultural and gender groups.
Research: Evaluate the validity and reliability of the NBTs, focusing especially on their cross-cultural and cross-linguistic appropriateness and validity. Guidelines prepared by the International Test Commission (ITC), the American Psychological Association (APA), the European Federation of Psychologists’ Associations (EFPA), and the Professional Board for Psychology will be used to guide and evaluate the research into the psychometric properties and quality of the NBTs.

Test development – Finalise and verify performance standards (benchmarks):
Develop a process to determine the performance standards that separate test-takers into performance categories. Compile procedural, internal and external validity evidence to support the performance standards (Cizek, 2001). Specify factors that may affect the performance standards when they are applied to test-takers with special needs (i.e., alternative administration procedures to accommodate test-takers and the concomitant alternative test score interpretations).
Research: Evaluate and verify the performance standards and benchmarks set against the actual performance of learners in first-year and subsequent years of study. Suggest adjustments to performance categories, item placement and benchmarks on the basis of verification research. Enrich the descriptions of performance categories on the basis of the verification data collected.
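The internal-consistency component of the reliability work listed above can be illustrated with a short sketch. The Python snippet below computes Cronbach's alpha, a standard internal-consistency coefficient, on a simulated test-taker-by-item score matrix; the data are invented, and the expectation of a high coefficient reflects the simulation, not an NBTP target.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a score matrix (rows = test-takers, columns = items)."""
    k = scores.shape[1]
    item_variance_sum = scores.var(axis=0, ddof=1).sum()
    total_score_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

# Simulated data: 300 test-takers, 25 dichotomous items sharing a common
# ability factor, so items correlate positively and alpha comes out high.
rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
scores = (ability + rng.normal(scale=1.2, size=(300, 25)) > 0).astype(int)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")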
3. Finalisation of Test and Ongoing Evaluation Phase
Test development – Finalise the test administration procedures through pilot testing.
Research: Collect data on aspects of the test administration procedure which are problematic, and on whether the language used in the instructions poses any difficulties.

Test development – Prepare a test manual for administrators as well as a manual containing technical and psychometric information.
Research: Ensure that the technical manual clearly documents the test development process and provides sufficient information on the psychometric properties of the test.

Test development – Ongoing collection of psychometric information and the equating of tests:
If a different test is to be compiled each year, it is important that each new test is statistically linked or equated to tests administered previously so that scores are comparable across the tests. This will ensure that the previously established performance standards can be used with new tests and that any growth or change can be identified. Equating usually involves using “anchor test items” and then “statistically equating” new tests to those given previously (Hambleton & Zenisky, 2003: 384). Anchor (common) items are items that were administered in previous tests and are included in the new test. Usually, anchor items are chosen to match the content of the tests being equated, and to be of comparable difficulty. If there are 10 to 15 anchor (common) test items and 1,000 or more test-takers, this is sufficient to statistically equate two tests, although the larger the number of test-takers and anchor/common items, the better (Hambleton & Zenisky, 2003). As these authors express it, ‘When tests are statistically equated, fairness can be achieved, the same performance standards can be used over time, and progress in achievement over time can be monitored’ (2003: 384).
Research: Conduct reliability and validity studies on an ongoing basis. Statistically equate the tests used in any one intake-year and also from year to year. This will entail ensuring that anchor items are included in the tests to be equated and then performing the necessary statistical procedures to establish whether the tests are equivalent.
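By way of illustration, the sketch below implements chained linear equating, one of the simpler anchor-item equating methods (and not necessarily the procedure the NBTP will adopt): new-form scores are first linked to the anchor-item scale within the current intake, and the anchor scale is then linked to old-form scores within the previous intake. All data here are simulated, and operational equating designs are considerably more elaborate.

import numpy as np

def chained_linear_equate(x, x_scores, anchor_new, anchor_old, y_scores):
    """Chained linear equating of new-form scores onto the old form's scale.

    Step 1 links new-form scores to the anchor scale using the new intake;
    step 2 links the anchor scale to old-form scores using the old intake."""
    def link(values, from_scores, to_scores):
        z = (values - from_scores.mean()) / from_scores.std(ddof=1)
        return to_scores.mean() + to_scores.std(ddof=1) * z
    return link(link(x, x_scores, anchor_new), anchor_old, y_scores)

# Simulated administrations: each intake writes its own form, which embeds
# the same 15 anchor items; this year's intake is slightly weaker overall.
rng = np.random.default_rng(2)
ability_old = rng.normal(0.2, 1.0, 1000)                  # last year's intake
ability_new = rng.normal(0.0, 1.0, 1000)                  # this year's intake
anchor_old = 9 + 2.5 * ability_old + rng.normal(0, 1, 1000)
anchor_new = 9 + 2.5 * ability_new + rng.normal(0, 1, 1000)
y_scores = 55 + 12 * ability_old + rng.normal(0, 3, 1000)   # old-form totals
x_scores = 52 + 11 * ability_new + rng.normal(0, 3, 1000)   # new-form totals

equated = chained_linear_equate(x_scores, x_scores, anchor_new, anchor_old, y_scores)
print(f"raw new-form mean {x_scores.mean():.1f} -> "
      f"mean on the old form's scale {equated.mean():.1f}")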
Managing the National Benchmark Tests Project (NBTP)
Three major tasks will constitute the NBT Project:
1. Test development;
2. Research; and
3. Project management.
The first two tasks were extensively discussed in the section above. Consequently, the only
aspect that still needs elaboration is the way in which the NBT Project will be managed.
HESA’s Executive Office has outsourced the management of the NBT Project to the Centre for
Higher Education Development (CHED) at the University of Cape Town.
In the short- to medium-term the project will be managed by a Strategy Group made up of
the following individuals:
• Professor Nan Yeld, NBT Project Director;
• Professor Cheryl Foxcroft, Research and QA Coordinator;
• Leaders of the test development teams – Drs Alan Cliff (verbal reasoning), Robert Prince (quantitative literacy) and Kwena Masha, in collaboration with Carol Bohlmann and Max Braun (cognitive academic mathematical proficiency); and
• George van der Ross, Project Manager.
The leaders of the specialist task teams have, through a rigorous and transparent process,
recruited test development experts to participate in test development. The process of test
development and research will continue to involve both expert-driven small development and
review teams and a larger reference group drawn from a range of institutional expertise (see
Appendix 1).
The Strategy Group will include liaison with the HESA Executive Office and the Matriculation
Board through the following individuals:
• Hanlie Griesel, with regard to the framework and approach of the NBTP, its accountability to HESA structures and the synergies which need to be developed with HESA’s enrolment services (see the diagram in Section 1); and
• Cobus Lötter, in terms of budgetary provisions and fund-raising activity.
In terms of sector-level governance, the NBTP will be accountable to the HESA Enrolment
Steering Committee, which is responsible for the overall strategic direction and alignment of the
four distinct yet interrelated enrolment services. Besides the NBTP and future Assessment
Service, these are:
• An information service – the current National Information Service for Higher Education (NiSHE), which may in future extend to include a central applications service;
• A minimum admissions regulation service – the current Matriculation Board and future Minimum Admissions Service; and
• HESA’s project on monitoring systems flow, focused on enrolment, retention and graduation trends.
It is clearly important that the NBTP is guided by a governance model that is in line with the
vision and functions of a coherent HE enrolment system, as well as the strategic plan of HESA.
In addition, the outcomes of the NBTP will be of direct interest to the newly established
Admissions Committee of HESA, responsible for regulating minimum admissions requirements
and preparing the higher education sector for system readiness in 2008 when the current
Senior Certificate is replaced by the new National Senior Certificate.
Concluding comment
The task of developing national benchmark tests is by no means simple and it will
take a while for the tests and benchmarks (standards) to be developed. However, there is a
large pool of international and national knowledge and literature that can be drawn on in
terms of the steps that need to be followed when developing criterion-referenced national
tests, verifying their psychometric properties, and setting minimum performance standards
(benchmarks). Furthermore, a pool of test development experts has emerged across HE
institutions in South Africa over the last decade or more. Consequently, there is every reason
to believe that the complex task of developing high quality national benchmark tests by 2008
can realistically be achieved.
Section 3 focuses on the test domains and constructs that the NBTs will tap. These test
domains and constructs will guide item development and the delineation of
performance standards.
Section 3
TEST DOMAINS AND CONSTRUCTS

Overview
Nan Yeld
Centre for Higher Education Development, University of Cape Town
The testing of academic literacy (verbal
reasoning and quantitative reasoning or
numeracy) for admissions and placement
purposes has a long history, in South Africa
as well as internationally. The critical thinking
skills in these domains are widely believed to
be central to academic success in higher
education study, irrespective of area of
study. Similarly, ability and knowledge in
mathematics are generally held to be essential
for satisfactory progress in areas of study
requiring a sound mathematical and
quantitative foundation. The domains are
fully explored below.
Examples of international testing initiatives
widely used for benchmarking of school-
leaving examinations, or for direct admission
to institutions here and elsewhere in the
world, include:
• the SAT I Reasoning Test developed by the Educational Testing Service (ETS) in the United States, which incorporates critical reading, mathematics and writing;
• the Queensland Core Skills Test scheme in Australia;
• the ETS Test of English as a Foreign Language (TOEFL);
• the University of Cambridge Local Examinations Syndicate’s International English Language Testing System (IELTS); and
• the Test in English for Educational Purposes (TEEP) in the United Kingdom.
In South Africa, two system-level projects have developed and implemented testing schemes,
based on academic literacy, numeracy and (in some cases) scientific reasoning: [4]
The Alternative Admissions Research Project (AARP) was established in the late 1980s at
the University of Cape Town.
AARP’s goals are, broadly speaking, to provide a student selection and placement testing
service, and to contribute to the development of national policies relating to admissions. From
its small beginnings, AARP has grown dramatically – in the number of students writing the
tests, the number of institutions participating, and their reasons for doing so. Testing is
undertaken for a number of institutions and, in recent years, its activities have included the
development of the Health Sciences consortium, which is responsible for a national entrance
examination (in language, mathematics and scientific reasoning) written by almost all
applicants wishing to enter the Health Sciences professions.
The second major local testing initiative, the Assessment Project (TELP II), was initiated
in November 1998.
The brief from the Tertiary Education Linkages Project (TELP) was to develop three diagnostic
tests, in the areas of English Language (academic literacy), Mathematics and Science, which
could be used to identify students at risk in the Historically Disadvantaged Institutions (HDIs)
and provide a basis for the development of appropriate courses and curricula. In the 1990s,
a further, complementary aim and use of the tests arose: to identify talented students whose
Senior Certificate results did not make them eligible for selection to higher education studies.
Although funding for the TELP project officially came to an end in 2002, the project has since
continued on limited funding and pro-bono commitment of time and resources from AARP
staff, working with staff at the former TELP institutions, along with commitment from the
institutions themselves to print, administer and mark the tests. It is now known as the
Standardised Assessment Tests for Access and Placement (SATAP) Project.
Together, the two projects process about 78,196 scripts annually, representing
approximately 26,734 writers.
The deliberate intention of the NBTP is to build on the expertise developed in these two
national system-level testing initiatives – as well as on expertise developed in institution-
specific assessment practices – in the development of entry-level benchmarks in similar
domains; i.e. academic and quantitative literacy and mathematics proficiency.
In conclusion, it is important briefly to comment on the language of the tests under
development. The decision has been reached that in the development phase of the NBTP, the
language of the tests will initially be English. Although less than 10% of the South African
population speaks English as a first language, it is the medium of instruction in the majority
of the country’s secondary schools and higher education institutions. This depiction of the
status quo should not, however, be taken to imply uncritical support for the hegemony of
English in formal education in South Africa. Neither is there any reason why the tests should
not be translated into Afrikaans where this is an institution’s medium of instruction or, for that
matter, into any one of the other official South African languages that may, in future, become
the medium of instruction at a particular institution.
[4] In addition, several regional and institution-specific initiatives were developed that, in various forms,
created access opportunities for a wider group of students than could be identified through total
reliance on the “matric” examination. Prominent amongst these were the Teach-Test-Teach (TTT)
programme at the University of Natal and the University Foundation Year (UNIFY) project at the then
University of the North. Expertise from these and other projects is contributing crucially to the NBTP.
Domain 1 – Academic Literacy & Quantitative Literacy
Academic Literacy: Alan Cliff and Nan Yeld
Quantitative Literacy: Vera Frith and Robert Prince
Centre for Higher Education Development, University of Cape Town
Academic Literacy
A national benchmark test in academic literacy needs to address the following central
question:
What are the core academic literacy competencies that an entry-level student should demonstrate
that will be sufficient indication that s/he will be able to cope with the typical demands of higher
education in the medium-of-instruction of an institution, in a context of appropriate teaching, learning
and curriculum support?
This question foregrounds certain key elements that should be considered in developing an
academic literacy benchmark test:
• It suggests that there should be an identification of what exactly is meant by “core” academic literacy competencies, i.e. what are the key language and thinking competencies and approaches that should be assessed? This question is explored further on in this section.
• There is an important focus on “entry-level” competencies, i.e. there should be careful debate about how entry-level competencies might differ from exit-level (or graduate-level) outcomes and how it is the former which should be assessed in a benchmark test. It should also be noted that the current prevalent view that a large proportion of South Africa’s school leavers do not in fact possess the required competencies should not be allowed to influence the benchmark levels: rather, until schooling improves, the onus needs to be on higher education to provide educational support so that students develop to meet these levels.
• The question of what exactly constitutes “sufficient” indication of academic literacy competence also needs debate. In the context of higher education in South Africa, “sufficient” indication of academic literacy may depend at least upon 1) the level of qualification (e.g. certificate, diploma or degree) for which the student is studying; and 2) the extent of curriculum provision and support (e.g. foundation, mainstream) provided for that programme.
The medium-of-instruction issue is also of key importance: an academic literacy test should
aim to assess a student’s competence in dealing with the language demands of medium-of-
instruction, not first language per se – insofar as medium of instruction and first language are
separable.
What is academic literacy?
The view of academic literacy put forward here represents a focus on students’ capacities to
engage successfully with the demands of academic study in the medium of instruction of the
particular study environment. In this sense, success is constituted of the interplay between
the language (medium-of-instruction) and the academic demands (typical tasks required in
higher education) placed upon students. Perhaps it is successful negotiation of this interplay
ACCESS AND ENTRY LEVEL BENCHMARKS The National Benchmark Tests Project
19___
that might reasonably be regarded as the notion of “academic literacy”. A successful student
in this sense, then, is one who is able to negotiate the demands of academic study in a higher
education context, in the medium-of-instruction of that context, and eventually graduate with
a meaningful qualification. “Success” is here constituted as the student having perceived the
nature of the language and thinking demands placed upon him or her and having made
appropriate responses to those demands.
So, what are the language and thinking demands of the academic context that a student is
expected to negotiate, and negotiate successfully – i.e. what is the target language use
domain?
Bachman and Palmer (1996: 44) define this domain as “a set of specific language use tasks
that the test taker is likely to encounter outside of the test itself and about which we want
our inferences about language ability to generalise”. Clearly, these tasks pose demands that
are complex and multi-dimensional, and difficult to assess.
Bachman and Palmer’s research, which occupies a central place in the field of language
assessment, highlights various language and thinking approaches associated with successful
higher education study engagement. Of particular interest here is their focus on the knowledge
and understanding of the organisational and functional aspects of the language of instruction.
Successful students, by implication, are those who are able to negotiate the grammatical and
textual structure of the language of instruction and to understand its functional and
sociolinguistic bases. In a higher education context, what this translates to is that successful
students are able to:
1. negotiate meaning at word, sentence, paragraph and whole-text level;
2. understand discourse and argument structure and the text “signals” that underlie this
structure;
3. extrapolate and draw inferences beyond what has been stated in text;
4. separate essential from non-essential and super-ordinate from sub-ordinate information;
5. understand and interpret visually encoded information, such as graphs, diagrams and
flow-charts;
6. understand and manipulate numerical information;
7. understand the importance and authority of “own voice”;
8. understand and encode the metaphorical, non-literal and idiomatic bases of language; and
9. negotiate and analyse text genre.
Cummins (e.g. 2000, 1984, 1980) proposes two different conceptions of language proficiency
that are useful in this context. He was concerned to understand why students who appear
fluent in a language frequently experience difficulties when that language is used as the
medium of instruction. That is, he was interested in the abilities lying behind the successful
deployment of language in academic settings and whether, and in what ways, these were
different from the abilities underlying the use of language in non-school settings. It was
in this context that he argued for language proficiency to be defined in a way that could be
related to academic performance. In suggesting that academic success requires using and
understanding language in context-reduced situations, he drew a distinction between the use
of language in context-reduced as opposed to context-embedded situations.
Cummins’s argument was that tests arising from communicative competence theories of
language use would tend to tap only one dimension of the language abilities required to
function effectively in formal schooling. He called this dimension basic interpersonal commu-
nicative skills (BICS), and contrasted it with that of cognitive academic language proficiency
(CALP) which was intended to capture the kinds of language ability needed to function
effectively in schooling. As Cummins and Swain (1986: 151) point out, it is “... necessary to
distinguish between the processing of language in informal everyday situations and the
language processing required in most academic situations”.
The situations being referred to here are typically decontextualised and found in formal
schooling contexts, and some of the processing demands arise from the absence, in many
academic situations, of the normal supports found in conversation (e.g. nods, interpolations,
gestures), and the need to function as both audience and speaker (Bereiter & Scardamalia,
1982). More specifically, decontextualised language use refers to “... language used in ways
that eschew reliance on shared social and physical context in favour of reliance on a context
created through the language itself” (Snow, Cancino, De Temple & Schley, 1991). It can be
defined as requiring:
- “... the linguistic skills prerequisite to giving, deleting, and establishing relationships among the right bits of information” (Snow 1987: 6), and control of “... the complex syntax necessary to integrate and explicate relations among bits of information, and maintaining cohesion and coherence” (op. cit.: 7);
- proficiency in processing text irrespective of mode (i.e. spoken or written) or medium (e.g. books, journals, visual material, electronic forms), where meaning is supported by linguistic rather than paralinguistic cues (Tannen 1985; Cummins 1980, 1984; Wells 1981); and
- communication, irrespective of mode or medium, where the emphasis is on the message rather than the act of communication (Tannen 1985; Wells 1981; Arena 1975).
The first of Cummins’s two conceptions of language proficiency relates to the concept of
communicative competence, which is firmly embedded in linguistic and sociolinguistic theory.
In this view, it is the act of communication that is prioritised, rather than the message. The
communicative competence movement, which arose in reaction to a view of language as the
sum of numerous discrete elements, understandably foregrounded interaction, context, and
authenticity in human communication. The role of world knowledge and other relatively
non-linguistic cognitive factors was not, however, adequately addressed. It is this lack that
the second of Cummins’s proposals attempts to address and, in doing so, it draws on
psychological theory for its insights, rather than linguistics or sociolinguistics. It is based on
“...an analysis of the requirements of language tasks with respect to two dimensions: the degree to
which the language task is supported by non-linguistic contextual cues, and the degree of cognitive
effort involved in task performance” (Cummins & Swain 1986: 205).
Language proficiency is thus conceptualised along two continua: 1) degree of contextual
support; and 2) cognitive complexity. (A casual face-to-face conversation, for instance, is
context-embedded and cognitively undemanding, whereas writing an academic essay is
context-reduced and cognitively demanding.) The use of two continua rather than one represents an
attempt to avoid the oversimplifying effects of dichotomising constructs into two categories,
as well as to represent more adequately the two kinds of task demand characteristics
identified by Cummins. It also explicitly links language proficiency to cognitive theories of
knowing and learning.
The first continuum relates to the degree of contextual support available for a task, and the
second to the degree of cognitive effort required by tasks. On neither continuum does a
particular task or situation have a predetermined place.
The degree of effort (i.e. how difficult the task is for the individual) is more closely related,
for an individual, to the individual’s degree of mastery of the linguistic tools necessary for the
task, than to some inherent quality of the task itself. These linguistic tools include, following
the Bachman (1990) and Bachman and Palmer (1996) models outlined below:
a. topical knowledge (e.g. the individual’s knowledge, broadly defined, about the topic at hand);
b. language knowledge (which includes the following categories – grammatical, textual, functional
and sociolinguistic). Interacting with and directing these components are
c. strategic competence, or metacognitive strategy use (the effectiveness of the individual at
planning, monitoring, and modifying the language required and used by the test task); and
d. the individual’s affective schemata, which affect the way in which tasks are approached and
undertaken.
Returning to Cummins’s model: as the degree of mastery of the linguistic tools necessary for
the task increases, the degree of cognitive effort required for the task decreases, and the
same task assumes a different place on the continuum (it moves up towards the undemanding
end). This, of course, raises the question of how mastery of the linguistic tools can be
acquired: in this respect Cummins (2000: 71) suggests that “... language and content will be
acquired most successfully when students are challenged cognitively but provided with the
contextual and linguistic support or scaffolds required for successful task completion”.
It is with this second conception of language proficiency that connections with academic
performance are often made. In particular, the linking of linguistic tools to task performance
rather than to the more general notion of communicative competence makes a good deal of
sense in contexts of widespread educational disadvantage. Poor educational systems do not,
in general, provide appropriate opportunities for the development of task-related academic
language skills. In this connection, the situation of educationally disadvantaged South African
students is particularly difficult, and it is important to note that Cummins’s framework,
developed as it was in a very different context (Canadian secondary schooling), assumes that
learners will already have developed “task-relevant” linguistic skills in their first language.
Nevertheless, the two-continua model provides a “useful conceptual heuristic” (Yeld 2001:
161) for test development, reminding test developers about the numerous variables that can
impact on performance.
An important additional set of insights comes from New Literacy Studies (NLS) theorists and
practitioners who have as a fundamental tenet that literacy is a set of practices located (or
situated) in specific contexts – indeed, more recent NLS thinking holds that “... meaning and
context are mutually constitutive of each other” (Gee 2000: 190). However, in a study exploring
the implications of the NLS approach for large-scale entrance-level assessment, it is argued
that it is the extent to which a candidate can produce a performance that is interpretable from
within a very specific context – that of higher education – that will count (Yeld 2001: 132).
The language proficiency work above is supplemented by understandings from the work of
the Student Learning Research framework (see, for example, Marton & Säljö, 1976a & b,
1984; Entwistle & Ramsden, 1983; Marton, Dall’Alba & Beaty, 1993). Studies based on the
Student Learning Research framework have consistently sought to address the challenge
presented by assessing students’ contextualised approaches to language and learning. It is
not enough that language and learning approaches influence meaning-making; it is how these
elements interact with each other in a specific and authentic higher education context that
constitutes academic literacy or the absence thereof. Writers within the Student Learning
Research framework point to successful higher education study being associated with
students who are able to:
a. separate the point of an argument from its supporting detail;
b. interact vigorously and critically with ideas in text and elsewhere;
c. produce well-reasoned arguments supported by appropriate evidence;
d. perceive the structure and coherence of text, as well as the organisation of ideas that contributes
to that structure;
e. understand that learning involves negotiating meaning, applying insights in different contexts,
developing a view of one’s own, and “seeing” the world differently as a consequence of these.
There would appear to be parallels in the views of the researchers on language, learning and
thinking mentioned in the foregoing, and the point of central interest to this research rationale
is that the “academically literate” student is one who has managed to negotiate at least the
abovementioned demands in a context of appropriate and adequate support for learning
development.
Operationalising academic literacy
An assessment of the academic literacy levels of a student on entry to higher education
should be developed around a central construct, which embodies the following principles in
its design:
- Language should be seen as the vehicle, not the target. This principle relates to the earlier discussion of academic literacy as a capacity to cope with higher education reading, writing and thinking demands in the medium of instruction;
- Assessment should be generic, i.e. any assessment should be based on the typical academic literacy requirements of any or all disciplines;
- Any assessment should be developed in such a way that test-takers have the opportunity to demonstrate competence in a “real” context that bears relation in its complexity to the context in which they will study;
- Assessment should downplay the role of exposure to prior content knowledge and be aimed at assessing test-takers’ ability to grapple with academic literacy processes (as delineated in Table 1 below);
- The construct should be based on inputs from inter-disciplinary panels of expertise, to ensure that it has high face validity and that what is assessed bears a direct relation to what is likely to be required (in a generic sense) of test-takers in any higher education context.
Staff of the Alternative Admissions Research Project, working with colleagues from several
higher education institutions (see, for example, Cliff, Yeld & Hanslo, under review), have
sought to operationalise the above notions of academic literacy in terms of a framework of
language knowledge specifications, i.e. what students would be required to demonstrate as a
benchmark of performance for the domain of academic literacy. The following table shows the
inter-relationship between the language proficiency required of an academically literate
student and the operationalising of this framework in terms of a set of specifications on a
benchmark test: it also illustrates how it is possible – even desirable – to attempt some form
of description of language and thinking competencies.
Table 1: Language Knowledge Specifications
(The organising categories are from Bachman & Palmer, 1996; each language knowledge specification used in PTEEP test construction is followed by a description of the skill area.)

ORGANISATIONAL KNOWLEDGE (grammatical: vocabulary, morphology, syntax; textual: cohesion, rhetorical organisation)

Vocabulary: “unknown” and “known” vocabulary
Students’ established vocabulary; students’ abilities to derive/work out word meanings from their context, plus “known” vocabulary; spelling as it affects meaning.

Syntax
Students’ abilities to recognise and manipulate the syntactical basis of the language.

Understanding relations between parts of text
Students’ capacities to “see” the structure and organisation of discourse and argument, by such means as: using devices of cohesion such as pronoun reference (particularly demonstratives referring to statements/propositions or “entities”); and paying attention – within and between paragraphs in text – to transitions in argument; superordinate and subordinate ideas; introductions and conclusions; and the logical development of ideas.

Skimming and scanning
Students’ abilities to use macro features of text (such as headings and illustrations) to get the gist of a passage, or to locate particular pieces of information.

Inference, extrapolation and application
Students’ capacities to draw conclusions and apply insights, either on the basis of what is stated in texts or of what is implied but not explicitly stated in these texts.

PRAGMATIC KNOWLEDGE (functional knowledge: ideational, manipulative, heuristic, imaginative; sociolinguistic knowledge: sensitivity to dialect and language variety, register, naturalness criteria)

Separating the essential from the non-essential
Students’ capacities to “see” main ideas and supporting detail; statements and examples; facts and opinions; propositions and their arguments; and being able to classify, categorise and “label”.

Detailed reading for meaning
Students’ abilities to “get at” meaning, at sentence level and at discourse level.

Understanding the communicative function of sentences, with or without explicit indicators
Students’ abilities to “see” how parts of sentences/discourse define other parts; or are examples of ideas; or are supports for arguments; or are attempts to persuade; or serve to define.

Understanding the importance of “own voice” (including “ownership” of ideas) and/or creativity of thought and expression
Students’ abilities to use “own voice” appropriately and effectively, and to acknowledge sources of ideas or information (stage specific).

Understanding visually encoded forms of information representation
Students’ abilities to understand and use graphs, tables, diagrams, pictures, maps and flow-charts.

Understanding basic numerical concepts expressed in text, and undertaking simple numerical manipulations
Students’ abilities to make numerical estimations and comparisons; calculate percentages and fractions; make chronological references and sequence events/processes; and perform basic computations.

Understanding metaphorical expression
Students’ abilities to understand and work with metaphor in language, including their capacity to perceive language connotation, word play, ambiguity and idiomatic expressions; and familiarity with cultural references and figures of speech.

Understanding text genre
Students’ abilities to perceive “audience” in text and purpose in writing, including an ability to understand text register (formality/informality) and tone (didactic/informative/persuasive, etc.).
It goes almost without saying that there is likely to be some degree of overlap between the
specification of numerical competence described in Table 1 and at least one of the foci of a
numeracy benchmark test described by Frith and Prince (see below). Suffice it to say here
that the specification of numeracy in a test of academic literacy relates to students being able
to demonstrate a basic understanding of information conveyed by numbers as they might
encounter this in texts that range across disciplines, i.e. from the sciences, to commerce, to
the humanities and the social sciences.
One further important challenge to the assessment of academic literacy benchmark performance
is to determine different levels of benchmark performance across particular domains of the
test. It is not sufficient for a benchmark test to determine the construct of academic literacy
and to operationalise this construct: the test should also be able to assess different bench-
mark levels within domains. For example, if a benchmark test is to assess students’ grasp of
information presented visually, then it should also have specified the level of proficiency
required for a student to access diploma study or degree study.
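Purely by way of illustration, the following sketch shows one way such within-domain levels could be operationalised; the bands, cut-off values and score scale are invented for this example and are not NBTP benchmarks.

```python
# Hypothetical sketch: mapping a subdomain proficiency score (proportion
# correct, 0.0 - 1.0) to the level of study it suggests a candidate can
# access. All cut-offs here are invented for illustration.
BANDS = [
    (0.70, "degree study"),
    (0.55, "diploma study"),
    (0.00, "foundation/extended support provision"),
]

def access_level(subdomain_score):
    """Return the first band whose cut-off the score meets or exceeds."""
    for cutoff, level in BANDS:
        if subdomain_score >= cutoff:
            return level

# e.g. a score of 0.60 on items testing visually presented information:
print(access_level(0.60))  # -> diploma study
```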
In an early attempt to explore learners’ conceptions of learning from a phenomenographic
perspective, Säljö’s (1979) study reported five qualitatively distinctive categorisations of
learning conception. Based on learners’ self-reports, these categorisations essentially
suggested two contrasting ways in which the process of learning could be conceived: as
a process of collecting and assimilating information or as a process of transforming that
information. The model seemed to depict these processes in hierarchical terms. Processes of
transforming knowledge, applying it in different contexts, interpretation, and making personal
meaning were portrayed as being hierarchically superior to (or more inclusive than) processes
of collecting information, remembering it, and being able to reproduce it mechanically for
assessment purposes. The former processes were viewed as superior to the latter in the
sense that they appeared to involve some form of reworking, personal understanding and
integration of knowledge, whereas the latter involved rote engagement without any attempt
at reinterpretation or personal sense-making. In the context of the present discussion of
benchmarking, then, the reproductive and transformative dimensions referred to here can be
used as a basis for benchmarking student responses. For candidates whose successful
performance falls overwhelmingly within the reproductive dimension rather than the trans-
formative, it can be concluded that they are highly likely to have been unable to transform
what they have learned and may arguably not yet be sufficiently academically literate. The
challenge in benchmarking performance, though, lies in being able to assess the extent to
which the transformative dimension is sufficiently in evidence to suggest that a student is
likely to cope with the demands of higher education study at a particular level.
The results of Säljö’s original study have been replicated and extended by other research
studies. These include findings of associations between these conceptions and learning
outcome (Van Rossum & Schenk, 1984) and associations between conceptions and learners’
perceptions of other important academic context variables, such as teaching and assessment.
A longitudinal study of learning conception by Marton, Dall’Alba & Beaty (1993) reported a
sixth qualitatively distinct dimension of learning conception: learning as changing a person.
One of the difficulties for assessment of models that conceive of learning processes as
hierarchically organised, however, is that they ignore the fact that it is possible to assess
processes in easy as well as difficult ways – so a “hierarchically superior” process might in
fact be assessed in such a way (for example, by using a very simple context) as to be easier
than the assessment of a supposedly inferior one. As Mullis et al (2003: 25) argue,
“In general, the cognitive complexity of tasks increases from one broad cognitive domain to the
next. [....]. Nevertheless, cognitive complexity should not be confused with item difficulty. For nearly
all the cognitive skills listed, it is possible to create relatively easy items as well as very challenging
items. In developing items aligned with the skills, it is expected that a range of item difficulties will
be obtained for each one, and that item difficulty should not affect the designation of the cognitive
skill”.
Concluding Comment
There are many assessment frameworks currently in use. Well-known examples are those
used by the Programme for International Student Assessment (PISA), the Trends in
International Mathematics and Science Study (TIMSS) Assessment Frameworks. A modified
version of these is proposed by the DoE (see the Subject Assessment Guidelines for
Mathematics Literacy, for example5). An interesting recent development is the project under-
taken from 1995-2000 in the United States, to revise the most famous of all taxonomies:
Bloom’s 1949 taxonomy of educational objectives (Bloom et al, 1949). The revised taxonomy
derives partly from the original structure of educational objectives, but incorporates advances
in cognitive psychology and takes into account the many other initiatives since 1949. It offers
a potentially useful set of six categories which make up the “cognitive process dimension”,
and separates this from the “knowledge dimension” which, in turn, is subdivided into four
categories (see Anderson, 2005 for a concise exposition of the revised taxonomy). Its interest
to the NBTP lies partly in the widespread recognition given to the original Bloom’s taxonomy,
which means that it is instantly recognisable by educators in all walks of life, and partly
because its use as an assessment framework is presently being investigated by various South
African initiatives.
At this stage no final decision has been made about which approach to use, as the test
development team is currently assessing the appropriateness and feasibility of several
approaches.
⁵ The document is available on the following website:
http://www.education.gov.za/mainDocument.asp?src=docu&xsrc=poli.
Quantitative Literacy
A national benchmark test in Quantitative Literacy needs to address the same central
question as that framed in the previous section on Academic Literacy, and should foreground
the same key considerations:
- What is meant by “core” quantitative literacy competencies in higher education?
- What is meant by “sufficient” quantitative literacy competencies for different levels of qualification, disciplines and curricula in higher education?
What is quantitative literacy at tertiary level?
For a working definition of the domain, Quantitative Literacy, and its assessment, we draw on,
amongst other sources, the work of Street (1995), Baynham and Baker (2002), Chapman and
Lee (1990), Chapman (1998), Steen (2001), Jablonka (2003), the Adult Literacy and Lifeskills
(ALL) Survey, the Programme for International Student Assessment (PISA) and the Trends in
International Mathematics and Science Study (TIMSS; Mullis, 2003) Assessment Frameworks, and the
experience of the Quantitative Literacy Test project of the Numeracy Centre at the University
of Cape Town (see Frith, Bowie, Gray & Prince, 2003; Frith, Jaftha & Prince, 2004a). We
present a definition of quantitative literacy (and reasoning) that can be used, amongst other
purposes, to operationalise the development of a Quantitative Literacy Benchmark Test.
There is an ongoing debate about the meaning of the term “Quantitative Literacy” (also
known as “numeracy” or “mathematical literacy” in different countries and/or contexts) and
its relationship to “literacy” and to “mathematics”. This debate is exemplified by the various
articles in Mathematics and Democracy: The Case for Quantitative
Literacy (Steen, 2001), in which the preface states that the book “does not seek to end
debate about the meaning of numeracy. On the contrary, it aspires instead to be a starting
point for a much needed wider conversation” (Orrill, 2001: xvii).
We contend that the concept of “practice” (Lave & Wenger, 1991) is a useful and generative
way of thinking about quantitative literacy (Prince & Archer, 2005; Archer, Frith & Prince,
2002). Quantitative literacy cannot be seen as a set of identifiable mathematical skills that
can be taught and learned without reference to the social contexts where they might be
applied. Baynham and Baker (2002: 2) stress that the term practice is used to incorporate
“both what people do and the ideas, attitudes, ideologies and values that inform what
they do.” They attribute the introduction of the term “practice” in this way for describing
quantitative literacy to Street (1984). Baker, Clay and Fox (1996: 3) refer to “the collection
of numeracy practices that people engage in – that is the contexts, power relations and
activities – when they are doing mathematics”. Jablonka (2003: 78) also argues that the
promotion of any definition of mathematical literacy will implicitly or explicitly promote a
particular social practice.
Chapman and Lee (1990: 277), in thinking about numeracy and learning at tertiary level,
attempt to “situate numeracy within a larger reconceptualised notion of literacy”. They argue
that it is not possible to draw an artificial separation between the notions of quantitative
literacy and literacy, but rather that quantitative literacy involves many competencies:
“reading, writing and mathematics are inextricably interrelated in the ways in which they are
used in communication and hence in learning.” Chapman (1998) develops a framework for
what she calls “academic numeracy” as a means for describing the numeracy demands of
academic texts and tasks.
We take the view that quantitative literacy can be described in terms of 1) the contexts that
require the activation of quantitative literacy practice; 2) the mathematical and statistical
content that is required when quantitative literacy is practiced; and 3) the underlying
reasoning and behaviours that are called upon to respond to a situation requiring the
activation of quantitative literacy practice.
Contexts
The various positions presented in Steen (2001) reinforce the idea that quantitative literacy
practice, as opposed to mathematics, is always embedded within a context. Yet the dominant
pedagogical practice to date – teaching mathematical literacy in the restricted context of the
formal mathematics classroom – is at odds with this idea. Usiskin (2001) warns against
the use of contrived “real-life” examples masquerading as “reality” in the mathematics class-
room. Teaching quantitative literacy requires the use of authentic contexts, which need to be
understood as clearly as the mathematics that is being applied. Hughes-Hallett (2001: 94)
summarises the difference between quantitative literacy and mathematics as follows:
...mathematics focuses on climbing the ladder of abstraction, while quantitative literacy clings to
context. Mathematics asks students to rise above context, while quantitative literacy asks students
to stay in context. Mathematics is about general principles that can be applied in a range of
contexts; quantitative literacy is about seeing every context through a quantitative lens.
In terms of developing a benchmark test, the challenge is to find contexts that are sufficiently
relevant that they motivate test-takers to truly display their potential for quantitatively
literate practice.
Content
Clearly mathematical and statistical content knowledge is essential for quantitatively literate
practice, although there will be debate about the exact nature of appropriate content
knowledge for different contexts and/or academic disciplines. The point is made by Steen
(2001) and Hughes-Hallett (2001) that statistics and data handling (rather than traditional
school mathematics topics) play a dominant role in quantitative literacy, and this is certainly
true of the quantitative literacy required for many academic disciplines. To be quantitatively
literate at tertiary level, a student will need a great deal more than just basic arithmetic and
mathematical skills.
Reasoning and behaviours
According to Jablonka (2003: 78), “Any attempt at defining ‘Mathematical Literacy’ faces the
problem that it cannot be conceptualised exclusively in terms of mathematical knowledge,
because it is about an individual’s capacity to use and apply this knowledge. Thus it has to
be conceived of in functional terms as applicable to the situations in which the knowledge is
to be used.” This emphasis on the use and application of knowledge implicitly assumes the
importance of the associated quantitative thinking and reasoning. In the literature, there is
no clear definition or characterisation of these “mathematical actions” which include such
activities as drawing connections, visualising, questioning, representing, concluding and
communicating (Boaler, 2001). It is clearly imperative that some characterisation of these
critical competencies form part of a test construct for quantitative literacy.
Being numerate requires the ability to express quantitative information coherently in verbal
and visual forms. Kemp (1995) argues that mathematical literacy includes the ability
to communicate clearly and fluently and to think critically and logically. In dealing with
quantitative or mathematical ideas in context, students should be able to interpret information
presented verbally, graphically, or in tabular or symbolic form, and be able to make
transformations between these different representations. The transformation of quantitative
ideas into verbal messages is the area where a student’s ability to write coherently about
quantitative ideas will be exercised. Mathematical literacy also requires the ability to choose
the appropriate form for the expression of a quantitative idea, and to produce a text that
expresses that idea. Thus the practice of mathematical literacy must include the ability to put
together a document for a particular purpose in a particular context.
Definition
We adopt the following definition of quantitative literacy, in which all three approaches to the
description (contexts, content and reasoning) are embedded:
Quantitative literacy is the ability to manage situations or solve problems in practice, and involves
responding to quantitative (mathematical and statistical) information that may be presented
verbally, graphically, in tabular or symbolic form; it requires the activation of a range of enabling
knowledge, behaviours and processes and it can be observed when it is expressed in the form of a
communication, in written, oral or visual mode.
This definition has been informed by the discussions and definitions implicit in the frameworks
used by the NAEP (National Assessment Governing Board, 2004), TIMSS (Mullis et al, 2003),
PISA (Programme for International Student Assessment, 2003) and ALL (2002) studies. The
relevant components of these frameworks are further discussed in our elaboration of the
definition in Appendix 2.
Table 2 below provides an expanded representation of the elements that make up this
definition, which is further elaborated in detail in Appendix 2 (pages 47-54). In particular, the
following elements are explicated:
- Real contexts
- Responding
- Quantitative information
- Representation of quantitative ideas
- Activation of enabling knowledge, behaviours and processes
- Expressions of quantitatively literate behaviour
Table 2: An expanded representation of the definition of
quantitative literacy
Quantitative literacy is the ability to:

manage a situation or solve a problem in a real context:
- Education (tertiary) – Health, Law, Social Science, Commerce, etc.
- Professions – Health, Law, Social Science, Commerce, etc.
- Personal Finance
- Personal Health
- Management
- Workplace
- Citizenship
- Culture

by responding:
- Comprehending: identifying or locating
- Acting upon
- Interpreting
- Communicating

to information (about mathematical and statistical ideas):
- Quantity and number
- Shape, dimension and space
- Relationships, pattern, permutation
- Change and rates
- Data representation and analysis
- Chance and uncertainty

that is represented in a range of ways:
- Numbers and symbols
- Words (text)
- Objects and pictures
- Diagrams and maps
- Charts
- Tables
- Graphs
- Formulae

and requires activation of a range of enabling knowledge, behaviours and processes:
- Quantitative (mathematical and statistical) knowledge
- Mathematical and statistical techniques and “skills”
- Quantitative reasoning
- Literacy skills [language, visual]
- Use of computational technology
- Beliefs and attitudes

and it can be observed when it is expressed in the form of a “text”:
- Written
- Oral
- Visual [includes concrete objects]
Operationalisation of the definition for test development and
analysis
Table 3 below summarises the operationalisation of the understanding of quantitative literacy
for the purpose of test development and analysis. Criteria (or competence areas) selected to
be assessed in a Quantitative Literacy benchmark test will depend on the choices made about
the format, complexity level and test length. The Quantitative Literacy benchmark test may
very well encompass specifications that overlap with other domains (Academic Literacy and
Cognitive Academic Mathematical Proficiency). Items within the test may also simultaneously
assess different competence areas (described in Table 3) within the Quantitative Literacy
domain.
The competence areas defined in this table will allow the test itself, student results and cohort
results to be analysed using clustering of items and criteria. For example, the test can be
analysed from the point of view of which competencies (such as reasoning) are required, or
which quantitative content areas are addressed (such as relationships or algebra).
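As a purely illustrative sketch of such an analysis (the item identifiers, competence tags and scores below are invented, not drawn from any actual NBTP instrument), per-competence subscores might be computed as follows:

```python
# Hypothetical sketch: aggregating item-level scores into competence-area
# subscores, so that a test (or a student or cohort result) can be analysed
# by cluster of items and criteria.

# Each item is tagged with the competence areas it assesses (an item may
# assess more than one area simultaneously, as noted above).
item_tags = {
    "item01": ["interpreting"],
    "item02": ["computing", "reasoning"],
    "item03": ["reasoning"],
    "item04": ["interpreting", "computing"],
}

# A student's item scores, expressed as proportion correct (0.0 - 1.0).
student_scores = {"item01": 1.0, "item02": 0.5, "item03": 0.0, "item04": 1.0}

def competence_subscores(tags, scores):
    """Average a student's scores over all items tagged with each area."""
    totals, counts = {}, {}
    for item, areas in tags.items():
        for area in areas:
            totals[area] = totals.get(area, 0.0) + scores[item]
            counts[area] = counts.get(area, 0) + 1
    return {area: totals[area] / counts[area] for area in totals}

print(competence_subscores(item_tags, student_scores))
# e.g. {'interpreting': 1.0, 'computing': 0.75, 'reasoning': 0.25}
```

The same aggregation, run over a whole cohort, would show which competence areas (or content areas) a test emphasises and where candidates are weakest.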
In the section on Academic Literacy, the concluding discussion about classifying items in
terms of cognitive complexity applies equally to the consideration of describing the complexity
of test items in the Quantitative Literacy domain.
Table 3: Competencies Specification for a Quantitative Literacy Test
(Each competence area is followed by its description/specifications.)

Comprehending: identifying or locating

Vocabulary: The ability to understand the meanings of commonly encountered “quantitative” terms and phrases (such as percentage increase, rate, approximately, representative sample, compound interest, average, order, rank, category, expression, equation), and the mathematical and statistical concepts (including basic descriptive statistics) that these terms and phrases refer to.

Representations of numbers and operations: The ability to understand the conventions for the representation of numbers (whole numbers, fractions, decimals, percentages, ratios, scientific notation), measurements, variables and simple operations (+, -, ×, ÷, positive exponentiation, square roots) on them.

Conventions for visual representations: The ability to understand the conventions for the representation of data in tables (several rows and columns and with data of different types combined), charts (pie, bar, compound bar, stacked bar, “broken” line, scatter plots), graphs and diagrams (such as tree diagrams, scale and perspective drawings, and other visual representations of spatial entities).

Acting upon, interpreting and communicating

Using representations of data: The ability to derive and use information from representations of contextualised data and to interpret the meaning of this information.

Computing: The ability to perform simple calculations as required by problems and to interpret the results of the calculations in the original context.

Conjecturing: The ability to formulate appropriate questions and conjectures in order to make sense of quantitative information, and to recognise the tentativeness of conjectures based on insufficient evidence.

Interpreting: The ability to interpret quantitative information (in terms of the context in which it is embedded) and to translate between different representations of the same data. This interpretation includes synthesising information from more than one source and identifying relationships (patterns) in data.

Reasoning: The ability to identify whether a claim is supported by the available evidence, to formulate conclusions that can be made given specific evidence, or to identify the evidence necessary to support a claim.

Mathematical and statistical ideas

Quantity, number and operations: The ability to order quantities, and to calculate and estimate the answers to computations required by a context, using numbers (whole numbers, fractions, decimals, percentages, ratios, scientific notation) and simple operations (+, -, ×, ÷, positive exponentiation) on them. The ability to express the same decimal number in alternative ways (such as by converting a fraction to a percentage, a common fraction to a decimal fraction, and so on). The ability to interpret the words and phrases used to describe ratios (relative differences) between quantities within a context, to convert such phrases to numerical representations, to perform calculations with them and to interpret the result in the original context, and to work similarly with ratios between quantities represented in tables and charts, and in scale diagrams.

Representing quantitative information: The ability to represent quantitative information verbally, graphically, diagrammatically and in tabular form.

Describing quantitative relationships: The ability to describe patterns, comparisons between quantities, trends and relationships, and to explain reasoning (linking evidence and claims).

Shape, dimension and space: The ability to understand the conventions for the measurement and description (representation) of 2- and 3-dimensional objects, angles and direction. The ability to perform simple calculations involving areas, perimeters and volumes of simple shapes such as rectangles and cuboids.

Relationships, pattern, permutation: The ability to recognise, interpret and represent relationships and patterns in a variety of ways (graphs, tables, words and symbols). The ability to manipulate simple algebraic expressions using simple arithmetic operations.
Change and rates: The ability to distinguish between changes (or differences in magnitudes) expressed in absolute terms and those expressed in relative terms (for example as percentage change). The ability to quantify and reason about changes or differences. The ability to calculate average rates of change, and to recognise that the steepness of a graph represents the rate of change of the dependent variable with respect to the independent variable. The ability to interpret curvature of graphs in terms of changes in rate.

Data representation and analysis: The ability to derive and use information from representations of contextualised data in tables (several rows and columns and with data of different types combined), charts (pie, bar, compound bar, stacked bar, “broken” line, scatter plots), graphs and diagrams (such as tree diagrams), and to interpret the meaning of this information. The ability to represent data in simple tables and charts, such as bar or line charts.

Chance and uncertainty: The ability to appreciate that many phenomena are uncertain and to quantify the chance of uncertain events using empirically derived data. This includes understanding the idea of taking a random sample. The ability to represent a probability as a number between 0 and 1, with 0 representing impossibility and 1 representing certainty.

Concluding Comment

We maintain that quantitative literacy is a vital competence, required by students both for entry
into and for success in higher education. We have presented a definition of quantitative literacy
which we have used to construct a specification for a quantitative literacy test. For this
purpose we have drawn upon several assessment frameworks currently in use, such as those
of the Programme for International Student Assessment (PISA), the Trends in International
Mathematics and Science Study (TIMSS) and the Adult Literacy and Lifeskills (ALL) Survey, as
well as the framework in the national Department of Education’s Subject Assessment
Guidelines for Mathematical Literacy. As mentioned in the section on Academic Literacy, a
final decision has yet to be made about the exact form of the assessment framework that will
be used for this project.
Domain 2–
Cognitive Academic Mathematical Proficiency (CAMP)
Carol Bohlmann and Max Braun
Universities of South Africa and Pretoria
Overview
Whereas the Academic Literacy and Quantitative Literacy tests are intended as tests of generic
skills in these domains, the Cognitive Academic Mathematical Proficiency (CAMP) test is
explicitly designed to measure how well the school exit qualification – the new National Senior
Certificate (NSC) – will assess the mathematical preparedness of candidates for higher education
by comparing student performance at tertiary level against their performance in the NSC and
in the benchmark test.
The term “CAMP” is designed to link with the Basic Interpersonal Communicative
Skills/Cognitive Academic Language Proficiency (BICS/CALP) distinction made in relation to
Academic Literacy (see pages 20-22). The BICS/CALP distinction plays an important role in
all text comprehension. Cummins (1981) postulated the existence of a minimal level of linguistic
competence that students must attain in order to perform cognitively demanding tasks. Dawe
(1983) further postulated the need for a threshold level of proficiency in what he called CAMP:
Cognitive Academic Mathematics Proficiency. Dawe contended that the underlying proficiency
needed to complete mathematical tasks involves cognitive knowledge (mathematical
concepts and their application) embedded in a language specifically structured to express that
knowledge. Research has shown that although second language students may acquire BICS
fairly quickly, the acquisition of academic language skills takes an average of five years (see
for example Garaway, 1994).
In the Learning Programme Guidelines for the National Curriculum Statement (NCS) for
Mathematics for Grades 10 to 12, it is stated that:
The curriculum for Mathematics is based on the following view of the nature of the discipline.
Mathematics enables creative and logical reasoning about problems in the physical and social world
and in the context of Mathematics itself. It is a distinctly human activity practised by all cultures.
Knowledge in the mathematical sciences is constructed through the establishment of descriptive,
numerical and symbolic relationships. Mathematics is based on observing patterns, which, with
rigorous logical thinking, leads to theories of abstract relations. Mathematical problem solving
enables us to understand the world and make use of that understanding in our daily lives. (DoE,
2005a: 7)
Furthermore, the document states that
Mathematics should enable learners to establish an authentic connection between Mathematics as a
discipline and the application of Mathematics in real-world contexts. Mathematical modelling
provides learners with a powerful and versatile means of mathematically analysing and describing
their world. ... Mathematical modelling allows learners to deepen their understanding of mathematics
while expanding their repertoire of mathematical tools for solving real-world problems. (Ibid.,
pp.11-12)
The Learning Programme Guidelines for Mathematics emphasise the ability of Mathematics to
provide the conceptual tools required to analyse situations and arguments, and to make and
justify critical decisions, implying that the presence of these attributes carries over into everyday
life and has benefits beyond the subject-related ones: “Mathematics is also important for the
personal development of any learner. Mathematics is used as a tool for solving problems related
to modern society and for accelerating development in societies and economies” (DoE,
2005a: 7).
The ability of students to analyse and describe their world mathematically presupposes an
ability to comprehend text and formulate principles verbally (academic literacy). Formulating
the problem, together with an ability to use mathematical tools to analyse the situation in
a meaningful way, constitutes mathematical modelling. After some form of mathematical
modelling, however contrived it may be at this level, quantitative reasoning comes into play
in the interpretation of answers. Meta-cognitive skills thus also need to be assessed in a
mathematical context, if sense making is regarded as an important academic skill.
“Making sense” – whether choosing the right “tools” or coming to the appropriate conclusions
using logic in an appropriate context – is critically important for all scientific and health
disciplines, as well as being a cornerstone in South Africa’s technological vision.
Acceptable assessment practice suggests that assessment tasks should provide balanced
problems which are meaningful, informative, set in recognisable contexts, and involve higher
order thinking (van den Heuvel-Panhuizen, 1996). One possible categorisation of educational
goals into lower, middle and higher levels is provided by de Lange (1994) in Verhage & de
Lange (1997). This categorisation is illustrated in Figure 1. In terms of this categorisation,
straightforward real-life assessment tasks may be at the lower level, while meaningless but
difficult tasks are also lower level activities if they require no insight and simply the application
of routine skills. Middle level tasks require that students relate two or more concepts or
procedures, and although they may be easier to solve, they are richer and more meaningful
than more involved but meaningless routine tasks.
The third level includes other aspects such as mathematical thinking, communication, critical
attitude, creativity, interpretation, reflection, generalisation and “mathematising” (Verhage &
de Lange, 1997: 16). Freudenthal coined the term “mathematisation” in the 1960s to “signify
the process of generating mathematical problems, concepts, and ideas from a real-world
situation and using mathematics to attempt a solution to the problems so derived” (Perry &
Dockett, 2002: 89). Although Freudenthal’s intention was that this process should begin
in early childhood, opportunities for mathematisation should not be excluded later simply
because they were lacking during the developmental years. The important point
is that learners should have discovered, and learned to use, mathematical tools with which
they can organise and solve real-life problems.
Figure 1: Levels of mathematical educational goals
[The figure depicts a pyramid of educational goals, ranging from lower-level (reproduction) tasks through a middle level to higher-level (reasoning) tasks, across the content strands algebra, number, geometry, information processing and statistics, with item difficulty ranging from easy to difficult.]
It is important that assessment should enable learners to demonstrate what they know rather
than what they do not know; furthermore, assessment should integrate lower, middle and
higher level goals of mathematical education (see Figure 1 above).
It is thus clear that the CAMP test needs to assess the development of some level of rigorous
logical thinking, as well as the development of the skills and tools required to use mathe-
matical concepts to solve real world problems. The mathematical tools used in the exercise
of logical thinking and the knowledge and skills required to solve problems would be expected
to have been provided by the secondary school curriculum, and problem solving would involve
concrete everyday situations in which such ideas may be relevant.
The minimum requirements indicated in the Subject Assessment Guidelines of the National
Curriculum Statement for Mathematics for Grades 10–12 provide information regarding the
level at which content-based aspects could be measured. Test items will also be influenced by
the taxonomy of categories of mathematical demand specified in the Subject Assessment
Guidelines, which indicates that learners need to:
- perform on the level of knowing (where recall or basic factual knowledge is tested);
- perform routine as well as complex procedures; and
- engage in problem solving (see the Subject Assessment Guidelines, DoE, 2005b: 26–28).
The Learning Programme Guidelines for Mathematics point out that “the process skills
developed in Mathematics are those that enable learners to become mathematicians as
opposed to stunting their growth through an emphasis on rote approaches to the subject”;
and furthermore, that “A learner who achieves the Assessment Standards for Mathematics
will be well prepared for the mathematics required by Higher Education Institutions” (DoE,
2005b: 8). This view is welcomed by higher education, where rote learning has for so long
undermined learners’ progress. It is thus important to measure the extent to which this has
been achieved at entry levels to higher education study. The CAMP test thus needs to assess
the extent to which the new schools curriculum has prepared students with both low- and
high-level skills that equip them to engage with abstract mathematical concepts and their
real-life applications. As in the other tests, the CAMP test needs to take into account student
diversity and accommodate differences in the ways that students think and demonstrate their
knowledge and skill. The CAMP test needs to reflect the vision of the Learning Programme
Guidelines, which highlight the interrelatedness of content, processes and contexts (see
Figure 2.3 in the Learning Programme Guidelines: Mathematics, DoE, 2005b: 13).
The Specific Role of the CAMP Test
It is important to clarify the specific role of the CAMP test in relation to the Academic Literacy
and Quantitative Literacy tests. The Academic and Quantitative Literacy tests determine the
degree to which essential skills for a school-leaver and functional participant in higher education
have been obtained from prior learning opportunities in verbal and numerical contexts. The
Mathematics test assesses the degree to which a learner has achieved the ability to manipulate,
raise questions, synthesise a number of different mathematical concepts and draw strictly
logical conclusions in abstract symbolic and complex contexts. These higher skills underlie
success in Mathematics in higher education. These skills, developed deliberately in mathe-
matical subjects such as Mathematics and Physical Science, are often implicitly expected by
higher education institutions and are included in the design of courses or modules satisfying
outcomes-based education norms. Where candidates for higher education programmes have
not been exposed to the specific mathematics concepts that could reasonably be expected to
be included in the Mathematics test (for reasons such as inadequate or disrupted schooling),
some of the generic skills must be assessed in the more concrete contexts of the Academic
Literacy and Quantitative Literacy tests. Some overlap between the three test components
may therefore be expected, but it is also evident that the contexts of the tests should be
appropriately different.
Source of skills and competencies that the CAMP
test must address
In the design of CAMP test items, it will be important to:
- interrogate the outcomes specified in the Mathematics subject statement, in order to make certain that these are covered in the CAMP test; and
- consider the mathematical concepts required in higher education programme contexts requiring mathematics.
The Nature of the Test
The focus is on achievement/proficiency, but it is necessary to construct richly contextualised
tests that are sensitive to the diversity of candidates in terms of their levels of preparedness.
Equity in assessment is related to equity of resources (Mathematical Sciences Educational
Board (MSEB), 1993). The test items would thus need to make provision for diversity in terms
of ethnicity, language, urban/rural background and socio-economic status (Tate, 1997). Some dynamic
components are necessary if learning ability (rather than only past achievement) is to be
assessed. However, dynamic test items would probably only be included once more standard
items have been adequately piloted.
It is important that assessment should reflect mathematical goals, so that skills are assessed
at different levels, that is, low, middle and high (Verhage & de Lange, 1997). In the design
of the test, scoring may need to make provision for a number of sub-minima, or a weighting
of questions, so that high scores on low-level skills do not contribute in the same way to the
overall profile of students as high scores on high-level skills (see the sketch below). Testing for understanding
should carry more weight than assessing the extent to which concepts could have been
memorised (see also Lawson, 1995; Mason, 2002; Kahn & Kyle, 2002).
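A minimal sketch of what such weighting with sub-minima could look like follows; the weights, thresholds and scores are invented for illustration and do not represent NBTP decisions.

```python
# Hypothetical sketch: weighted scoring with per-level sub-minima, so that
# strong performance on low-level items cannot, on its own, produce a high
# overall profile.

# Invented weights per skill level (summing to 1) and invented sub-minima
# (minimum proportion correct required at each level).
WEIGHTS = {"low": 0.2, "middle": 0.4, "high": 0.4}
SUBMINIMA = {"low": 0.5, "middle": 0.4, "high": 0.3}

def profile(level_scores):
    """Return (weighted total, levels failing their sub-minimum).

    level_scores maps each level to the proportion correct (0.0 - 1.0)
    achieved on items at that level.
    """
    total = sum(WEIGHTS[lvl] * score for lvl, score in level_scores.items())
    shortfalls = [lvl for lvl, score in level_scores.items()
                  if score < SUBMINIMA[lvl]]
    return round(total, 4), shortfalls

# A candidate with perfect low-level scores but weak higher-level scores
# still shows a modest weighted total, with the high-level shortfall flagged.
print(profile({"low": 1.0, "middle": 0.5, "high": 0.2}))
# (0.2*1.0 + 0.4*0.5 + 0.4*0.2, ['high']) -> (0.48, ['high'])
```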
To provide learners with a smooth interface between Mathematics at secondary and tertiary
level, the competencies that are required, but not necessarily made explicit, by higher
education need to be assessed. The choice of competencies is also influenced by the four
Learning Outcomes (LO1, 2, 3 and 4) that appear in the Learning Programme Guidelines⁶ of
the National Curriculum Statement for Mathematics for Grades 10-12:
- Number and number relationships;
- Functions and algebra;
- Shape, space and measurement; and
- Data handling and probability.
These are the outcomes that will be assessed in Grade 12 in 2008, 2009 and 2010. Table 4
outlines the competencies that will be assessed in the CAMP test.
⁶ The document is available on the following website:
http://www.education.gov.za/mainDocument.asp?src=docu&xsrc=poli.
Table 4: Definition of Competencies that should be assessed in the CAMP Test

Competence: Problem solving and modelling within mathematical contexts. Specific aspects assessed:
- operations with fractions and decimals;
- operations involving abstract relationships such as ratios, percentages and powers; interpretation of scientific notation; orders of magnitude; number sense; quantitative comparisons;
- spatial perception, including angles, symmetries, measurement, and the representation and interpretation of 2D and 3D shapes;
- functions represented by tables, graphs and symbols; the distinction between dependent and independent variables; the relationship between graphs and algebraic equations and inequalities; functions and their inverses (e.g. the relationship between ln x and e^x); recognising and applying functional relations, such as direct and inverse proportion, in a variety of ways; understanding of above/below (e.g. approaching from above, from below); translation between different methods of representation of functions;
- operations with surds;
- circle geometry;
- basic trigonometry, including graphs of trigonometric functions;
- solving for unknown quantities in single and simultaneous linear and quadratic equations, and in simple polynomial expressions and inequalities;
- using common statistical measures (mean, median, mode, range);
- pattern recognition (as in sequences and series);
- conversion from language to symbolic form, e.g. the common (mis)interpretation of “I earn 25% less than you” as “you earn 25% more than me”, and confusion between “x times more than y” and “x more than y” (see the worked example following Table 4).
There is clearly an overlap with the other domains. The overlap emphasises the importance
of the achievement of a threshold in Academic and Quantitative Literacy skills in order for
learners to cope with the conceptual demands of the mathematical domain. In the CAMP test
competencies will be assessed within a specific mathematical context, so that the item
content will be essentially different to that of items in the AL/QL domain. Learners who have
inadequate grounding in the competencies described above are unlikely to cope with mathe-
matics at tertiary level. The fact that mathematics requires learners to integrate many
different skills and concepts in a given problem means that individual test items will assess
across the range of mathematical competencies. For example, an item dealing with the graphical
representation of a function will also assess spatial and algebraic competence. Test items will
focus specifically on the following clusters: algebra, trigonometry, geometry and spatial
awareness; and logarithms, exponents and surds. Mathematical concepts will be assessed
within a context of deep understanding, with an emphasis on the ability to move between
different forms of mathematical representation.
TEST DOMAINS AND CONSTRUCTS
40___
chapter three
Competence Specific aspects assessed
Manipulation of formal conditional, biconditional
and contrastive statements; interpretation of
inferences (logic of theorems, converses,
definitions)
identify appropriate evidence to support
a claim or an argument
critique assumptions and thinking which
underlie logical argument
evaluate the validity of evidence used to
support claims
tentative or conclusive reasoning
see logical relationships between
statements;
construction of all possible combinations
or conditions
understand the concept of “chance” and
probability and draw inferences within
them
Algebraic manipulation
ability to perform basic manipulation of
algebraic expressions
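The final aspect in Table 4, conversion from language to symbolic form, can be made concrete with a short calculation (the worked example below is illustrative only and does not form part of the test specification):

    % "I earn 25% less than you": my income m in terms of your income y
    m = (1 - 0.25)\,y = 0.75\,y
    % solving for y shows what "you earn ...% more than me" should actually be
    y = \frac{m}{0.75} \approx 1.33\,m

That is, if I earn 25% less than you, then you earn roughly 33% more than me, not 25% more, which is why the two statements are listed in the table as a common confusion.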
Concluding Comment
There are many relevant assessment frameworks for mathematics, such as those used by the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) Assessment Frameworks. The Subject Assessment Guidelines of the National Curriculum Statement for Mathematics for Grades 10-12⁷ propose a taxonomical differentiation of questions in which knowledge, performance of routine procedures, performance of complex procedures and problem solving carry weights of approximately 25%, 30%, 30% and 15% respectively. At this stage no final decision has been made regarding the balance of items in the CAMP test, as the test development team is currently assessing the appropriateness of initial test items and the effectiveness of feasible approaches to assessment.

⁷ The document is available on the following website:
http://www.education.gov.za/mainDocument.asp?src=docu&xsrc=poli.
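By way of illustration only, such a taxonomical weighting translates into item counts once a test length is fixed. The sketch below assumes a 40-item test; the length is an invented figure for the illustration, not an NBTP decision:

    # Hypothetical sketch: converting the Subject Assessment Guidelines'
    # taxonomy weights into item counts for an assumed 40-item test.
    weights = {
        "knowledge": 0.25,
        "routine procedures": 0.30,
        "complex procedures": 0.30,
        "problem solving": 0.15,
    }
    TOTAL_ITEMS = 40  # assumed test length, not an NBTP decision

    for category, weight in weights.items():
        print(f"{category}: {round(weight * TOTAL_ITEMS)} items")

For a 40-item test this would yield 10, 12, 12 and 6 items respectively; the actual balance awaits the development team's decision.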
Appendix 1
Test Item Development Team Members, 2005
Reference Group: National Benchmark Tests Project Consultative Meeting, 25 November 2004

Test Item Development Team Members, 2005
Academic Literacy
Dr Alan Cliff (team leader) University of Cape Town
Prof Albert Weideman University of Pretoria
Tobie Van Dyk University of Stellenbosch
Dr Leonora Jackson University of KwaZulu-Natal
Desiree Scholtz Cape Peninsula University of Technology
Thabisile Biyela UNIZUL
Dr Esther Ramani University of Limpopo
Quantitative Literacy
Robert Prince (team leader) University of Cape Town
Vera Frith University of Cape Town
Dr Kabelo Chuene University of Limpopo
Dr Mellony Graven University of the Witwatersrand
Mathematics
Dr Kwena Masha (team leader) University of Limpopo
Prof Richard Fray University of the Western Cape
Prof Babington Makamba UFH
Dr Carol Bohlman UNISA
Prof Max Braun University of Pretoria
Chaim Agasi AARP
Reference Group
National Benchmark Tests Project Consultative Meeting
25 November 2004
Universities
Name Institution
Helen Alfers Rhodes University
Dr Ludolph Botha University of Stellenbosch
Prof Max Braun University of Pretoria
Dr CJ Chikunji UNITRA
Zoleka Dotwana UNITRA
Prof Cornelius Fourie Rand Afrikaans University
Dr Andrew Fransman University of Stellenbosch
Prof Roland Fray University of the Western Cape
Prof Wilfred Greyling University of the Free State
Dr Leonora Jackson University of KwaZulu-Natal
Prof Elize Koch University of Port Elizabeth
Nick Kotze University of North West
Prof Babington Makamba University of Fort Hare
Dr David Mogari University of Venda
Nombulelo Phewa UNISA
Lillie Pretorius UNISA
Netta Schutte University of North West
Prof Peggy Siyakwazi University of Venda
Prof Maritz Snyders University of Port Elizabeth
Dr Francois Strydom University of the Witwatersrand
Prof Albert Weideman University of Pretoria
Universities of Technology
Name Institution
Sandy Blunt Port Elizabeth Technikon
Cariana Fouché Vaal University of Technology
Macelle Harran Port Elizabeth Technikon
Elza Hattingh Tshwane University of Technology
Dr Solomon Moeketsi Central University of Technology
Koo Parker Durban Institute of Technology
Shubnam Rambharos Durban Institute of Technology
Prof Taivan Schultz Central University of Technology
Elmarie van der Walt Cape Technikon
Elmarie van Heerden Tshwane University of Technology
Core Team
Name Institution
Carol Bohlman UNISA
Dr Alan Cliff University of Cape Town
Prof Cheryl Foxcroft University of Port Elizabeth
Kwena Masha University of the North
Robert Prince University of Cape Town
Prof Tahir Wood University of the Western Cape
Prof Nan Yeld University of Cape Town
SAUVCA & CTP Directorates
Hanlie Griesel
Ronnie Kundasami
Cobus Lötter
Kogie Pretorius
Gladness Seabi
Appendix 2
Elaboration on the Elements of the Definition of Quantitative Literacy
Vera Frith and Robert Prince
Centre for Higher Education Development, University of Cape Town
Definition
Quantitative literacy is the ability to manage situations or solve problems in practice, and involves
responding to quantitative (mathematical and statistical) information that may be presented
verbally, graphically, in tabular or symbolic form; it requires the activation of a range of enabling
knowledge, behaviours and processes and it can be observed when it is expressed in the form of a
communication, in written, oral or visual mode.
Elaborations on the elements of the definition of quantitative literacy
To make the definition of quantitative literacy more explicit, we elaborate on what is meant by
the following concepts in the definition: “real contexts”, “responding”, “quantitative information”,
“representation of quantitative ideas”, “activation of enabling knowledge, behaviours and
processes” and “expressions of quantitatively literate behaviour”.
Working in a real context
An important component of quantitative literacy, often mentioned in the literature, is the
ability to operate in a context. The following elaboration of the different types of contexts for
exercising quantitative literacy is largely based on the discussion in Steen (2001: 12-14).
1) Education in a tertiary context
Significant mathematical competence has always been required in order to study most scientific,
commercial and engineering disciplines. However, it is becoming increasingly necessary for
students in these disciplines, as well as those in other, traditionally less mathematically-
demanding disciplines such as social studies or law, to develop high levels of quantitative
literacy. Some examples follow:
- Social sciences such as sociology or psychology “rely increasingly on data either from surveys and censuses or from historical or archaeological records; thus statistics is as important for a social science student as calculus is for an engineering student” (Steen, 2001: 12).
- In a discipline like history, a student may need to understand and perform summary analysis of descriptive and numerical data and records.
- Students studying medicine need high levels of quantitative literacy to understand, for example, experimental studies, surveys, assessment of risk, epidemiology and other aspects of public health, and in the explicit practice of diagnostic reasoning.
- Law students similarly need, for example, to understand arguments based on evidence using DNA testing, to understand financial management, and to understand evidence gathered using social science methods.
- Biology students would need (as for medicine) to be able to think probabilistically and to use and understand statistics, as well as more traditional mathematics.
- Students in some areas of the visual arts and media studies will need a level of quantitative literacy to realise the potential of technology in their discipline (for example in computer graphics and animation).
2) Professions
Almost all professions require the ability to deal with quantitative evidence to make decisions,
as quantitative data becomes increasingly important and ubiquitous in modern society. For
example:
- Lawyers exercise various kinds of subtle reasoning and arguments about probability to argue their cases and make statements about “reasonable doubt”.
- Doctors need to understand statistical arguments in order to explain risks clearly enough to patients to ensure “informed consent”. As part of diagnosis they need to be able to formulate tentative inferences about plausible causes from information about the presence or absence of effects.
- Journalists need a thorough understanding of quantitative information and arguments, to develop an informed, sceptical and responsible understanding of events in the news, and to be able to modify this view as necessary when new information unfolds.
3) Personal Finance
People in society, particularly if self-employed, need to understand complicated quantitative
issues such as depreciation, inflation, interest rates over different periods of time, the effect of
variations in loan repayments, risk on investment, gambling, the tax implications of different
financial decisions, bank charges and so forth. “In everyday life a person is continually faced
with mathematical demands which the adolescent and adult should be in a position to handle
with confidence. These demands frequently relate to financial issues such as hire-purchase,
mortgage bonds and investment.” (DoE, 2003: 9)
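A minimal sketch, using invented figures, of the kind of repayment comparison implied here: how sensitive the lifetime of a loan is to the size of the monthly instalment.

    # Invented figures: the effect of the monthly instalment on how long
    # it takes to clear a loan, with interest compounded monthly.
    def months_to_repay(principal, annual_rate, payment):
        """Count months until the balance is cleared. The payment must
        exceed the first month's interest, or the loan never ends."""
        balance, months = principal, 0
        monthly_rate = annual_rate / 12
        while balance > 0:
            balance = balance * (1 + monthly_rate) - payment
            months += 1
        return months

    # A R100 000 loan at 12% per annum, repaid at R1 200 vs R1 300 a month:
    for payment in (1200, 1300):
        print(f"R{payment} per month: {months_to_repay(100_000, 0.12, payment)} months")

An instalment that is roughly 8% larger shortens the loan by more than two years, exactly the kind of effect a quantitatively literate borrower should be able to anticipate.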
4) Personal Health
With the increasing role of quantitative data in the field of medicine, quantitative literacy has
become increasingly necessary for people to manage their personal health. Patients need to
understand statistics and probability to understand the tentativeness of diagnoses, the
fallibility of medical testing, the choice of different treatment options and the risks associated
with them. Sufficient knowledge of finances is needed to balance the costs and benefits of
new treatments, to understand medical aid schemes and to manage payments for medical
services and insurance. People using medication need to understand dosages which are
calculated in proportion to body weight, or which must be administered according to a
precise timing regime.
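For instance (an invented prescription, purely for illustration), a weight-proportional dosage calculation:

    # Invented example: a medicine prescribed at 15 mg per kg of body
    # weight per day, administered in three equal doses.
    DOSE_PER_KG_MG = 15   # assumed prescription
    WEIGHT_KG = 24        # assumed patient weight
    DOSES_PER_DAY = 3

    daily_mg = DOSE_PER_KG_MG * WEIGHT_KG        # 360 mg per day
    per_dose_mg = daily_mg / DOSES_PER_DAY       # 120 mg per dose
    print(f"{daily_mg} mg per day, {per_dose_mg:.0f} mg every 8 hours")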
5) Management
Different people carry out management tasks in a variety of different settings, such as home,
travel, school, business, societies, committees and other enterprises. Some examples of
management activities requiring quantitative literacy are:
- developing a business plan.
- gathering and analysing data (e.g. for tracking expenditures).
- looking for the presence or absence of trends in data to make predictions (and understanding the limits of extrapolation).
- drawing up or reviewing budgets and balance sheets.
- calculating time differences and currency exchange.
6) Workplace
“The workplace requires the use of fundamental numerical and spatial skills in order
efficiently to meet the demands of the job. To benefit from specialised training for the
workplace, a flexible understanding of mathematical principles is often necessary. This
numeracy must enable the person, for example, to deal with work-related formulae, read
statistical charts, deal with schedules and understand instructions involving numerical
components” (DoE, 2003: 9). Other representative tasks are: completing purchase orders,
totaling receipts, calculating change, using spreadsheets to model scenarios, organising
and packing different shaped goods, completing and interpreting control charts, making
measurements and reading blueprints. (ALL, 2002:17)
7) Citizenship
“To be a participating citizen in a developing democracy, it is essential that the adolescent and
adult have acquired a critical stance with regard to mathematical arguments presented in the
media and other platforms. The concerned citizen needs to be aware that statistics can often
be used to support opposing arguments, for example, for or against the use of an ecologically
sensitive stretch of land for mining purposes. In the information age, the power of numbers
and mathematical ways of thinking often shape policy. Unless citizens appreciate this, they
will not be in a position to use their vote appropriately.” (DoE, 2003: 10)
Some examples of relevant situations involving quantitative literacy are:
- understanding how voting procedures can affect the results of an election.
- understanding the concept of risk and its measurement.
- understanding that apparently unusual events can occur by chance.
- analysing data to support or oppose policy proposals.
- understanding the difference between rates and changes in rates (for example, the meaning of a reduction in the rate of inflation; see the sketch after this list).
- understanding weighted averages, percentages and statistical measures such as percentiles (for example, in data about the performance of educational institutions).
- appreciating common sources of bias in surveys, such as poor design and non-representative sampling.
- understanding how assumptions influence the behaviour of mathematical models and how to use models to make decisions.
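The inflation example above turns on the distinction between a rate and a change in that rate. The small sketch below (our illustration, with invented figures) shows that a falling inflation rate still means rising prices:

    # Invented figures: inflation "coming down" from 8% to 4% does not
    # mean prices fall; the price level keeps rising, only more slowly.
    price_index = 100.0
    for inflation in (0.08, 0.07, 0.05, 0.04):
        price_index *= 1 + inflation
        print(f"inflation {inflation:.0%} -> price index {price_index:.1f}")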
8) Appreciation for mathematics as part of human culture
As part of their education, people should learn to appreciate the roles mathematics plays in
our society and economy. They should recognise the power (and dangers) of numbers in decision-
making and influencing people’s opinions. Ideally they should understand the historical
significance of fundamental mathematical concepts (such as zero and place value), and how
the history of mathematics relates to the development of cultures. “As educated men and
women are expected to know something of history, literature, and art, so should they know
– at least in general terms – something of the history, nature, and role of mathematics in
human culture” (Steen, 2001: 11).
Responding
There are four categories of competency involved in responding to a situation or problem in
a quantitatively literate manner (displaying quantitatively literate behaviour):
- identifying or locating: the nature of the situation/problem is comprehended and the appropriate actions are explicitly recalled and conceptualised.
- acting: the appropriate actions are carried out.
- interpreting: the meaning of the outcome of the activities is interpreted in the context.
- communicating: the consequences or meaning are communicated appropriately for the context.
These four types of activity are not carried out in a fixed sequence. A person uses these
competencies in an integrated and iterative way while engaging with a quantitative situation
or problem.
1) Comprehending the situation or problem
In order to gain access to the quantitative information embedded in a situation or problem,
a person may need to:
- know the meanings of terms and phrases used to express quantitative ideas (appropriate text literacy).
- know the necessary mathematical and statistical concepts.
- know the conventions for representation of numbers, operations and variables.
- know the conventions for representation of data in diagrams, charts, tables and graphs.
- know the conventions for the representation of 2-dimensional and 3-dimensional space.
- identify the relevant quantitative actions in which to engage.
2) Acting on the situation or problem
In engaging with a quantitative event a person may need to:
- perform calculations using commonly encountered operations.
- estimate the magnitude of answers to simple calculations.
- visualise and/or model situations using simple formulae and/or diagrams.
- arrange “quantitative objects” in order.
- identify where to find and extract information from representations of data.
- locate the necessary information from more than one unspecified source, and use it in combination.
3) Interpretation: Making sense of the situation or problem
Competencies required for interpreting quantitative information include the ability to:
- ask appropriate questions and formulate conjectures.
- understand the quantitative information in terms of the context in which it is embedded.
- identify absences of possibly relevant information.
- reason logically (linking evidence and claims), i.e.:
  - identify whether a claim is supported by the available evidence, or whether further corroborative evidence is needed;
  - formulate conclusions that can be made given the evidence;
  - identify the evidence necessary to support a claim.
- translate between different types of representation of the same data.
- recognise the presence of patterns and permutations in data.
- identify relationships revealed by data.
4) Communicating about the situation or problem
In order to communicate about quantitative information a person often needs to:
- represent quantitative information verbally, visually (in charts, graphs or diagrams), in tabular form or using symbols (including in formulae).
- identify ambiguities or omissions in representations.
- describe comparisons between data values, trends and relationships.
- explain reasoning (linking evidence and claims).
Quantitative information (fundamental mathematical and
statistical ideas)
The following classification and description of the fundamental ideas which underpin the
definition of quantitative literacy are largely based on the ALL (2002: 18-20) and the PISA
(2003: 36-37) frameworks.
1) Quantity, number and operations
Classification, ordering and quantification (using numbers) are fundamental to the process of
making sense of and organising the world. Whole numbers are used for counting, measuring
and estimating; fractions are needed to express greater precision, ratios for making relative
comparisons, and positive and negative numbers for expressing direction. Numbers are also
fundamental to the processes of ordering and labelling (e.g. telephone numbers and postal
codes) and calculating. A very important part of quantitative literacy is the possession of a
good “sense” of magnitude and the ability to judge the required level of accuracy in a given
context, and to assess the consequences of any inaccuracies.
2) Shape, dimension and space
The study of shapes is connected with the ability to know, explore and move with under-
standing in the real space in which we live. This ability requires understanding the spatial
properties of objects and their positions relative to each other, as well as how they appear to
us. We need to understand the relationships between shapes and images (for example the
representation of three dimensions in a two-dimensional image). The ideas of dimension and
space involve the visualisation, description and measurement of objects in one, two or three
dimensions (projections, lengths, perimeters, surfaces, location) and require the ability to
estimate and to make direct and indirect measurements of direction, angle and distance.
3) Relationships, pattern, permutation
The capacity to identify patterns and relationships is fundamental to quantitative thinking.
Relationships between quantities can be represented through the use of tables, charts,
graphs, symbols and text. The ability to generalise and to describe relationships between
variables is essential for understanding even simple social or economic phenomena, and
is fundamental to many everyday quantitative activities. Patterns or relationships that
involve the dimension of time require particular attention, as they are so fundamental to our
experience of the world (seasons, tides, days, phases of the moon and so on).
4) Change and rates
The concept and measurement of time intervals is fundamental to understanding changes
that occur over time. Change is inherent in all natural phenomena, and we are surrounded by
evidence of temporary and permanent relationships among phenomena. Individuals grow,
populations vary, prices fluctuate, travelling objects speed up and slow down. The measurement
of rates of change (and changes in the rate of change) helps provide a description of the world
as time passes.
5) Data representation and analysis
The idea of “data” includes ideas such as variability, sampling and error, and statistical topics
such as data collection and analysis, data displays and graphs. People are often required to
interpret (or produce) analyses and representations of data, such as frequency tables, charts
(such as pie and bar charts) and descriptive statistics (such as averages).
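As a small illustration of the descriptive statistics referred to here (the data are invented, and the snippet uses only Python's standard library statistics module):

    # Invented data: the summary statistics a reader of a table or chart
    # is expected to understand and, where necessary, compute.
    import statistics

    marks = [45, 52, 52, 61, 68, 70, 74]
    print("mean:  ", round(statistics.mean(marks), 1))   # 60.3
    print("median:", statistics.median(marks))           # 61
    print("mode:  ", statistics.mode(marks))             # 52
    print("range: ", max(marks) - min(marks))            # 29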
6) Chance and uncertainty
The idea of “chance” is expressed mathematically in terms of probability. Competence in this
area involves the ability to attach a number to the likelihood (or risk) of an uncertain event.
Being able to understand statements about and to reason with probabilities is necessary,
for example, in understanding weather forecasts, financial and social phenomena, legal
arguments and health risks.
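One way of attaching a number to the likelihood of an uncertain event is illustrated below with a simple simulation (our sketch; the dice example is invented, not drawn from the source frameworks):

    # Invented example: the probability of rolling at least one six in
    # four throws of a fair die, estimated by simulation and compared
    # with the exact value 1 - (5/6)**4 (about 0.52).
    import random

    TRIALS = 100_000
    hits = sum(
        any(random.randint(1, 6) == 6 for _ in range(4))
        for _ in range(TRIALS)
    )
    print("estimated:", hits / TRIALS)
    print("exact:    ", 1 - (5 / 6) ** 4)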
Representation of quantitative ideas
When a person engages with a real situation or a problem of a quantitative nature (a quantitative
event), the information needed can be presented in many different forms, including:
- symbols (numbers, variables or formulae).
- words (verbal or written texts, using ordinary words or specialised terminology, or special formats such as forms).
- tables and charts (tables of data; bar, pie, line and other charts or graphs).
- objects or pictures (objects to be counted, visual displays, scale models, diagrams or maps).
Activation of enabling knowledge, behaviours and processes
The way in which a person will respond to a quantitative event will depend on her/his
quantitative knowledge and skills, quantitative reasoning ability, literacy skills and beliefs and
attitudes. These are briefly discussed below.
1) Quantitative (mathematical and statistical) knowledge and skills
Mathematical and statistical knowledge, including the understanding of mathematical concepts
and access to computational skills and procedures, is the basis for being able to manage
many quantitative tasks in real life (ALL, 2002: 20) and for comprehending the significance
of these tasks. The description of these skills and procedures is structured in many different
ways in different school curricula and they may be assessed in surveys such as TIMSS and
PISA. For example, one possible structure is a breakdown of the knowledge and skills into
those related to whole numbers, fractions, decimal representation of fractions, measurements,
differences, directions, ratios, percentages, rates, geometry, use of the Cartesian plane, algebra,
descriptive statistics and probability.
2) Quantitative reasoning
Reasoning quantitatively involves the capacity for logical, systematic thinking (Mullis et al,
2003: 32). It includes intuitive, inductive, deductive and probabilistic reasoning. For example,
reasoning involves the ability to observe patterns and regularities and make conjectures, as
well as the realisation that incomplete evidence or inherent uncertainty could lead to tentative
inferences which may later be found to be erroneous as further evidence unfolds. It also
involves making logical deductions based on specific assumptions and rules, identifying and
using evidence required to support a claim, as well as identifying claims that are supported/not
supported by given evidence. The ability to generate counter-examples to disprove a claim is
also included.
The TIMSS framework (Mullis et al, 2003: 32-33) specifies the following sub-categories under
this heading:
- hypothesise/conjecture/predict (e.g. make conjectures by investigating patterns and trends revealed in tabulated data).
- analyse (e.g. applying relevant statistical analysis to a set of data; extracting information from a situation for which a quantitative activity is relevant).
- generalise (e.g. given a set of data that shows a specific relationship exists for several different years, formulating the statement that the relationship may exist at all times).
- connect (e.g. translating quantitative information from one representation to another, making connections between related quantitative ideas or objects).
- synthesise/integrate: combine disparate procedures or results to form a new result (e.g. combining information from two separate charts to solve a problem).
- solve non-routine problems (e.g. applying quantitative concepts and procedures familiar from one context to a new unfamiliar context).
- justify/prove (e.g. identifying evidence from a chart for the validity of a quantitative statement; explaining why a given statement about a chart is not supported by the evidence in the chart).
- evaluate (e.g. commenting on a survey with obvious flaws, such as too small a sample or non-representative data).
3) Literacy skills
Understanding representations of quantitative information will depend on reading comprehension
and other literacies. The text associated with a quantitative event usually requires a more
analytical reading style than is needed for ordinary prose. Specific literacy is required for
analysing mathematical and statistical relationships described in text and for understanding
the specialised terminology used to describe quantitative concepts and contexts. Making
sense of mathematical or statistical information represented in textual form requires the
understanding and correct use of seemingly basic terms like “greater than, smaller than,
percentage, half of, more than double” as well as terms used for more complex mathematical
and statistical information, such as “significant difference”, “weighted average”, “function of”,
“change of rate”, “random sample”, “probability” and “correlation”. In a similar way,
understanding quantitative text often requires visual literacy associated with interpreting
and creating diagrams, maps, charts, graphs and other visual representations.
4) Use of computational technology
Quantitative literacy includes the ability to make use of and understand the role of computers
in science, social science, professional and everyday life, and in the workplace. “The changing
nature of workplaces and the ubiquity of computer-based systems for the automation and
control of processes and the management of information, has brought about the need for
employees at all levels to engage with these systems, to interpret their outputs and to make
sense of the abstract models on which they are based” (Kent, Hoyles, Noss & Guile 2004: 1).
At its most fundamental level this knowledge includes the role and use of calculators and,
in the context of tertiary education, the effective use of spreadsheets (Frith, Jaftha &
Prince, 2005).
5) Beliefs and attitudes
Affective factors play a particularly significant role in people’s ability to engage productively
with quantitative events. “The way in which people respond to a quantitative situation and
how they choose to act depends on how familiar they feel with such situations and how
confident they are in their own strategies. General dispositions towards mathematical
matters, as well as a person’s self-perception and the degree of a sense of “at-homeness”
with numbers, considerably impact a person’s willingness and ability to perform mathematics
tasks.” (ALL, 2002: 22). In fact, the National Curriculum Statement for the subject
Mathematical Literacy (DoE, 2003: 9) specifies the development of confidence as part of its
definition of the subject area. We have discussed the role of confidence in mathematics
learning in Frith, Jaftha & Prince (2004b).
Expressions of quantitatively literate behaviour
Quantitative literacy can be expressed when a person responds to a quantitative event by
producing a written, oral and/or visual (including concrete objects) text. The notion of “text”
is used broadly (see Archer, Frith & Prince, 2002) and includes many kinds of output, including
concrete objects.
References
ALL, 2002. Adult literacy and lifeskills survey. Numeracy – working draft.
http://www.ets.org/all/numeracy.pdf (accessed 24 March 2003).
American Association for the Advancement of Science, 1993. Benchmarks for Science Literacy.
New York: Oxford University Press.
Amoore H & Griesel H, 2003. The FET schools policy: The National Curriculum Statement and FETC
(General) exit qualification. SAUVCA Summary Report, 23 September 2003.
Anderson LE, 2005. Objectives, evaluation and the improvement of education. Studies in
Educational Evaluation 31, 102-113.
Archer A, Frith V & Prince RN, 2002. A project-based approach to numeracy practices at university
focusing on HIV/AIDS. Literacy and Numeracy Studies, 11(2), 123-131.
Arena LA, 1975. Linguistics and Composition: A Method to Improve Expository Writing Skills.
Washington, DC: Georgetown University Press.
Bachman LF, 1990. Fundamental Considerations in Language Testing. Oxford: Oxford University
Press.
Bachman LF & Palmer AS, 1996. Language Testing in Practice. Hong Kong: Oxford University Press.
Baker D, Clay J & Fox C (eds.), 1996. Challenging ways of knowing. English, Maths and Science.
London and Bristol: Falmer Press.
Baynham M & Baker D, 2002. “Practice” in literacy and numeracy research: Multiple perspectives.
Ways of Knowing, 2(1), 1-9.
Bereiter C & Scardamalia M, 1982. From conversation to composition: The role of instruction in a
developmental process. In R Glaser (ed.), 1982. Advances in Instructional Psychology,
Volume 2. Hillsdale, NJ: Lawrence Erlbaum Associates.
Bloom BS, Englehart MD, Furst EJ, Hill WH & Krathwohl DR, 1949. Taxonomy of Educational
Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain. White
Plains, NY: Longman.
Chapman A, 1998. Academic numeracy: Developing a framework. Literacy and Numeracy Studies,
8(1), 99-121.
Chapman A & Lee A, 1990. Rethinking literacy and numeracy. Australian Journal of Education,
34(3), 277-289.
Cizek G (ed.), 2001. Setting Performance Standards: Concepts, Methods and Perspectives.
Mahwah, New Jersey: Erlbaum.
Cliff AF, Yeld N & Hanslo M (under review). Assessing the academic literacy skills of entry-level
students, using the Placement Test in English for Educational Purposes (PTEEP). Assessment in
Education.
Cummins J, 1980. The cross-lingual dimensions of language proficiency: Implications for bilingual
education and the optimal age issue. TESOL Quarterly 14, 175-87.
Cummins J, 1984. Implications of bilingual proficiency for the education of minority language
students. Language Issues and Education Policies. ELT Documents 119. Oxford: Pergamon Press
and the British Council.
Cummins J, 2000. Language, Power and Pedagogy: Bilingual Children in the Crossfire. Clevedon:
Multilingual Matters Ltd.
Cummins J & Swain M, 1986. Bilingualism in Education. New York: Longman.
Dawe L, 1983. Bilingualism and mathematical reasoning in English as a second language.
Educational Studies in Mathematics, Vol. 14, 325-353.
Department of Education (DoE), 2003. National Curriculum Statement Grades 10-12 (General.)
Mathematical Literacy. Pretoria: Department of Education.
Department of Education, 2005a. Learning Programme Guidelines of the National Curriculum
Statement for Mathematics for Grades 10-12 (General), 29 April 2005, Pretoria.
Department of Education, 2005b. Subject Assessment Guidelines of the National Curriculum
Statement for Mathematics for Grades 10-12 (General), September 2005, Pretoria.
Enright MK, Grabe W, Koda K, Mosenthal P, Mulcahy-Ernt P & Schedl M, 2000. TOEFL 2000 reading
framework: A working paper. Princeton, New Jersey: Educational Testing Service.
Entwistle NJ & Ramsden P, 1983. Understanding Student Learning. London: Croom Helm.
Foxcroft CD & Roodt G, 2005. An Introduction to Psychological Assessment in the South African
Context (2nd edition). Cape Town: Oxford University Press.
Frith V, Bowie L, Gray K & Prince R, 2003. Mathematical literacy of students entering first year at
a South African university. Proceedings of the Ninth National Congress of the Association for
Mathematics Education of South Africa, Rondebosch, South Africa, 30 June–4 July: 186-193.
Frith V, Jaftha JJ & Prince RN, 2004a. Mathematical literacy of students in first year of medical
school at a South African university. In A Buffler & RC Laugksch (eds.), Proceedings of the 12th
Annual Conference of the Southern African Association for Research in Mathematics, Science
and Technology Education. Durban: SAARMSTE, 791-798.
Frith V, Jaftha JJ & Prince RN, 2004b. Students’ confidence in doing mathematics and in using
computers in a university foundation course. In A Buffler and RC Laugksch (eds.), Proceedings
of the 12th Annual Conference of the Southern African Association for Research in Mathematics,
Science and Technology Education. Durban: SAARMSTE, 234-245.
Frith V, Jaftha JJ & Prince RN, 2005. Interactive Excel tutorials in a quantitative literacy course for
humanities students. In MO Thirunarayanan & Aixa Pérez-Prado (eds.), Integrating Technology
in Higher Education. Maryland: University Press of America, 247-258.
Garaway GB, 1994. Language, culture and attitude in mathematics and science learning: A review
of the literature. The Journal of Research and Development in Education, 27 (2): 102-111.
Gee JP, 2000. The New Literacy Studies. From “socially situated” to the work of the social. In
D Barton, M Hamilton & R Ivanic (eds.), 2000. Situated Literacies: Reading and Writing in
Context. London: Routledge, 180-196.
Hambleton R & Zenisky A, 2003. Advances in criterion-referenced testing methods and practices.
In CR Reynolds & RW Kamphaus (eds.), Handbook of Psychological and Educational Assessment
of Children (2nd edition), 377-404. New York: Guilford Press.
Hughes-Hallett D, 2001. Achieving numeracy: The challenge of implementation. In LA Steen (ed.),
Mathematics and Democracy, The Case for Quantitative Literacy. USA: The National Council on
Education and the Disciplines, 93-98.
Intersegmental Committee of the Academic Senates (ICAS), 2003. Academic literacy: A statement
of competencies expected of students entering California’s public colleges and universities.
Sacramento, California: ICAS, Academic Senate for California Community Colleges
(http://www.academicsenate.cc.ca.us/icas.html).
Jablonka E, 2003. Mathematical literacy. In AJ Bishop, MA Clements, C Keitel, J Kilpatrick & FKS
Leung (eds.), Second International Handbook of Mathematics Education, The Netherlands,
Dordrecht: Kluwer Academic Publishers, 75-102.
Kahn P & Kyle J (eds.), 2002. Effective learning and teaching in mathematics and its applications.
The Institute for Learning and Teaching in Higher Education and The Times Higher Education
Supplement. London: Kogan Page Ltd./Stylus Publishing Inc.
Kemp M, 1995. Numeracy across the tertiary curriculum. In RP Hunting, GE Fitzsimmons, PC
Clarkson & AJ Bishop (eds.), International Commission on Mathematics Instruction Conference
on Regional Collaboration, Melbourne: Monash University, 375-382.
http://cleo.murdoch.edu.au/learning/pubs/mkemp/icmi95.html.
Kent P, Hoyles C, Noss R & Guile D, 2004. Techno-mathematical literacies in workplace activity.
International Seminar on Learning and Technology at Work, Institute of Education, London,
March 2004.
www.ioe.ac.uk/tlrp/technomaths/Kent-LTW-seminar-paper.pdf (accessed 17 March 2005).
Larter S, 1991. Benchmarks: Toronto’s response to the testing problem. Federation of Women
Teachers’ Association of Ontario newsletter, 10(1), 5.
Lave J & Wenger E, 1991. Situated learning: Legitimate Peripheral Participation. New York:
Cambridge University Press.
Lawson AE, 1995. Science Teaching and the Development of Thinking. Wadsworth.
Linn RL & Gronlund NE, 2000. Measurement and Assessment in Teaching (8th edition). Upper
Saddle River, New Jersey: Prentice-Hall, Pearson Education.
Marton F & Säljö R, 1976a. On qualitative differences in learning: I – Outcome and process. British
Journal of Educational Psychology, 46, 4-11.
Marton F & Säljö R, 1976b. On qualitative differences in learning: II – Outcome as a function of
the learner’s conception of the task. British Journal of Educational Psychology, 46, 115-127.
Marton F & Säljö R, 1984. Approaches to learning. In F Marton, D Hounsell & NJ Entwistle (eds.),
The Experience of Learning. Edinburgh: Scottish Academic Press, 36-55.
Marton F, Dall’Alba G & Beaty E, 1993. Conceptions of learning. International Journal of Educational
Research, 19(3), 277-300.
Mason JH, 2002. Mathematics Teaching Practice: A Guide for University and College Lecturers.
Horwood Publishing Limited, England.
Masters G & Forster M, 1996a. Progress maps. Assessment resource kit. Melbourne, Australia: The
Australian Council for Educational Research (ACER).
Masters G & Forster M, 1996b. Developmental assessment. Assessment resource kit. Melbourne,
Australia: The Australian Council for Educational Research (ACER).
Mathematical Sciences Educational Board (MSEB), 1993. Measuring What Counts, National
Research Council, National Academy Press, Washington DC.
Mullis IVS, Martin MO, Smith TA, Garden RA, Gregory KD, Gonzalez EJ, Chrostowski SJ, O’Connor
KM, 2003. Trends in International Mathematics and Science Study (TIMSS). Assessment
Frameworks and Specifications 2003 (2nd Edition). International Association for the Evaluation
of Educational Achievement and International Study Center, Lynch School of Education,
Boston College, US.
http://timss.bc.edu/timss2003i/PDF/t03_af_book.pdf (accessed 10 February 2005).
National Assessment Governing Board, 2004. Mathematics Framework for the 2005 National
Assessment of Educational Progress. US Department of Education.
http://www.nagb.org/pubs/m_framework_05/761607-Math%20Framework.pdf.
(accessed 20 March 2006).
Orrill R, 2001. Mathematics, Numeracy and Democracy. In LA Steen (ed.), Mathematics and
Democracy, The Case for Quantitative Literacy. USA: The National Council on Education and the
Disciplines, xiii-xx.
Prince RN, Frith V & Jaftha J, 2004. Mathematical literacy of students in first year of medical school
at a South African University. The 12th Annual Meeting of the Southern African Association for
Research in Mathematics, Science and Technology Education (SAARMSTE), Rondebosch, South
Africa, 13-17 January 2004.
Prince RN & Archer AH, 2005. Numeracy practices in the South African context. The 12th
International Conference on Learning, University of Granada, Spain, 11-14 July 2005.
Säljö R, 1979. Learning in the learner’s perspective. I. Some common-sense conceptions. Reports
from the Department of Education, University of Göteborg, No. 76.
Snow CE, 1987. Beyond conversation: Second language learners’ acquisition of description and
explanation. In JP Lantolf & A Labarca (eds.), Research in Second Language Learning: Focus on
the Classroom. Norwood, NJ: Ablex, 3-16.
Snow CE, Cancino H, De Temple J & Schley S, 1991. Giving formal definitions: A linguistic or
metalinguistic skill? In E Bialystok (ed.), Language Processing in Bilingual Children. Cambridge:
Cambridge University Press, 90-112.
Sons L, 1996. Quantitative reasoning for college graduates: A complement to the standards.
A report of the CUPM Committee on Quantitative Literacy Requirements. MAA.
Steen LA (ed.), 1990. On the Shoulders of Giants: New Approaches to Numeracy. Washington, DC:
National Academy Press.
Steen LA, 2001. The Case for Quantitative Literacy. In LA Steen (ed.), Mathematics and Democracy,
The Case for Quantitative Literacy. USA: The National Council on Education and the Disciplines,
1-22.
Stratton BD & Grindler MC, 1990. Diagnostic assessment of reading. In CR Reynolds &
RW Kamphaus (eds.), Handbook of Psychological and Educational Assessment of Children.
New York: Guilford Press, 523-534.
Street B, 1995. Social Literacies: Critical Approaches to Literacy in Development, Ethnography and
Education. London and New York: Longman.
Tannen D, 1985. Relative focus on involvement in oral and written discourse. In DR Olson,
N Torrance & A Hildyard (eds.), Literacy, Language and Learning. The Nature and Consequences
of Reading and Writing. New York: Cambridge University Press.
Tate WF, 1997. Race-ethnicity, gender, and language proficiency trends in mathematics
achievement: An update. Journal for Research in Mathematics Education, 28, 652-679.
PISA, 2003. The PISA 2003 Assessment Framework – Mathematics, Reading, Science and
Problem-solving Knowledge and Skills. Organisation for Economic Co-operation and Development
(OECD). http://www.pisa.oecd.org/dataoecd/46/14/33694881.pdf (accessed 10 February 2005).
Usiskin Z, 2001. Quantitative literacy for the next generation. In LA Steen (ed.), Mathematics and
Democracy, The Case for Quantitative Literacy. USA: The National Council on Education and the
Disciplines, 79-86.
Van den Heuvel-Panhuizen M, 1996. Written assessment within RME–spotlighting short-task
problems. Assessment and Realistic Mathematics Education, Freudenthal Institute, Utrecht,
133-185.
Van Rossum EJ & Schenk SM, 1984. The relationship between learning conception, study strategy
and learning outcome. British Journal of Educational Psychology, 54, 73-83.
Verhage H & de Lange J, 1996. Mathematics education and assessment: Keynote address to the
AMESA Conference, July 1996. Pythagoras 42, April 1997, 14-20.
Wells G (ed.), 1981. Learning Through Interaction. The Study of Language Development.
Cambridge: Cambridge University Press.
Yeld N, 2001. Equity, assessment and language of learning: Key issues for higher education
selection and access in South Africa. Unpublished PhD manuscript: University of Cape Town.
Yeld N, 2003. Academic literacy and numeracy profiles: An analysis of some results from the AARP
and TELP tests of incoming students (2001/2002 entry years). In Into Higher Education –
Perspective on entry thresholds and enrolment systems. Pretoria: A joint SAUVCA-CTP
publication, 21-52.