Innovations in Education and Teaching International
ISSN: 1470-3297 (Print) 1470-3300 (Online) Journal homepage: https://www.tandfonline.com/loi/riie20

To cite this article: Verónica Villarroel, David Boud, Susan Bloxham, Daniela Bruna & Carola Bruna (2019): Using principles of authentic assessment to redesign written examinations and tests, Innovations in Education and Teaching International, DOI: 10.1080/14703297.2018.1564882

To link to this article: https://doi.org/10.1080/14703297.2018.1564882

Published online: 08 Jan 2019.
Using principles of authentic assessment to redesign written examinations and tests

Verónica Villarroel (a), David Boud (b, c, d), Susan Bloxham (e), Daniela Bruna (a) and Carola Bruna (f)

(a) Center for Research and Improvement of Education (CIME), Faculty of Psychology, Universidad del Desarrollo, Concepción, Chile; (b) Deakin University, Geelong, Australia; (c) School of Education, University of Technology Sydney, Sydney, Australia; (d) Middlesex University, London, UK; (e) Faculty of Arts, Business and Science, University of Cumbria, Carlisle, UK; (f) Department of Biochemistry and Molecular Biology, Universidad de Concepción, Concepción, Chile
ABSTRACT
Tests and examinations are widely used internationally. Despite their pervasiveness, they tend to measure lower order thinking skills in a decontextualized manner at a time when the literature frequently argues for the benefits of a richer, authentic approach to assessment. The focus of this paper is to improve authenticity in test assessment methods through promoting realism, cognitive challenge and evaluative judgement during the planning, administering and following up of assessment tasks. The article builds on a systematic literature review, in which the main principles of authentic assessment were outlined. In this paper, we posit how these principles can be implemented through the three chronological phases of the assessment process: before, during and after the act of assessment.

KEYWORDS
Assessment; authentic assessment (AA); authenticity; testing
Introduction
Tests focused on knowledge reproduction are widely used at universities (Ghosh, Bowles, Ranmuthugala, & Brooks, 2017). There is a strong testing culture in South America (Martínez-Rizo & Mercado, 2015), South East Asia (Gitanjali, 2016) and the Middle East (Mahmoud, 2014), as well as in other university systems (Lesage, Valcke, & Sabbe, 2013). Proponents of testing claim that tests reduce plagiarism (Richardson, 2015), increase reliability (McColongue, 2012) and are easy to correct (McCabe & O'Connor, 2014). These are closed-book tests, in which students are not allowed to bring materials or refer to a textbook. They are administered in controlled conditions as timed unseen tests, in which an invigilator is present to ensure students do not cheat (Hinton & Higson, 2017). Through this process, students tend to become passive learners (Altay, 2014) who memorize content rather than comprehending it (Flores, Veiga-Simão, Barros, & Pereira, 2015).
Why is the emphasis on memorising information a problem? Remembering is the lowest level of knowledge assessment (Anderson & Krathwohl, 2001) and students quickly forget what they memorize (Rawson, Dunlosky, & Sciartelli, 2013). In addition, students come to understand learning as merely the mechanical repetition of data and information (Schell & Porter, 2018). Instead, when students use higher-order cognitive skills to respond to an assessment, such as concluding, designing or evaluating, they gain a deeper understanding (Entwistle, 2009) and show better stability in remembering what was learned (Rawson et al., 2013). Although there are differences between disciplines, memorization is not the ultimate learning goal in any subject, and it ill-equips students for the complex demands of life and work they face on graduation. The achievement of deep learning may require progressively advancing towards it, incorporating memory, analysis and transfer, in different weightings, until students become familiar with the cognitive complexity required.

CONTACT Verónica Villarroel vvillarroel@udd.cl
© 2019 Informa UK Limited, trading as Taylor & Francis Group
Assessment is critical in the learning process (Kearney, Perkins, & Kennedy-Clark, 2015) because it creates a backwash effect on teaching and learning activities (Watkins, Dahlin, & Ekholm, 2005). It prompts opportunities for students to practise higher order thinking skills. Anticipation of assessment has a strong influence on what and how learners study, frames what students do (Boud, 2010), and drives the learning process (Vu & Dall'Alba, 2014). As a result, assessment has been reported as the most effective way to improve the quality of students' achievement (Edström, 2008). When done poorly, it can have the opposite effect.
Why is it necessary to incorporate authenticity in assessment?
To become a good professional, it is not enough to master the knowledge and technical skills of the discipline (Guzzomi, Male, & Miller, 2015). Other competencies are also required, such as critical thinking and problem solving, decision-making, communication, collaboration and innovation (Partnership for 21st Century Skills, 2010). It is difficult for tests in themselves to promote a deep approach to learning, which requires the construction of knowledge, reflection and collaborative work; this limits the achievement of central objectives of higher education (Endedijk & Vermunt, 2013). Improving the assessment process can provide effective support for the development of the skills graduates need today (Medland, 2016). One approach for making this transition is to follow the principles of authentic assessment (Biggs & Tang, 2011).
Authentic assessment is a way to relate learning and work, creating a correspondence between what is assessed in the university and what graduates do in settings in the outside world (Neely & Tucker, 2012). It has an impact on the quality and depth of learning achieved by the student and the development of higher-order cognitive skills (Ashford-Rowe, Herrington, & Brown, 2014). It can support students' growth in personal confidence (Martinez, O'Brien, Roberts, & Whyte, 2018) and autonomous practice (Raymond, Homer, Smith, & Gray, 2012). Moreover, it can improve academic engagement (Kearney & Perkins, 2014), motivation (Nicol, Thomson, & Breslin, 2014), self-regulation (Ling Lau, 2013), and metacognition (Vanaki & Memarian, 2009).
Method
The purpose of this article is to explore how the advantages of authenticity in assessment can be applied within the 'testing' approach to assessment, as described above. In this way it acknowledges the need to improve rather than reject test methods, given their dominant use in many higher education systems. It explores how the principles of authenticity can be incorporated through the three chronological phases of the assessment process: before, during and after the act of assessment in written tests.
The article focuses on the second part of a two-stage project. In stage 1, a systematic review of the literature was undertaken to identify principles of authentic assessment; in stage 2, reported here, the authors undertook an exploratory application of the principles to a testing environment, identifying illustrative and exemplar questions. In stage 1, Villarroel, Bloxham, Bruna, Bruna, and Herrera-Seda (2018) carried out a systematic review of 125 articles on authentic assessment published between 1988 and 2017 to identify its main characteristics as encapsulated in the literature. Thirteen central characteristics were identified, which were grouped into three dimensions that constitute the core of the construct: realism, cognitive challenge and authentic evaluative judgement. Realism is the first principle that distinguishes authentic assessment (Bosco & Ferns, 2014), understood as representing something that might be encountered in the world beyond university. The second principle is cognitive challenge, whereby students use higher-order cognitive skills related to using, modifying, or rebuilding knowledge into something new (Thornburn, 2008). Thirdly, evaluative judgement is a necessary capability of graduates: making decisions about the quality of work of oneself and others. It allows students to anticipate, monitor and improve the quality of their own work and that of others (Tai, Ajjawi, Boud, Dawson, & Panadero, 2017).
Stage 2 adopts a more expository approach. It attempts to posit how the three dimensions of authenticity (realism, cognitive challenge and authentic evaluative judgement) can be applied in a 'testing' assessment environment. To this end, each dimension was mapped against the phases of the assessment cycle (and its elements), using the chronological sequence presented in Figure 1.

In each phase of this chronological framework, we apply the conceptual description of each dimension of authenticity (realism, cognitive challenge and authentic evaluative judgement) to concrete aspects of assessment design, drawing on relevant research literature to support the arguments. Illustrative examples of the principles in practice are provided. Whilst the exploration is not exhaustive in either scope or reference to related studies, it is original in theorising a range of ways in which testing methods can better reflect the essential features of authentic assessment.
Authenticity in the assessment cycle
I. Before: Planning authentic tests
1. Assessing what really matters
In authentic assessment, the validity of what is measured is fundamental. To facilitate the adequate selection of the content, we propose three sources: the graduate profile, course learning outcomes and professional requirements, where they exist. These three elements improve the potential for 'realism' in the assessment.
Graduate profile. The graduate profile represents the competences or learning standards that all graduates need to demonstrate once they finish their studies. These are often articulated at an institutional level. This set of general standards (variously called transferable skills or generic attributes) enables course designers to determine how their course will contribute to this profile and ensure that assessment will be orientated to measure pertinent goals (Hart, Hammer, Collins, & Chardon, 2011). How does each subject connect and contribute to achieving the competences of the graduate profile?
Course learning outcomes. In any educational process, learning outcomes must be assessed, so this is not something exclusive to authentic assessment. However, it is necessary to emphasize the importance of educators asking themselves what students can do by the end of each course and course unit. Authentic assessment can be generated by the use of a backward design methodology (Wiggins & McTighe, 2006), which analyses course learning outcomes and identifies which assessments are necessary for it to be claimed that students have met them all.

Professional requirements. Courses which lead to professions also have professional practice requirements. These include the competences needed for good professional performance. It is necessary to incorporate ways of assessing competences which
will allow students to face typical problems in professional work (Maxwell, 2012). Do the capabilities acquired in the course allow graduates to respond to the problems or functions needed by the profession?

Figure 1. Phases and elements of the implementation of the authentic assessment cycle in written examinations.

Before (planning authentic tests): assessing what really matters; injecting realism into tests; assessing complex thinking.
During (administering tests in an authentic way): using open-book tests; allowing collaborative answers for complex tests; simulating realistic professional environments.
After (following up students' achievement with authentic feedback strategies): having students develop marking criteria; engaging students in peer review; using self-assessment in judging students' own work.
An example is shown below of an item from a third-year undergraduate course in Speaking and Hearing Therapy, Disorders and Intervention of the Swallowing:

Graduate profile:
Solve speaking and hearing problems systematically, drawing on evidence and relevant knowledge.

Course learning outcomes:
Identify therapeutic objectives based on the analysis of clinical cases of patients.

Professional requirements:
Plan a functional and neuromuscular evaluation of the phonoarticulatory organs with patients in different stages of the life cycle.

Test item:
In the functional and neuromuscular evaluation of the phonoarticulatory organs of a 6-year-old girl, it was observed that she presents difficulty carrying food from the vestibule to the occlusal face of the molars; therefore she carries the food to the occlusal face with her finger. To determine the therapeutic objectives of your phonoaudiological therapy, you must start by identifying the muscle that needs rehabilitation to achieve a better functioning in this case. From the following options, select the correct one:

(a) Masseter
(b) Lateral pterygoid
(c) Buccinator
(d) Temporalis
2. Injecting realism into tests
Realism can be accomplished by presenting a real context that describes and delivers
a frame in which a problem is to be solved. Items can be drafted with rich context
simulating real-work situations that function as a proxy for professional performance
even when the course does not include assessment in a professional setting. The
information presented in the context may show more than one perspective of
a phenomenon or create limits or restrictions that must be considered in responding
to the problem.
It is not easy to create good contexts. It is a common occurrence that questions can be answered without analysing the context. In these cases, the context constitutes an ornament or a frame which does not contain information needed to solve the question. Villarroel et al. (2018) showed that 47% of 4401 test items in 6 undergraduate programs presented a context. However, in 73% of them, the information within this context was not needed to answer the question. An example of an ornament context followed by a well-constructed context, from a biology course on the concept of autophagy, is presented:
Ornament context. The Nobel Prize in Medicine was awarded to the biologist Ohsumi for his discoveries of a process called autophagy. Describe the autophagy process and comment on its implications for health.

Well-constructed context. Andrea and Luis are parents for the first time. Andrea had a complication, so she had to be taken to the operating room immediately after delivery. Because of this, she has not been able to breastfeed the baby. The father is very worried, despite the fact that the doctor has told him that this is not a problem for the baby if s/he does not receive nutrients from outside sources during the first hours of life.

Explain to the father the biological mechanism that allows the baby to support its metabolic requirements.

After being breastfed, analyse the changes to the metabolism of this newborn.
3. Assessing complex thinking
It is possible to identify three thinking skill levels (Anderson & Krathwohl, 2001). The first is related to memory skills (recognition or understanding); the second involves analytical skills for information management (comparing, relating, contrasting, interpreting); and the third comprises transfer skills (judging, deciding, criticizing, suggesting, designing, innovating). Authentic assessment privileges the measurement of transfer skills, where the emphasis is on 'why' students learn that content (Avery, Freeman, & Carmichael, 2012), which corresponds to the 'cognitive challenge' principle. This principle seeks that students use knowledge for something: either to 'manage' it by performing cognitive activities related to analysis, comparison or solving a problem, or to display a professional performance that involves higher-order skills, such as evaluating, designing or criticizing. An example of the three levels from neuroscience is described below:
Level 1: Memory Skills
Guillermo had a car accident. His dorsolateral frontal cortex and ventral hypothalamus were destroyed. Draw and label the sagittal section of the brain, labelling at least 10 damaged structures.

Level 2: Analytical Skills
Guillermo had a car accident. He has damaged structures of the cerebral cortex. The mother listens to the doctor state: it is necessary to administer, externally, substances such as insulin, dopamine, leptin, peptides . . . to regulate it. Infer the areas of the cerebral cortex that have been damaged, based on the medical indications.

Level 3: Transfer Skills
Guillermo had a car accident. His dorsolateral frontal cortex and ventral hypothalamus were destroyed. Evaluate severity, explaining three possible consequences according to the damaged structures. Also, suggest one strategy that allows you to improve the quality of his future life, compensating for the effects of the accident.
Multiple-choice questions can be designed in an authentic way (Douglas, Wilson, & Ennis, 2012) if they require students to undertake decision-making or problem-solving in a contextualized situation and to justify the option chosen through constructed responses. This new format is more complex and students will take longer. They may score lower because they are not used to these demands, particularly students who may have learned that success is obtained through memorization (Jensen, McDaniel, Woodard, & Kummer, 2014). Students may need to be aided in making such a transition.
II. During: Administering tests
Sitting a formal test is a stressful event, uncommon in the world outside educational institutions (Brown, Bull, & Pendlebury, 2013). Tests induce anxiety, affecting students' self-esteem and self-perceptions as learners, especially if they have previously had bad experiences (Harlen, 2005). In contrast, assessment practices such as problem-based assignments or project work are perceived by students to be fairer and more effective (Pereira, Flores, & Barros, 2017). How can tests include these benefits of performance-based tasks? Three strategies are proposed that respond to the principle of realism, because they link the assessment situation with the external world:
(1) Using open-book tests. Students report feeling less anxious and more confident on open-book tests (Betts, Elder, Hartley, & Trueman, 2009). In addition, the cognitive sciences propose that human cognition extends beyond the individual mind, encompassing other people, symbolic meanings, the environment and artefacts. A mind limited only to what we can remember at a certain time is not a good preparation for modern life, especially considering that in workplaces there is access to the internet, books and other people to fulfil tasks.
(2) Allowing collaborative answers for complex tests. Learning is built together with others and in interaction. The concept of the zone of proximal development explains the difference between individual performance in a given task and the performance achieved when the same task is carried out with someone more capable (Wass, Harland, & Mercer, 2011). Consequently, students with low individual marks obtain higher marks in group tests, also displaying more active learning than in individual tests (Almond, 2009). The level of commitment between members is a factor in achieving high performance and learning gains (Johnson & Johnson, 2009). Forming small groups and offering a sufficiently complex task that requires dialogue and discussion can help promote this (Davies, 2009). Students learn more from having to argue for what they thought was the right answer and listening to others' reasons (Zhang, Ding, & Mazur, 2017).
(3) Simulating realistic professional environments. Authentic assessment can have positive outcomes on student engagement and motivation in the learning process (Nicol et al., 2014). It is likely that one reason students perform better is that such tasks help develop their professional identity (Huxham, Campbell, & Westwood, 2012). Therefore, it is important that the conduct of tests emulates workplace conditions, for example: sending the test via e-mail and requesting students send their answers in the same way at a stipulated time (O'Moore & Baldock, 2007), or responding to the test on their laptop in the classroom (not only paper-and-pencil tests). The following is an example of authentic test administration:

The written test is performed in pairs with open books. The case is sent the previous day by e-mail, without the associated questions. Then, the questions are delivered in the classroom and students can also work outside of it with their materials.
III. After: Following up
Feedback is important in any assessment, being one of the most powerful influences on students' learning (Hattie & Timperley, 2007). To make a feedback process authentic it is necessary to include evaluative judgement activities that prepare students for what they will have to do in the world beyond higher education (Tai et al., 2017): that is, identify how to judge good work and apply this to their own work and that of others. It helps them to achieve the knowledge, skills and predispositions that underpin lifelong learning activities, promoting the development of autonomy (Carter, Sidebotham, Creedy, Fenwick, & Gamble, 2015) and reflective practice (Tait-McCutcheon, Drake, & Sherley, 2011). Three strategies are proposed:
(1) Having students develop marking criteria. Students can jointly construct criteria for marking using their own resources. The act of co-creating marking criteria engages students in a deep understanding of knowledge (O'Donovan, Price, & Rust, 2008), because they must go back to study, review and look for information to create the guideline. Teachers can analyse the students' criteria and select the better descriptions when rewriting the final rubric.
(2) Engaging students in peer review. Authentic feedback processes improve students' ability to judge others' work, as this is what is required in workplaces, thus developing evaluative judgement (Tai et al., 2017). Kearney and Perkins (2014) reported that 82% of their students considered that seeing others' work in the process of peer marking promoted better learning. In this context, peer review can be carried out when two peers collaboratively mark another student's anonymous test, judging their performance in the test. In these settings, a final grade may incorporate teacher assessment and students' co-assessment (Tai et al., 2017).
(3) Using self-assessment in judging students' own work. Assessment can be more authentic when students are involved in dialogue and collaboration with their teachers in feedback processes (Bloxham & West, 2004). Kearney et al. (2015) point out that in the first undergraduate year, students can self-assess, judge, mark and defend their own answers in a test in conversation with the teacher. Students develop an active role in constructing meaning with their teacher through an intersubjective relationship, exchanging and negotiating points of view (Lipnevich, Berg, & Smith, 2016; López-Pastor & Sicilia-Camacho, 2017). The following is an example of authentic feedback:

In the class after the test, students define marking criteria in pairs. Each group presents to the class, and together they identify the main indicators and their description for three different levels of performance. Using those indicators, students review their own work and make qualitative comments about its strengths and weaknesses. Grades can be generated from weighting the teacher's evaluation of the test and the students' comments.
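Such a weighted grade can be sketched as a simple convex combination; the symbols and weight below are illustrative assumptions, not values specified in this paper:

```latex
% G: final grade; T: teacher's evaluation of the test;
% S: the mark implied by the student's self-assessment comments;
% w: weight given to the teacher's evaluation (illustrative).
G = w \, T + (1 - w) \, S, \qquad 0 \le w \le 1
```

Choosing a large w keeps the teacher's judgement dominant while still giving the student's self-assessment a formal stake in the grade.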
Conclusion
Higher education must assess the critical competences needed for solving realistic and contextualized problems using higher-order skills, in order that students become good professionals and citizens. As tests are so widely used in higher education, this paper proposes changes to make them more authentic at three moments: planning, administering and following up. While it may be desirable to lessen the overall weighting of tests in assessment regimes and develop multiple forms of assessment, we have shown that some progress can be made towards designing tests that draw on the key dimensions of authentic assessment, and thus promote deep approaches to learning, a more meaningful and engaging experience for students, and better preparation for the demands of work and life. Making assessment more authentic is a challenging process and will not occur without educational leadership and a desire to ensure that courses serve the needs of students beyond graduation.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Verónica Villarroel has a PhD in Psychology. She works as a teacher and researcher at the Faculty of Psychology at Universidad del Desarrollo in Chile. She is the Director of the Center for Research and Improvement of Education (CIME) of the same faculty and university.

David Boud has a PhD in Education. He has been involved in research on higher education topics for more than 30 years. He is an Emeritus Professor of the School of Education at the University of Technology Sydney. He is also the founder and Director of the Centre for Research in Assessment and Digital Learning at Deakin University in Melbourne, Australia.

Susan Bloxham has a PhD in Educational Research. She is currently an Emeritus Professor in the Faculty of Arts, Business and Science and the Research Institute for Professional Learning in Education (RIPLE), following her retirement from the post of Director of the centre at the University of Cumbria, England.

Daniela Bruna has a PhD in Psychology. She works as a teacher and researcher at the Faculty of Psychology of Universidad del Desarrollo in Chile, and also researches at the Center for Research and Improvement of Education (CIME), which belongs to the same faculty.

Carola Bruna has a PhD in Biological Sciences. She works as a teacher and researcher in the Department of Biochemistry and Molecular Biology, Faculty of Biological Sciences, at Universidad de Concepción, Chile.
ORCID
Verónica Villarroel http://orcid.org/0000-0002-3000-2248
Daniela Bruna http://orcid.org/0000-0001-7424-2959
References
Almond, R. J. (2009). Group assessment: Comparing group and individual undergraduate module marks. Assessment & Evaluation in Higher Education, 34, 141–148.
Altay, B. (2014). User-centered design through learner-centered instruction. Teaching in Higher Education, 19, 138–155.
Anderson, L., & Krathwohl, D. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman.
Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39, 205–222.
Avery, P. G., Freeman, C., & Carmichael, D. (2012). Developing authentic instruction in the social studies. Journal of Research in Education, 12, 50–56.
Betts, L., Elder, T. J., Hartley, J., & Trueman, M. (2009). Does correction for guessing reduce students' performance on multiple-choice examinations? Yes? No? Sometimes? Assessment & Evaluation in Higher Education, 34, 1–15.
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university: What the student does. Maidenhead, Berkshire: Open University Press.
Bloxham, S., & West, A. (2004). Understanding the rules of the game: Marking peer assessment as a medium for developing students' conceptions of assessment. Assessment & Evaluation in Higher Education, 14, 20–30.
Bosco, A. M., & Ferns, S. (2014). Embedding of authentic assessment in work-integrated learning curriculum. Asia-Pacific Journal of Cooperative Education, 15, 281–290.
Boud, D. (2010). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22, 151–167.
Brown, G. A., Bull, J., & Pendlebury, M. (2013). Assessing student learning in higher education. Oxford: Routledge.
Carter, A., Sidebotham, M., Creedy, D., Fenwick, J., & Gamble, J. (2015). Strengthening partnership: The involvement of health care providers in the evaluation of authentic assessment within midwifery undergraduate education. Nurse Education in Practice, 15, 327–332.
Davies, W. M. (2009). Groupwork as a form of assessment: Common problems and recommended solutions. Higher Education, 58, 563–584.
Douglas, M., Wilson, J., & Ennis, S. (2012). Multiple-choice question tests: A convenient, flexible and effective learning tool? A case study. Innovations in Education and Teaching International, 49, 111–121.
Edström, K. (2008). Doing course evaluations as if learning matters most. Higher Education Research and Development, 27, 95–106.
Endedijk, M. D., & Vermunt, J. D. (2013). Relations between student teachers' learning patterns and their concrete learning activities. Studies in Educational Evaluation, 39, 56–65.
Entwistle, N. (2009). Teaching for understanding at university: Deep approaches and distinctive ways of thinking. Basingstoke: Palgrave Macmillan.
Flores, M. A., Veiga-Simão, M. A., Barros, A., & Pereira, D. (2015). Perceptions of effectiveness, fairness and feedback of assessment methods: A study in higher education. Studies in Higher Education, 40, 1523–1534.
Ghosh, S., Bowles, M., Ranmuthugala, D., & Brooks, B. (2017). Authentic assessment in seafarer education: Using literature review to investigate its validity and reliability through rubrics. WMU Journal of Maritime Affairs, 15, 317–336.
Gitanjali, M. (2016). The three Rs of written assessment: The JIPMER experience. Journal of Pharmacology and Pharmacotherapeutics, 7, 115–119.
Guzzomi, A., Male, S., & Miller, K. (2015). Students' responses to authentic assessment designed to develop commitment to performing at their best. European Journal of Engineering Education, 42, 1–22.
Harlen, W. (2005). Teachers' summative practices and assessment for learning: Tensions and synergies. Curriculum Journal, 16, 207–223.
Hart, C., Hammer, S., Collins, P., & Chardon, T. (2011). The real deal: Using authentic assessment to promote student engagement in the first and second years of a regional program. Legal Education Review, 21, 97–121.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.
Hinton, D. P., & Higson, H. (2017). A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment. PLoS ONE, 12, e0182711.
Huxham, M., Campbell, F., & Westwood, J. (2012). Oral versus written assessments: A test of student performance and attitudes. Assessment & Evaluation in Higher Education, 37, 125–136.
Jensen, J., McDaniel, M., Woodard, S., & Kummer, T. (2014). Teaching to the test . . . or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26, 306–329.
Johnson, D. W., & Johnson, F. P. (2009). Joining together: Group theory and group skills (10th ed.). New Jersey, NJ: Pearson.
Kearney, S., & Perkins, T. (2014). Engaging students through assessment: The success and limitations of the ASPAL (Authentic self and peer-assessment for learning) model. Journal of University Teaching and Learning Practice, 11, 1–13.
Kearney, S., Perkins, T., & Kennedy-Clark, S. (2015). Using self- and peer-assessments for summative purposes: Analysing the relative validity of the AASL (Authentic assessment for sustainable learning) model. Assessment & Evaluation in Higher Education, 41, 1–14.
Lesage, E., Valcke, M., & Sabbe, E. (2013). Scoring methods for multiple choice ssessment in higher
educationIs it still a matter of number right scoring or negative marking? Studies in
Educational Evaluation,39(118193). doi:10.1016/j.stueduc.2013.07.001
Ling Lau, K. (2013). Chinese language teachers' perception and implementation of self-regulated
learning-based instruction. Teaching and Teacher Education, 31, 56–66.
Lipnevich, A. A., Berg, D. A., & Smith, J. K. (2016). Handbook of human and social conditions in
assessment. New York: Routledge.
López-Pastor, V., & Sicilia-Camacho, A. (2017). Formative and shared assessment in higher education:
Lessons learned and challenges for the future. Assessment & Evaluation in Higher Education,
42, 77–97.
Mahmoud, F. A. (2014). A cross-cultural study of students' perceptions of assessment practices in
higher education. Education, Business and Society: Contemporary Middle Eastern Issues, 7,
293–315.
Martinez, M., O'Brien, M., Roberts, K., & Whyte, D. (2018). Critical pedagogy and assessment in higher
education: The ideal of 'authenticity' in learning. Active Learning in Higher Education, 19, 9–21.
Martinez-Rizo, F., & Mercado, A. (2015). Estudios sobre prácticas de evaluación en el aula: Revisión
de la literatura [Studies on classroom assessment practices: A review of the literature]. Revista
Electrónica de Investigación Educativa, 17, 17–32. Retrieved from http://redie.uabc.mx/vol17no1/contenido-mtnzrizo-mercado.htm
Maxwell, T. (2012). Assessment in higher education in the professions: Action research as an
authentic assessment task. Teaching in Higher Education, 17, 686–696.
McCabe, A., & O'Connor, U. (2014). Student-centred learning: The role and responsibility of the
lecturer. Teaching in Higher Education, 19, 350–359.
McConlogue, T. (2012). But is it fair? Developing students' understanding of grading complex
written work through peer assessment. Assessment & Evaluation in Higher Education, 37,
113–123.
Medland, E. (2016). Assessment in higher education: Drivers, barriers and directions for change in
the UK. Assessment & Evaluation in Higher Education, 41, 81–96.
INNOVATIONS IN EDUCATION AND TEACHING INTERNATIONAL 11
Neely, P., & Tucker, J. (2012). Using business simulations as authentic assessment tools. American
Journal of Business Education, 5, 449–456.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education:
A peer review perspective. Assessment & Evaluation in Higher Education, 39, 102–122.
O'Donovan, B., Price, M., & Rust, C. (2008). Developing student understanding of assessment
standards: A nested hierarchy of approaches. Teaching in Higher Education, 13, 205–217.
O'Moore, L. M., & Baldock, T. E. (2007). Peer assessment learning sessions (PALS): An innovative
feedback technique for large engineering classes. European Journal of Engineering Education, 32,
43–55.
Partnership for 21st Century Skills. (2010). American Management Association critical skills survey
(p. 21). Tucson, AZ.
Pereira, D., Flores, M. A., & Barros, A. (2017). Perceptions of Portuguese undergraduate students
about assessment: A study in five public universities. Educational Studies, 43, 442–463.
Rawson, K., Dunlosky, J., & Sciartelli, S. (2013). The power of successive relearning: Improving
performance on course exams and long-term retention. Educational Psychology Review, 25,
523–548.
Raymond, J., Homer, C., Smith, R., & Gray, J. (2012). Learning through authentic assessment: An
evaluation of a new development in the undergraduate midwifery curriculum. Nurse Education
in Practice, 13, 471–476.
Richardson, J. T. (2015). Coursework versus examinations in end-of-module assessment:
A literature review. Assessment & Evaluation in Higher Education, 40, 439–455.
Schell, J., & Porter, J. (2018). Applying the science of learning to classroom teaching: The critical
importance of aligning learning with testing. Journal of Food Science Education, 17, 36–41.
Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2017). Developing evaluative judgement:
Enabling students to make decisions about the quality of work. Higher Education, 76, 467–481.
Tait-McCutcheon, S., Drake, M., & Sherley, B. (2011). From direct instruction to active construction:
Teaching and learning basic facts. Mathematics Education Research Journal, 23, 321–345.
Thorburn, M. (2008). Articulating a Merleau-Pontian phenomenology of physical education: The
quest for active student engagement and authentic assessment in high-stakes examination
awards. European Physical Education Review, 14, 263–280.
Vanaki, Z., & Memarian, R. (2009). Professional ethics: Beyond the clinical competency. Journal of
Professional Nursing, 25, 285–291.
Villarroel, V., Bloxham, S., Bruna, D., Bruna, C., & Herrera-Seda, C. (2018). Authentic assessment:
Creating a blueprint for course design. Assessment & Evaluation in Higher Education, 43,
840–854.
Vu, T., & Dall'Alba, G. (2014). Authentic assessment for student learning: An ontological
conceptualization. Educational Philosophy and Theory, 46, 778–791.
Wass, R., Harland, T., & Mercer, A. (2011). Scaffolding critical thinking in the zone of proximal
development. Higher Education Research & Development, 30, 317–328.
Watkins, D., Dahlin, B., & Ekholm, M. (2005). Awareness of backwash effect of assessment:
A phenomenographic study of the views of Hong Kong and Swedish lecturers. Instructional
Science, 33, 283–309.
Wiggins, G., & McTighe, J. (2006). Examining the teaching life. Educational Leadership, 63, 26–29.
Zhang, P., Ding, L., & Mazur, E. (2017). Peer instruction in introductory physics: A method to bring
about positive changes in students' attitudes and beliefs. Physical Review Physics Education
Research, 13, 010104.
... In this context, authentic assessment aims to integrate realism, contextualization and problematization, thereby replicating tasks and outcomes typically undertaken in a professional environment (Villarroel et al., 2018; Wiggins, 1990). In contrast, traditional assessments tend to be limited to objective measurement at a single point in time (Koh et al., 2019) and tend to test memorization, which may not be a true reflection of a student's comprehension (Villarroel et al., 2020). Furthermore, students demonstrate a deeper understanding and more stable learning outcomes when engaged with assessments that test higher-order cognitive skills (Entwistle, 2017; Rawson et al., 2013). ...
... Well-designed authentic assessments can incorporate elements of critical thinking and problem-solving, and introduce students to real-world scenarios that prepare them for the workplace (Schultz et al., 2022). These elements should be aligned with program learning outcomes that include not only subject-specific knowledge acquisition but also opportunities to acquire transferable skills, develop critical thinking competencies, undertake self-reflection and promote an ethos of continued professional development (Meyers & Nulty, 2009; Sarkar et al., 2020; Villarroel et al., 2020). However, effectively implementing authentic assessments across a program of study is challenging. ...
... Generative AI encompasses advanced technologies such as natural language processing (NLP), machine learning, and neural networks, which are able to generate convincing and sophisticated human-like text. These capabilities open new avenues for developing more authentic assessments that directly integrate AI (Lawrie, 2023; Saher et al., 2022; Villarroel et al., 2020). Generative AI has found utility in several aspects of higher education, including as a personal learning tool, a digital tutor, an automated and predictive grading system, a source of immediate and elaborate feedback on student assignments, and a generator of formative exercises and assessment questions (Owan et al., 2023). ...
Article
Full-text available
Generative AI has the potential to transform higher education assessment. This study examines the opportunities and challenges of integrating AI into coursework assessments, highlighting the need to rethink traditional paradigms. A case study is presented that explores AI as an auxiliary learning tool in postgraduate coursework. Students found AI valuable for text generation, proofreading, idea generation, and research but noted limitations in accuracy, detail, and specificity. AI integration offers advantages such as enhancing assessment authenticity, promoting self-regulated learning, and developing critical thinking and problem-solving skills. A holistic approach is recommended, incorporating AI into feedback, adapting assessments to leverage AI’s capabilities, and promoting AI literacy among students and educators. Embracing AI while addressing its challenges can enable effective, equitable, and engaging assessment and teaching practices. Universities are encouraged to strategically integrate AI into teaching and learning, ultimately transforming the educational landscape to better prepare students for an AI-driven world.
... 'Authenticity' is a form of Constructivist pedagogy that places value on student-driven construction of knowledge, depth-focused inquiry, and value beyond the educational context (Newmann et al., 1996; Bada, 2015; Villarroel et al., 2020). In higher education, it is often connected to assessment, and is frequently taken to refer to assessments that replicate, or are comparable to, tasks that students will experience post-graduation, often in workplace contexts (Colthorpe et al., 2021). ...
... This is arguably due to a drive to increase students' 'employability' that has taken root in recent decades (Small et al., 2018), meaning that many students (and their families) see university as preparatory for, and an investment towards, a career (Brooks et al., 2020). Consequently, the theory surrounding (and the design of) authentic assessments has become geared towards preparing students for workplace activities (Maxwell, 2012; Sotiriadou et al., 2020; Villarroel et al., 2020). The specificity of the workplace context varies across disciplines: it can take the form of conceptual skills such as 'leadership' that are applicable in multiple professional contexts (see Wiewiora & Kowalkiewicz, 2019) or specific activities that will be directly used in specific industrial spaces (see Poindexter et al., 2015; Maxwell, 2012; and Koretsky et al., 2021). ...
... In Indonesia, a survey conducted by the Ministry of Education, Culture, Research, and Technology found that 74% of educators struggled to effectively evaluate students' competencies during online learning (Syaifuddin, 2020). Against this backdrop, authentic assessment emerges as a critical approach due to its capacity to evaluate students' competencies based on their ability to apply knowledge in real-world scenarios (Villarroel et al., 2020). This study is crucial for addressing these challenges while leveraging opportunities for developing relevant assessments in the post-pandemic era. ...
... This process facilitates the development of metacognitive skills as they actively think about the best ways to complete tasks (Hidayat, 2024). Studies by Villarroel et al. (2020) demonstrate that educational programs integrating metacognition significantly improve students' learning outcomes. However, it is important to note that not all students possess the same level of metacognitive awareness. ...
Article
Full-text available
The transformation of learning in the post-pandemic era has heightened the demand for students' skills in navigating real-world situations. This phenomenon requires both teachers and students to prepare optimally for the learning process, with authentic assessment serving as a strategic approach to address these challenges. This article aims to identify the challenges and benefits of implementing authentic assessment in the context of post-pandemic education in Indonesia and propose collaborative strategies as an innovative solution. The study employs a qualitative descriptive method based on a literature review, analyzing articles from reputable journals. Data were collected through selection, presentation, and conclusion-drawing processes. The findings reveal that authentic assessment offers significant benefits for both teachers and students, such as enhancing the relevance of learning to real-world needs and developing critical skills. However, several challenges persist, including the substantial time and cost requirements, pressure on teachers to design effective assessments, and student anxiety over unfamiliar assessment formats. To address these challenges, collaborative strategies between teachers and students are proposed as an innovative approach. These strategies involve intensive communication, joint planning, and continuous evaluation to optimize the implementation of authentic assessment. The novelty of this study lies in developing a collaborative approach to respond to the specific challenges of the post-pandemic era. Recommendations include integrating collaborative strategies into the education curriculum and providing teacher training to enhance the effectiveness of authentic assessments. These findings make a significant contribution to the development of social sciences and humanities, particularly in the context of educational reform in Indonesia.
... This finding is in harmony with Benediktsson and Ragnarsdóttir (2020), Gijbels and Dochy (2006), and Wang and Brown (2014), who consider such a testing and assessment process demotivating, or as having no positive impact on students' learning and achievement, since students prefer assessment strategies that have an impact on their learning and involve them more deeply in the process of learning. Villarroel et al. (2019) agree that an ineffective testing and assessment process does not lead to favourable washback/backwash. Tests and assessments, as they state, should be designed to support higher levels of thinking and critical involvement. ...
Article
Full-text available
The current study aims to investigate Kurdish EFL students' views of the assessment process conducted at EFL departments of public universities in the Kurdistan Region of Iraq (KRI). Because assessment is the core factor in students' learning, involvement, and evaluation, and the only gauge of their progress and development, much attention needs to be given to the assessment process. This study specifically examines Kurdish EFL students' perceptions of six criteria: the design, administration, purpose, effectiveness and washback, scoring and grading, and feedback of the testing and assessment process. For the purpose of data collection, a questionnaire was administered to 116 students of semesters 3, 5, and 7 at the English language departments of some public universities in the KRI during the academic year 2024-2025. Cronbach's alpha was used to analyze the reliability of the questionnaire items, SPSS (version 25) was used to analyze the mean values of the items, and ANOVA was utilized to compare the mean values across the six criteria. Findings indicate significant challenges in the alignment and execution of testing and assessment processes in higher education. While testing and assessment items align with course objectives, they often fail to adequately measure critical thinking and comprehensive language skills. Procedural issues, including unclear instructions, unfair scoring and grading practices, and an overemphasis on grading rather than fostering students' progress and engagement, have undermined the effectiveness of assessments. Additionally, environmental factors such as cheating, unsupportive classroom dynamics, and poor seating quality negatively impact students' performance. A lack of constructive feedback further hinders the development of students' overall skills and learning outcomes. The findings further highlight the need for a holistic approach to assessment that emphasizes student growth, fair evaluation, and the integration of diverse language competencies.
... The tests that are usually used are limited, to a large extent, to pursuing a reproduction of knowledge, and encourage memorization rather than comprehension (Álvarez, 2008). Even today, the exam still prevails as the main assessment instrument (Alsowat, 2022; Paternina & Quessep, 2017; Villarroel et al., 2020), with grading as its fundamental purpose. Likewise, methodologies still have a mainly summative character and do not require students to demonstrate other, complex skills and knowledge such as critical capacity or teamwork (Almerich et al., 2020), nor do they allow students to self-regulate, owing to the absence of feedback during learning. ...
Article
Full-text available
This study used a mixed approach to examine the implications and benefits of intergroup peer assessment in the context of cooperative work in higher education. The main objective was to analyze how this assessment strategy influenced learning, motivation, and the development of transversal competences in a sample of 305 students enrolled in a primary education degree at the University of Zaragoza. An ad hoc questionnaire was used to collect quantitative and qualitative data on students' perceptions of their engagement with the task, the relationship between peer assessment and their learning, and the improvement of their competences, such as active listening, constructive criticism, and critical judgment. First, quantitative descriptive analyses and a Cronbach's alpha reliability analysis were carried out. Second, a category tree was developed to structure the qualitative information. The main results show that students value peer assessment positively because it facilitates their active learning, improves group cohesion, and fosters the development of key competences. In addition, the researchers observed increases in motivation, the ability to work in a team, and critical reflection on the results and processes of students' own work. These findings suggest that intergroup peer assessment not only supports knowledge acquisition, but also promotes cooperation, engagement, and critical thinking among students. In conclusion, intergroup peer assessment emerges as a powerful tool to enhance learning and competence development in higher education by proposing a more participatory and formative approach to assessment processes.
... Students should be encouraged to engage in self-evaluation employing performance-based assessments, wherein they are tasked with assessing their learning and the extent to which they have effectively performed within a given context. Performance-based assessments are pedagogical tools that foster students' capacity to employ their acquired knowledge in innovative and imaginative ways without dependence on GAI (Villarroel et al., 2020). ...
Chapter
Full-text available
Artificial intelligence (AI) in educational institutions is leading to significant transformations within the conventional classroom setting. While this has expanded educational and instructional prospects, educators are also increasingly apprehensive about the potential impact of the widespread implementation of Generative Artificial Intelligence (GAI) on students' learning and the teacher's role in education. AI can potentially support educators in various aspects of their instructional practices. These include facilitating lesson content and delivery, offering constructive feedback on students' academic advancement, fostering student engagement and motivation, and providing valuable insights into students' progress. As students gain increased access to GAI for completing their studies, educators face a mounting challenge in formulating novel approaches to address the ethical use of GAI and to mitigate plagiarism. The advent of AI in education has presented educators with various potential advantages and challenges, and this chapter discusses how to navigate them. It investigates a framework for the possible integration of AI with the three significant elements of teaching and learning, i.e., curriculum design, curriculum delivery, and assessments, with the aim of preserving the stability of the advances made in the traditional classroom while harnessing AI's capabilities to enhance learning outcomes.
... Over-reliance on GAI may also lead to a reduction in students' critical thinking skills and their ability to consolidate information from a range of sources (Warschauer et al., 2023). Coursework is regarded as a more authentic form of assessment (Villarroel et al., 2020); however, the widespread use of GAI may drive universities towards more closed-book exams and thus reduce the opportunity for students to develop these transferable skills. There is, therefore, a risk to education quality and a potentially negative impact on student learning outcomes (Chan and Lee, 2023). ...
Article
Full-text available
Authenticity has been identified as a key characteristic of assessment design which promotes learning. Authentic assessment aims to replicate the tasks and performance standards typically found in the world of work, and has been found to have a positive impact on student learning, autonomy, motivation, self-regulation and metacognition; abilities highly related to employability. Despite these benefits, there are significant barriers to the introduction of authentic assessment, particularly where there is a tradition of ‘testing’ decontextualised subject knowledge. One barrier may be the lack of conceptualisation of the term authentic assessment sufficient to inform assessment design at the individual course level. This article tackles that omission by a systematic review of literature from 1988 to 2015. Thirteen consistent characteristics of authentic assessment are identified leading to the classification of three conceptual dimensions: realism, cognitive challenge and evaluative judgement. These dimensions are elaborated and used to propose a step-based model for designing and operating authentic assessment in individual higher education subjects. This is an Accepted Manuscript of an article published by Taylor & Francis, available online: http://www.tandfonline.com/10.1080/02602938.2017.1412396. Free e-print: https://www.tandfonline.com/eprint/wiwTYaX55jR5qDzFDI5G/full
Article
Full-text available
Evaluative judgement is the capability to make decisions about the quality of work of oneself and others. In this paper, we propose that developing students’ evaluative judgement should be a goal of higher education, to enable students to improve their work and to meet their future learning needs: a necessary capability of graduates. We explore evaluative judgement within a discourse of pedagogy rather than primarily within an assessment discourse, as a way of encompassing and integrating a range of pedagogical practices. We trace the origins and development of the term ‘evaluative judgement’ to form a concise definition then recommend refinements to existing higher education practices of self-assessment, peer assessment, feedback, rubrics, and use of exemplars to contribute to the development of evaluative judgement. Considering pedagogical practices in light of evaluative judgement may lead to fruitful methods of engendering the skills learners require both within and beyond higher education settings.
Article
Full-text available
The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.
Article
Full-text available
Current forms of marketisation in university systems create pressures towards purely ends-focused expectations among students and have implications for learning and assessment processes. The potential harm that these trends have on ‘learning’ should be resisted by educators and students alike. Critical Pedagogy approaches offer one way of conceptualising and implementing such resistance in the interests of ‘authenticity’ in learning. However, the issue becomes sharpest at the point of assessment. Here, the ideals of Critical Pedagogy can collide with student expectations of final degree success. By addressing the question of ‘authenticity’ for assessment in relation to Critical Pedagogy, this article explores the challenges posed by this conundrum and draws upon interviews conducted with module leaders who apply recognisably (although not explicitly) Critical Pedagogy principles in their teaching and in the types of assessment they use. The themes that emerged present a picture of the kinds of potential that Critical Pedagogy influenced forms of assessment have for supporting authenticity in learning, as well as the difficulties involved in its application. It also helps to trace out the possible boundaries for further inquiry.
Article
Full-text available
Some efforts to improve the quality of education are based on the idea that student performance improves if teachers use formative assessment, but research on this is inconclusive. However, in many projects assessment practice has been characterized as formative or not based only on the presence or absence of previously specified behaviors, without considering finer aspects such as the level of cognitive demand of the content or the way teachers give feedback. This paper summarizes the results of projects that explore teachers' classroom assessment practices in detail. The results will guide the design of tools to gather good-quality information about teachers' practices.
Article
In 2011 the authors created a model of self- and peer-assessment known as Authentic Self and Peer Assessment for Learning (ASPAL) in an attempt to better engage seemingly disengaged students in their undergraduate coursework. The model focuses on authentic assessment tasks and engages students by involving them in every step of the process from the creation of the criteria on which they will be marked, through to providing exemplars of work, pilot marking and providing peer feedback. This article examines the ASPAL process with regard to whether or not the students are better engaged in their studies as a result of taking part in this process. Although the results are not definitive, the present study shows that the majority of students who undertook the process found it beneficial and were open to try it again. This article seeks to open a discussion as to the capacity for a specific model of self- and peer-assessment to better engage students in their learning and discern the reasons why students found the model engaging so as to better inform future applications of the model and how it can be applied to a wider audience.
Article
This paper draws upon a broader piece of research on assessment in higher education, particularly focusing on issues regarding the fairness and effectiveness of assessment methods and their implications for the learning process. The perceptions of undergraduate students are analysed taking into account the effectiveness and fairness of both traditional and learner-centred assessment methods, as well as their influence on the learning process. In total, 624 students participated in this study in five Portuguese public universities, across different areas of knowledge and programmes. Data were collected through questionnaires. Findings suggest that assessment is seen as more effective and fairer when it is done through learner-centred assessment methods rather than through traditional assessment (e.g. written tests or exams). The students also claim that they devote more time to study when assessment is performed through learner-centred assessment methods than through traditional ones. The most used assessment methods are written tests and oral presentations in groups. However, differences across programmes were identified, as well as differences according to gender. Implications of the findings for assessment and for the teaching and learning process are discussed.