ASSESSMENT & EVALUATION IN HIGHER EDUCATION, 2018
VOL. 43, NO. 5, 840–854
https://doi.org/10.1080/02602938.2017.1412396
Authentic assessment: creating a blueprint for course design
VerónicaVillarroela, SusanBloxhamb, DanielaBrunaa, CarolaBrunac and
ConstanzaHerrera-Sedad
aCenter for Research and Improvement of Education (CIME), Faculty of Psychology, Universidad del Desarrollo, City of
Concepción, Chile; bFaculty of Education, University of Cumbria, City of Carlisle, UK; cDepartment of Biochemistry and
Molecular Biology, Faculty of Biological Sciences, Universidad de Concepción, City of Concepción, Chile; dFaculty of
Psychology, Universidad del Desarrollo, City of Concepción, Chile
ABSTRACT
Authenticity has been identified as a key characteristic of assessment design which promotes learning. Authentic assessment aims to replicate the tasks and performance standards typically found in the world of work, and has been found to have a positive impact on student learning, autonomy, motivation, self-regulation and metacognition; abilities highly related to employability. Despite these benefits, there are significant barriers to the introduction of authentic assessment, particularly where there is a tradition of ‘testing’ decontextualised subject knowledge. One barrier may be the lack of conceptualisation of the term authentic assessment sufficient to inform assessment design at the individual course level. This article tackles that omission by a systematic review of literature from 1988 to 2015. Thirteen consistent characteristics of authentic assessment are identified, leading to the classification of three conceptual dimensions: realism, cognitive challenge and evaluative judgement. These dimensions are elaborated and used to propose a step-based model for designing and operating authentic assessment in individual higher education subjects.

KEYWORDS: Authentic assessment; higher education; workplace
Introduction
Whilst there are dierent national traditions in assessment practices, we are witnessing a paradigm
change (Baeten, Struyven, and Dochy 2013) involving a transformation from a culture of objective
and standardised tests that are focused on measuring portions of atomised knowledge, towards a
more complex and comprehensive assessment of knowledge and higher-order skills (Shepard 2000;
Birenbaum 2003). This change in assessment relates to the emergence of the Assessment for Learning
(AFL) movement, where all assessment contributes to helping students learn (Sambell, McDowell, and
Montgomery 2013). AFL allows teachers to gather information to adjust their teaching and helps stu-
dents to regulate their own learning (Wiliam et al. 2004; Wiliam 2007).
From this perspective, assessment, teaching and learning are closely related, with each one being
part of the pedagogical process, and where feedback is used to adjust the learning cycle. Within this
paradigm, authenticity has been identied as a key characteristic of assessment design which promotes
learning and employability (Sambell, McDowell, and Montgomery 2013; Bloxham 2015). Authentic
assessment is the focus of this article.
What is authentic assessment?
Authenticity is understood as realism, contextualisation and problematisation when teaching and assess-
ing curricular content (Benner et al. 2009; Raymond et al. 2013). Realism involves linking knowledge
with everyday life and work, contextualisation characterises situations where knowledge can be applied
in an analytical and thoughtful way, and problematisation invokes a sense that what is learned can be
used to solve a problem or meet a need. Thus, authentic assessment aims to integrate what happens
in the classroom with employment, replicating the tasks and performance standards typically faced by
professionals in the world of work (Wiggins 1990).
Benets of authentic assessment
Studies indicate that authentic assessment has an impact on the quality and depth of learning achieved
by the student (Wiggins 1993; Dochy and McDowell 1997) and the development of higher-order cognitive skills (Ashford-Rowe, Herrington, and Brown 2014). Moreover, it improves autonomy (Raymond et
al. 2013), commitment and motivation for learning (Nicol, Thomson, and Breslin 2014), self-regulation
capacity (Pintrich 2000), metacognition and self-reflection (Vanaki and Memarian 2009).
Furthermore, authentic assessment is a response to criticisms of higher education. Students have
diculty applying the knowledge acquired in dierent academic contexts (Andrews and Higson 2014).
They feel unprepared for employment (Ellström and Ellström 2014) and insecure when they begin
working (Ken and Chean 2012).
Employers are dissatised with the performance of recent graduates, who they consider rigid, unable
to adapt to the demands of working life (Plump 2010) and lacking basic skills such as problem solving,
critical thinking, communication skills and teamwork (Singh, Thambusamy, and Ramly 2014).
In this context, authentic assessment appears as a model that can enhance employability because
it promotes abilities needed in the workplace, such as problem-solving skills (Wu, Heng, and Wang 2015),
autonomy (Swaffield 2011), motivation (Gulikers, Bastiaens, Kirschner and Kester 2008), self-regulation
and metacognition (Wu, Heng, and Wang 2015). It provides the opportunity for students to practice
skills and competences that are valued in work. In undertaking the assessment they have to deploy
skills and complete tasks that simulate the activities they will have to conduct in their future jobs. This
consolidates capabilities that are part of employability such as: coping with uncertainty, working under
pressure, planning and thinking strategically, communicating and interacting with others (Andrews
and Higson 2008), as well as better command of disciplinary content knowledge and skills, workplace
awareness, experience and generic skills (Dacre Pool and Sewell 2007).
Barriers to implementing authentic assessment
This research was carried out in Chile where there is a strong culture of testing as the principal form of
summative assessment, particularly in lower level courses. This is common in many higher education sys-
tems worldwide, where a focus on testing risks encouraging superficial approaches to learning (Endedijk
and Vermunt 2013; Beyaztas and Senemoglu 2015) and measuring decontextualised memorization and
understanding of content, and not the integration or application of knowledge (Biggs and Tang 2011)
indicated by authentic assessment. Such learning is unlikely to be useful beyond the classroom (Wiggins
1990; Vanaki and Memarian 2009). Teachers may use multiple-choice tests with adequate validity and
reliability indexes, but not question the relevance and significance of the assessment. In such a culture,
there is a reluctance to use methods that evaluate the construction of knowledge, critical thinking or
problem solving (McCabe and O’Connor 2014).
Teachers are more willing to make changes where the assessment is of workplace and practical
skills rather than subject knowledge (Watkins, Dahlin, and Ekholm 2005; Biggs and Tang 2011). Teachers
are reluctant to change formal assessments, such as examinations, because changing these practices
makes great demands on time, energy and intellectual resources (Brush and Saye 2008). They can also
be perceived as risky (Dawson et al. 2017). In addition, teachers must have deep disciplinary knowledge
as well as great cognitive exibility to monitor, challenge and guide learners toward problem solutions
that have disciplinary rigour (Saye 2013).
Finally, whilst many professions have well-developed approaches to assessing practice-based learn-
ing, the authentic assessment of university-based learning presents more of a challenge. Although the
literature provides a broad understanding of its purposes and value, change may be hindered by a lack
of conceptualisation of authenticity and authentic assessment (Kreber et al. 2007), sufficient to inform
individual course design.
In answer to this challenge, this article draws on a review of authentic assessment literature to deter-
mine the essential design dimensions required to bring authenticity to the assessment of classroom (as
opposed to work-based) learning. It aims to advance authentic and situated learning that encourages
students to develop relevant competencies for their working lives (Segers, Dochy, and Cascallar 2003).
It concludes by proposing a new four-step model to implement authentic assessment derived from
the authors’ analysis of the existing literature.
Method
A systematic review of authentic assessment literature was carried out with the purpose of integrating,
analysing and identifying central themes, following Randolph (2009). We analysed 112 articles that
focused on the subject and were published between 1988 and 2015. The articles were identified in the
Scielo, Scopus and Web of Science indexes, and all of them were published in English-language journals.
The search keywords were: authentic assessment, authentic intellectual work and authentic instruction.
Of the articles, 36% referred to higher education whereas 64% were based in other education sectors.
The analysis sought to identify the core concepts of this construct. A first read of the articles explored
the main characteristics of the construct and generated 13 central characteristics. In an iterative process,
a second read sought to deepen the analysis and elaborate preliminary dimensions of the concept
reflected by these characteristics. This generated three dimensions. These preliminary ideas were
tested in a third reading by two research assistants who completed a template for each article. This
format permitted the researchers to indicate which characteristics from the list and which dimensions
appeared in each article.
The assistants worked independently and in parallel, completing the template for each article. The
concordance between the evaluators was analysed through Cohen's kappa coefficient. This generated
a value of 0.82, showing a high level of agreement between them. The characteristics most frequently
related to authentic assessment were identified, the dimensions that make it up were determined and,
finally, a tentative model was developed to underpin authentic assessments in higher education. This
model was used in a research project funded by the Chilean Ministry of Education.
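For reference, Cohen's kappa corrects the raw percentage agreement between two coders for the agreement expected by chance. A sketch of the standard formula (the article itself reports only the resulting value) is

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \sum_k p_{k,1}\, p_{k,2},

where p_o is the observed proportion of coding decisions on which the two assistants agreed, and p_e combines the proportions p_{k,1} and p_{k,2} with which each assistant used category k. A kappa of 0.82 is conventionally interpreted as almost perfect agreement.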
The results
The characteristics of authentic assessment
Thirteen characteristics of authentic assessment found in the literature were identified as set out in
Table 1. The following text draws on indicative examples of those texts to illustrate the role of these
characteristics in the development of the overall construct. The 13 characteristics are those listed in Table 1.
Archbald and Newmann made the first formal use of the term ‘authentic’ in the context of learning
and assessment in 1988. At first, these authors used the term authentic performance, which was asso-
ciated with production of knowledge, deep understanding, integration of knowledge, and use of prior
knowledge and relevant performance beyond assessment. It has also been associated with practical
use, alluding to the purpose, utility or ultimate goal of learning, especially in primary school (Moon et
al. 2005; Meisels, Wen, and Beachy-Quick 2010).
Table 1.Dimensions of the authentic assessment (AA).
AA dimensions N° and % of articles in the dimension Concepts and ideas associated with AA, with the N° and %of articles
Realism 79 (71%) Problems contextualised to everyday life (55/49%).
Relevance beyond the classroom (54/48%).
Authentic performance (48/43%).
Competencies for work performance (32/29%).
Similar tasks to the real/working world (28/25%).
Practical value (5/5%).
Cognitive challenge 62 (55%) Higher order thought (54/48%).
Ability to solve problems (52/46%).
Ability to make decisions (20/18%).
Evaluative judgement 42 (38%) Feedback (45/40%).
Formative sense (51/46%).
Assessment criteria known a priori (20/18%).
Later, emphasis was placed on its relationship with higher-order thinking (Avery, Freeman, and
Carmichael 2012; Bosco and Ferns 2014), the ability to solve problems (Elliott and Higgins 2005;
Newmann, King, and Carmichael 2007; Wu, Heng, and Wang 2015) and decision-making (Newman,
Bryk, and Nagaoka 2001; Ohaja, Dunlea, and Muldoon 2013).
Wiggins (1993) and Torrance (1995) introduced the concept of relevance in assessment. Authentic
assessment engages students with problems or important questions, which have worth beyond the
classroom. The tasks are replicas or analogies of the types of problems that are faced in working life.
The idea is that students use knowledge to show effective and creative performances (Wiggins and
McTighe 2006; Saye 2013).
In this way, researchers began to talk about authentic assessment as a strategy to relate learning and
work, creating a correspondence between what is assessed in the university and what students need
to do in the workplace (Gulikers, Bastiaens and Kirschner 2004). This methodology introduces similar
tasks to those faced in real life or work (Brown 2005; Raymond et al. 2013). However, the literature
does not provide much detail about these real-world elements and how this kind of assessment is
properly implemented (Cummings and Maxwell 1999). Some refer to problems contextualised to
everyday life (Ashford-Rowe, Herrington, and Brown 2014). Benner et al. (2009) consider that authen-
tic assessment consists of asking students to tackle cases, accompanied by a rubric for its assessment.
The case must be a real-life situation, where students are asked to apply their knowledge and make
decisions to solve the problem.
Frey, Schmitt and Allen (2012) went further in emphasising that the context of assessment should
be realistic and cognitively complex. The task should involve performance and play a formative role (a
characteristic noted in multiple articles). From this perspective, authenticity became a crucial element
for assessing relevant skills for successful job performance (Segers, Dochy, and Cascallar 2003; Gielen,
Dochy and Dierick 2003). Assessment should be similar to what happens and what is evaluated in the
professional eld, including collaborative or peer-to-peer work (Raymond et al. 2013; Ashford-Rowe,
Herrington, and Brown 2014).
Wiggins (1990) amongst others highlighted that the structure and expectations of authentic assess-
ment ought to be transparent: the assessment criteria should be known in advance. In this regard,
feedback to students was central and they could repeat the same assessment more than once, since the
aim was that students learn and improve their performance effectively (Swan and Hofer 2013). Swaffield
(2011) notes that wrong answers are an opportunity to diagnose what needs to be improved. Error
must be worked on through mechanisms of self and peer-assessment, using formative assessment as
a means of feedback (Boud and Walker 1990; Frey, Schmitt and Allen 2012; Wu, Heng, and Wang 2015).
Table 1 sets out the central characteristics associated with authentic assessment methodology found
in the existing literature. It reveals that many of these are reflected in a high proportion of research
articles, with the most frequent seven featuring in over 40%.
However, it is interesting that some characteristics, such as the ability to make decisions and
teamwork/collaborative work, feature only weakly in the authentic assessment literature, despite the
recurring stress on these ‘soft skills’ by employers (Archer and Davison 2008;
NACE 2016). This may be a consequence of the individualised nature of most assessment methods,
but it suggests that authentic assessment approaches which do not foster these skills will have less of
a potential impact on students’ learning for employability.
Components of authentic assessment: realism, cognitive challenge and evaluative judgement
The second reading distinguished three dimensions that represent the essence of authentic assessment.
These overarching dimensions are: (1) realism, (2) cognitive challenge and (3) evaluative judgement,
and they are present in all theoretical formulations of the concept. Table 1 also lists the dimensions
and their component characteristics according to the frequency of articles that refer to them as an
inseparable part of the authentic assessment.
Realism
Realism can come in two forms: on the one hand, the presence of a ‘real context’ that describes and
delivers a frame for the problem to be solved (Bosco and Ferns 2014); on the other hand, a task to be
solved that is ‘similar’ to what is faced in real and/or professional life (Saye 2013).
In authentic assessment, the context is realistic when information about the described situa-
tion-problem comes from real and/or professional life, involving pertinent and relevant questions to
solve (Swan and Hofer 2013), applicable to realistic situations (Wiggins and McTighe 2006). This transfer
is possible when ideas relate to facts and skills to experiences, applying previous knowledge to new
situations and tasks (Ashford-Rowe, Herrington, and Brown 2014). This realistic context can be present
in examinations and written tasks when items are prepared such as case analyses, problem solving,
and short or extensive essay questions, which act as a proxy of the real world.
The second way to create realism is through performance-based tasks, where students produce work
or demonstrate knowledge, understanding and skills in activities that are close to the profession (Palmer
2004). Wiggins and McTighe (2006) designated as authentic those assessment requirements that demand
a true representation of performance in that field of employment. Teachers must know the
typical tasks and functions that employment demands, and design assessments that are authentic
simulations of real professional tasks.
Cognitive challenge
In authentic assessment, the task involves building knowledge, and using higher-order cognitive skills,
such as those proposed in Bloom and Anderson’s taxonomies (Wiggins 1993; Avery and Freeman 2002).
Through assessment, it aims to encourage processes of problem solving, application of knowledge
and decision-making which correspond to the development of cognitive and metacognitive skills
(Elliott and Higgins 2005; Newmann, King, and Carmichael 2007).
This type of assessment intends that students go beyond the textual reproduction of fragmented and
low order content, and move towards understanding, establishing relationships between new ideas and
previous knowledge, linking theoretical concepts with everyday experience, deriving conclusions from
the analysis of data, allowing them to examine both the logic of the arguments present in the theory, as
well as its practical scope (Gulikers, Bastiaens and Kirschner 2004; Ohaja, Dunlea, and Muldoon 2013).
Students should not only respond well to a question, but also demonstrate performances (such as
critical and reective analysis) and concrete products (such as a diagnostic report), exhibiting genuine
mastery of content (Avery, Freeman, and Carmichael 2012).
Dierent research has compared short, medium and long-term performances in tests that meas-
ure memory skills in closed-response items and in tests involving cognitive performances, measuring
higher-order cognitive abilities using open answer items. The results show that the stability in stu-
dents’ performance is greater in items that measure complex cognitive abilities (Rawson, Dunlosky,
and Sciartelli 2013), suggesting that assessing higher-order cognitive performance generates a level
of learning that lasts over time.
Transfer of knowledge is promoted by such assessments, since they stimulate skills that can be used
in contexts other than academic ones and that are required and valued in the world beyond the university.
This is rearmed by Bloxham and Boyd (2007), who argue that being able to reproduce knowledge in
a decontextualised examination does not guarantee that knowledge can be used in a real-life environ-
ment. Students need to practice these applications and knowledge transfer skills to solve real problems.
Evaluative judgement
One of the aims of authentic assessment is for students to develop criteria and standards about what a
good performance means, in order that they can judge their own performance and regulate their own
learning; we are referring to this as ‘evaluative judgement’, a term which is emerging in the literature to
describe these capabilities (Tai et al. 2016). Evaluative judgement is a recognition that the assessment
of student achievement involves both standards (for example in rubrics) and the practice of judgement
(Wyatt-Smith and Klenowski 2012). Developing the skills of evaluative judgement is also considered
benecial to eective learning. Boud and Molloy (2013) argue that, in order to learn, students need
to build a precise judgement about the quality of their work, and calibrate these judgements in the
light of evidence. Thus, students can identify areas that need improvement and see changes over time,
developing a growing understanding of acceptable standards of performance (Sadler 1989, 2005; Boud
and Falchikov 2006).
A chief component of developing evaluative judgement is formative assessment. Students need to
be exposed to a variety of tasks with diverse performance requirements, and have the experience of
learning about quality, judging quality and seeking and receiving feedback. As part of this, revealing
assessment criteria to students has been shown to help them compare their efforts with the desired
standard and plan their work (Panadero and Romero 2014), although there is growing recognition of the
limitations of published criteria alone in conveying requirements. Furthermore, recent developments
in feedback research stress its potential to nurture students’ capacity for independent judgement as
well as problem-solving, self-appraisal and reflection (Carless and Yang 2013). Studies increasingly
emphasise the use of feedback dialogues to engage students with disciplinary problems and to develop
their self-regulation. They posit students as active agents, inducted into their role in creating and using
feedback to help them improve their understanding of quality and to self-regulate their own work
accordingly (Sadler 1989; Boud and Molloy 2013; Carless and Yang 2013). Thus, when evaluative judge-
ment is incorporated into the assessment process, it adds to the authenticity by, firstly, helping students
understand the teacher’s concept of ‘quality’ and what it means for a task to be of ‘excellence’ (Nicol and
Macfarlane-Dick 2006; Sadler 2010), and, secondly, developing the lifelong capability to assess and
regulate their learning and performance.
Turning dimensions into design
These three dimensions of authentic assessment clarify the construct in a way that holds sufficient
consistency across the articles to invite adoption by the sector; but the way to implement this methodology
is not sufficiently described in the literature. Of all the reviewed articles, only 11% have an
authentic assessment model that involves practical conditions or principles to follow. One example
is that of Gulikers, Bastiaens and Kirschner (2006), who propose that authentic assessment has five
practical requirements. Ashford-Rowe, Herrington, and Brown (2014) identified eight relevant aspects
to consider in designing authentic assessment. The USEM model (Yorke 2010) is probably the most
respected model in this field (Brown 2005; Andrews and Higson 2008) and is an acronym for four
interrelated components of employability: understanding, skills, efficacy beliefs and metacognition.
However, these examples do not entirely reect the dimensions found in our review of literature
and remain largely at the level of required characteristics, rather than a stage-based model for plan-
ning implementation of authentic assessment by teachers and programmes. Consequently, this paper
concludes by drawing on the three dimensions and their component characteristics to propose a
tentative four-step model for building authentic assessments in higher education. The model has been
developed by considering how the three dimensions should influence curriculum design. The model
uses constructive alignment (Biggs and Tang 2011) where the assessment is designed to support the
student in constructing relevant learning through alignment between the learning outcomes, the
teaching methods and the assessment. Deriving the learning outcomes directly from the complexity
of ‘graduation proles’ and ‘work requirements’ provides the potential for both realism and cognitive
challenge (Step 1). Furthermore, the model draws on evidence relating assessment design to high
quality learning (Bloxham and Boyd 2007), and leads to creating a rich context, worthwhile tasks and
use of higher order skills (Step 2). Finally, the model draws on the curriculum design features associated
with supporting evaluative judgement, particularly in steps three and four.
These four steps operate at different levels of abstraction, complexity and application of teaching
practices. Step 1 takes a macro perspective linked with the relationship between undergraduate
programmes and the working world, considering how the curriculum can nourish a connection with
the workplace. Step 2 advances to the planning and design of assessment. Steps 3 and 4 have a micro
perspective focusing on what happens in the classroom. They identify concrete pedagogical strategies
designed to give students a more active role in their learning process and help them grasp standards,
practice judgement-making and receive feedback. This model has been successfully implemented in
a pilot study in two Chilean universities, involving 30 teachers from six undergraduate programmes
who had previously been trained in its use (see Figure 1).
Proposal: a model to build authentic assessments in the university
Step 1: considering the workplace context
Graduation prole. The rst condition is that the teacher knows and understands the graduation
prole of the programme that their course contributes to (often called programme learning outcomes).
This prole represents the learning that all graduates must deploy once they nish their studies and
enter the labour market (often formulated as a list of professional standards or competences). This will
allow them to determine how their course will contribute to the graduation prole and ensure, through
assessment, that students achieve the expected learning goals (Handley and Williams 2011). For this
step, the teacher should ask: How does my subject connect and contribute to achieving the competences
of the graduation prole that this programme is committed to develop in students?
Work requirements. It is necessary to nurture students’ skills for employment. These may be specific
professional skills, but also transferable skills demanded by the world of work and relevant whether
the programme is vocational or non-vocational (Yorke and Knight 2004). The development of these
skills must be part of the subjects that make up the curriculum. In this way, it can be ensured that, once
graduated, professionals can successfully face the typical problems of the workplace (Maxwell 2012).
To respond to this stage, the teacher should ask: How are the knowledge and skills learned in my subject
related to the typical problems faced by professionals in the world of work?
Step 2: designing authentic assessment
To accomplish the second step, teachers’ pedagogical decisions regarding the assessment process
must reect the challenges that professionals of this discipline face in work. This can be seen in three
areas: (a) decisions about the conditions in which the assessment is taken (for example, individual or
[Figure 1 about here. Figure 1. Model to build authentic assessment. The model comprises four steps: Step One, Workplace Context (graduation profile; work requirements); Step Two, Design Assessment (drafting a rich context; creating a worthwhile task; requiring higher-order skills); Step Three, Judgement (assessment criteria and rubrics; engaging students with criteria; engaging students in judgement); Step Four, Feedback (formative feedback; summative feedback; sustainable feedback).]
group, access to reading and information, time available), (b) decisions about the assessment formats
(for example, online or in the classroom, open or closed construction answer, development of discipli-
nary knowledge or deployment of professional performance), (c) decisions about the kind of problem
to which students will apply knowledge (for example, derived from employers, former students or
students’ experience in professional placements). In relation to (c), professional problems derived from
contemporary workplaces assist courses in keeping their assessment problems up to date with the
demands of the working world for that profession.
Drafting a rich context. The first dimension that distinguishes an authentic assessment is its realism
(Saye 2013; Bosco and Ferns 2014). It refers to a simulation of real-work or real-world situations that
function as a proxy for professional performance. In creating a problem situation, we place the student
in a real context that urges them to make decisions about what they need to do. In this way, it is not a
matter of the student reproducing course content but of discriminating what areas of their learning
are needed to answer the question.
The inclusion of context in the question can also be used to bring authenticity to traditional written
tests, in problem-solving items, brief and extended development questions, case analyses and even
multiple-choice questions. This is done by the construction of realistic and problematising contexts
that must be analysed in order to answer.
Creating a worthwhile task. One challenge of authentic assessment is to make sure that the methods
go beyond academic formats and become useful for third parties (in addition to the teacher and the
student). The idea is that the teacher, when designing an assessment strategy, thinks: ‘to whom would
it be important that my students learn this knowledge?’ Based on this question, the assessment design
may consider the participation of third parties in the form of clients, employers, colleagues from the
same or from another profession, and/or external teachers who review and evaluate the performance of
the students. Moreover, another possible role for ‘third parties’ is as beneficiaries of students’ knowledge.
For example, receiving treatment, intervention or advice (Brown 2005; Andrews and Higson 2008). This
strategy gives a purpose to student learning, making it meaningful.
Requiring higher-order skills. Authentic assessment is designed to promote the use of higher-order
cognitive skills related to using, modifying or rebuilding knowledge into something new. This is based
on the higher levels of cognitive skills identified in Bloom’s taxonomy (Bloom, Masia, and Krathwohl
1964) and its later formulations (Kennedy 2007; Marzano and Kendall 2008). Authentic assessment,
thus, privileges students’ cognitive ability to judge, decide, criticise, suggest, design, innovate, propose
or invent.
To meet the guidelines of this second step, teachers must design assessments that test knowledge
construction and application in contextualised and realistic questions. For example, assessment of
intelligence theories in psychology might use, as context, a dialogue between two primary teachers.
They discuss why some of their pupils don’t learn as expected and posit different reasons, using different
theories of intelligence. The questions can ask students to infer: (a) the theory of intelligence
used by each teacher, (b) possible critiques that each teacher would make of the other’s reasoning,
(c) possible teaching practices that each primary teacher must use. The aim is to ask students to use
their knowledge to identify, analyse, apply, transfer, conclude and decide in a real situation that has an
impact on others (in this case, school students).
Step 3: learning and applying standards for judgement
Steps three and four of the model are necessarily integrated as a cyclical process of guidance and feed-
back loops (Hounsell et al. 2008), which enable students to both improve their learning and develop
evaluative judgement. They are set out separately here to emphasise the importance of the different
steps. Step three focuses on helping students grasp standards and practice evaluative judgement
whereas step four outlines the specic stages of feedback.
Assessment criteria and rubrics. Rubrics typically combine assessment criteria with the standards
required to achieve dierent grades, and are challenging to write when assessment tasks require
complex and divergent responses. A key characteristic of authentic assessment is that such information
is known to students in order that they can gradually develop the ability to evaluatively judge their
own work and that of others on the journey to becoming autonomous students and, eventually,
professionals. Therefore, consideration should be given to criteria and standards and how to make
these available to the student. These include not only published criteria (‘explicit’) but also the ‘latent’
and ‘meta’ criteria used in the act of judgement (Wyatt-Smith and Klenowski 2012). The latter two are
not readily communicated because of their tacit nature and require engagement in judgement. Explicit
criteria are, therefore, only a first stage in a continuum of processes to help students acquire knowledge
of assessment expectations (O’Donovan, Price and Rust 2004).
Engage students with criteria. Assessment is a ‘social and cultural practice’ (Wyatt-Smith and Klenowski
2012, 37) where teachers acquire tacit knowledge of standards and judgement through participation,
observation, imitation and dialogue (Rust, Price, and O’Donovan 2003). Therefore, communicating
the tacit aspects of assessment criteria to students requires similar approaches. For example, actively
engaging students in marking using assessment criteria and exemplar assignments can significantly
improve their performance (O’Donovan, Price and Rust 2008). Alternatively, the act of co-creating
assessment criteria with students assists them in developing evaluative judgement (Fraile, Panadero
and Pardo 2016), as it provides a clear opportunity for detailed dialogue about standards.
Judgement-making practice. A complementary process to engaging students with criteria is providing
them with the opportunity for self and peer assessment using those criteria. Evidence suggests there is
formative benet from judgement activities, providing students with feedback both as a peer reviewer
and as a receiver of peer review (Dawson et al. 2017). These activities can help students clarify the
assessment criteria and better understand the expected level of performance (Nicol and Macfarlane-
Dick 2006). Dawson et al. (2017) stress the value of self-assessment in helping students identify criteria
to use in judging their own assignments, and Tai et al. (2016) found explicit benefits of peer observation
and feedback in developing students’ evaluative judgement. Opportunities for judgement can also be
provided by engaging students with exemplars; that is, anonymised examples of student work (Handley
and Williams 2011, 103). Exemplars, involving different levels of accomplishment, can be marked and
discussed by the students to help them discover the criteria used through concrete expressions of
different levels of achievement.
Step 4: giving feedback
Feedback research is increasingly emphasising a change from a conventional approach which generally
positions students as passive recipients (Carless et al. 2011). Recent work advocates ‘feedback mark 2’
(Boud and Molloy 2013), where feedback is part of an assessment cycle involving students as active in
gathering and responding to feedback. In this model, feedback to foster evaluative judgement involves
dialogue with and between students with a view to helping them clarify appropriate criteria, make
increasingly accurate judgements about their own performance and decide what changes they need
to make.
In this fourth step, teachers must provide formative opportunities in which students assume an active
role in identifying and understanding the gap between their performance and the one expected, and
also analyse what action to take, discovering strategies to reduce that gap.
Formative feedback. There is a tendency to think about feedback as information provided to students
in response to a completed assignment. However, the emphasis on evaluative judgement in creating
authentic assessment means that students need access to feedback throughout their studies. There
are numerous ways to help students acquire and consider formative feedback, including peer review,
practice tasks, group test taking, observations of work colleagues and feedback on draft assignments.
Summative feedback. Summative feedback is often important for quality assurance purposes, as
teachers are held to account for the quality of the information they give students to explain marking decisions.
However, the impact of such feedback is often limited. Such feedback must provide information to
students about their performance in a way that helps them understand the strengths and weaknesses
of their work (Panadero, Brown and Strijbos 2016).
Sustainable feedback. Sustainable assessment, a term coined by Boud (2000), is intrinsically linked with the
concept of evaluative judgement. It was defined as assessment that meets students’ present needs
and prepares them to meet their own future learning needs. The intention is that students gradually
become able to make judgements about their own performance, a crucial element of professional
work. Therefore, it is important that students learn how to gather, recognise and use feedback in the
absence of a teacher.
Conclusion
The literature identifies multiple benefits to students (and to employers) from the use of authentic
assessment. However, devising authentic assessment, particularly in systems with strong tradi-
tions of ‘testing’, is not easy as we lack a robust concept on which to base guidance for assessment
design and operation. This article has attempted to contribute to the debate by clarifying three
key dimensions of authentic assessment. These dimensions provide guidance for teachers seek-
ing more authentic assessment, including assessment-related teaching practices which develop
‘authentic’ capabilities for employment. The breadth of the dimensions and their reflection in the
proposed four-step model encourages the integration of discipline-specific skills and knowledge
with application in the workplace, but also, importantly, with the generic capacity to evaluate
and improve performance. They also highlight the complexity of learning for authentic practice,
and the potential of assessment to create a richer learning environment and build capability for
higher order and lifelong learning.
Arguably, the step-based model guides those reluctant to adopt authentic assessment by provid-
ing concrete stages that can be applied to conventional testing methods, for example by describing
a rich context for, and demanding problem-solving and decision-making in, individual questions.
We recognise that comprehensive knowledge of a ‘graduation profile’ may be beyond most teachers.
However, clarification of ‘programme outcomes’ reflecting the ‘graduation profile’ can provide
the basis for a mapping exercise for individual courses, identifying how their course/s contributes
to teaching and assessing the programme outcomes. This approach reflects growing efforts to
encourage a programme approach to assessment design and its capacity to improve the student
learning experience.
The next stage is to further test, evaluate and refine the model and consider its acceptability with
those most reluctant to adopt authentic methods in assessing classroom learning. We welcome feedback
from others who are interested in implementing authentic assessment, particularly in higher education
systems with strong traditions of testing.
Disclosure statement
No potential conict of interest was reported by the authors.
Funding
This work was supported by the Ministry of Education of Chile [grant number UDD1303].
Notes on contributors
Verónica Villarroel is an assistant professor and director of the Center for Research and Improvement of Education (CIME),
Faculty of Psychology, Universidad del Desarrollo. Her research lines are linked to the assessment of learning, discursive
practices in the classroom and development of perspective taking in early childhood.
Susan Bloxham is an emeritus professor of Academic Practice, University of Cumbria. Her current work explores how academics use standards in their academic judgement, particularly external examiners, and how students understand academic judgement.
Daniela Bruna is an assistant professor and researcher at the Center for Research and Improvement of Education (CIME), Faculty of Psychology, Universidad del Desarrollo. She has developed research aimed at promoting self-regulated learning in higher education and at learning assessment.
Carola Bruna is an assistant professor, Facultad de Ciencias Biológicas, Universidad de Concepción. Her line of research is the study of the effect of active-participatory methodologies in the teaching of science.
Constanza Herrera-Seda is an assistant professor, Faculty of Psychology, Universidad del Desarrollo. She has developed research aimed at strengthening teaching and learning, with special interest in studies on educational inclusion, teacher training and the improvement of teaching and assessment practices in university education.
ORCID
Verónica Villarroel http://orcid.org/0000-0002-3000-2248
Daniela Bruna http://orcid.org/0000-0001-7424-2959
References
Andrews, J., and H. Higson. 2008. “Graduate Employability, ‘Soft Skills’ versus ‘Hard’ Business Knowledge: A European Study.” Higher Education in Europe 33 (4): 411–422. doi:10.1080/03797720802522627.
Andrews, J., and H. Higson. 2014. “Is Bologna Working? Employer and Graduate Reflections of the Quality, Value and Relevance of Business and Management Education in Four European Union Countries.” Higher Education Quarterly 68 (3): 267–287. doi:10.1111/hequ.12054.
Archbald, D. A., and F. M. Newmann. 1988. Beyond Standardized Testing: Assessing Authentic Academic Achievement in the Secondary School. Reston, VA: National Association of Secondary School Principals.
Archer, W., and J. Davison. 2008. Graduate Employability: What Do Employers Think and Want? London: CIHE.
Ashford-Rowe, K., J. Herrington, and C. Brown. 2014. “Establishing the Critical Elements That Determine Authentic Assessment.” Assessment & Evaluation in Higher Education 39 (2): 205–222. doi:10.1080/02602938.2013.819566.
Avery, P., and C. Freeman. 2002. “Developing Authentic Instruction in the Social Studies.” Journal of Research in Education 12 (1): 50–56.
Avery, P. G., C. Freeman, and D. Carmichael. 2012. “Developing Authentic Instruction in the Social Studies.” Journal of Research in Education 12 (1): 50–56.
Baeten, M., K. Struyven, and F. Dochy. 2013. “Student-centred Teaching Methods: Can They Optimise Students’ Approaches
to Learning in Professional Higher Education?” Studies in Educational Evaluation 39 (1): 14–22. doi:10.1016/j.
stueduc.2012.11.001.
Benner, P., M. Sutphen, V. Leonard, and L. Day. 2009. Educating Nurses: A Call for Radical Transformation. San Francisco, CA:
Jossey-Bass. doi:10.3928/01484834-20120402-01.
Beyaztas, D. I., and N. Senemoglu. 2015. “Learning Approaches of Successful Students and Factors Affecting Their Learning Approaches.” Education and Science 40 (179): 193–216.
Biggs, J., and C. Tang. 2011. Teaching for Quality Learning at University: What the Student Does. Maidenhead: Open University
Press.
Birenbaum, M. 2003. “New Insights into Learning and Teaching and Their Implications for Assessment.” In Optimizing New Modes of Assessment: In Search of Qualities and Standards, edited by M. Segers, F. Dochy and E. Cascallar, 13–36. Dordrecht: Kluwer Academic Publishers. doi:10.1007/0-306-48125-1.
Bloom, B., B. Masia, and D. Krathwohl. 1964. Taxonomy of Educational Objectives. New York: McKay.
Bloxham, S. 2015. “Assessing Assessment: New Developments in Assessment Design, Feedback Practices and Marking in
Higher Education.” In A Handbook for Teaching and Learning in Higher Education, edited by H. Fry, S. Ketteridge, and S.
Marshall, 107–122. 4th ed. Abingdon: Routledge.
Bloxham, S., and P. Boyd. 2007. Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.
Bosco, A. M., and S. Ferns. 2014. “Embedding of Authentic Assessment in Work-integrated Learning Curriculum.” Asia-Pacific Journal of Cooperative Education 15 (4): 281–290.
Boud, D. 2000. “Sustainable Assessment: Rethinking Assessment for the Learning Society.” Studies in Continuing Education 22 (2): 151–167. doi:10.1080/713695728.
Boud, D., and N. Falchikov. 2006. “Aligning Assessment with Long-term Learning.” Assessment and Evaluation in Higher Education 31 (4): 399–413. doi:10.1080/02602930600679050.
Boud, D., and E. Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.” Assessment & Evaluation in Higher Education 38 (6): 698–712. doi:10.1080/02602938.2012.691462.
Boud, D., and D. Walker. 1990. “Making the Most of Experience.” Studies in Continuing Education 12 (2): 61–80. doi:10.1080/0158037900120201.
Brown, S. 2005. “Assessment for Learning.” Learning and Teaching in Higher Education 1: 81–89.
Brush, T., and J. Saye. 2008. “The Effects of Multimedia-Supported Problem-based Inquiry on Student Engagement, Empathy, and Assumptions about History.” Interdisciplinary Journal of Problem-Based Learning 2 (1): 21–56. doi:10.7771/1541-5015.1052.
Carless, D., D. Salter, M. Yang, and J. Lam. 2011. “Developing Sustainable Feedback Practices.” Studies in Higher Education 36 (4): 395–407.
Carless, D., and M. Yang. 2013. “The Feedback Triangle and the Enhancement of Dialogic Feedback Processes.” Teaching in Higher Education 18 (3): 285–297.
Cummings, J., and G. Maxwell. 1999. “Contextualising Authentic Assessment.” Assessment in Education: Principles, Policy & Practice 6 (2): 177–194. doi:10.1080/09695949992865.
Dacre Pool, L., and P. Sewell. 2007. “The Key to Employability: Developing a Practical Model of Graduate Employability.” Education + Training 49 (4): 277–289. doi:10.1108/00400910710754435.
Dawson, P., D. Boud, J. Tai, R. Ajjawi, and E. Panadero. 2017. Building Courses to Develop ‘Evaluative Judgement’: Learning to Make Decisions about Quality Work. Assessment in Higher Education Conference, Manchester, UK, June.
Deeley, S. J., and C. Bovill. 2017. “Staff–Student Partnership in Assessment: Enhancing Assessment Literacy through Democratic Practices.” Assessment & Evaluation in Higher Education 42 (3): 463–477.
Dochy, F., and L. McDowell. 1997. “Assessment as a Tool for Learning.” Studies in Educational Evaluation 23 (4): 279–298. doi:10.1016/S0191-491X(97)86211-6.
Elliott, N., and A. Higgins. 2005. “Self and Peer Assessment – Does It Make a Difference to Student Group Work?” Nurse Education in Practice 5: 40–48. doi:10.1016/j.nepr.2004.03.004.
Ellström, E., and P. E. Ellström. 2014. “Learning Outcomes of a Work-based Training Programme.” European Journal of Training and Development 38 (3): 180–197. doi:10.1108/EJTD-09-2013-0103.
Endedijk, M. D., and J. D. Vermunt. 2013. “Relations between Student Teachers’ Learning Patterns and Their Concrete Learning Activities.” Studies in Educational Evaluation 39 (1): 56–65. doi:10.1016/j.stueduc.2012.10.001.
Fraile, J., E. Panadero, and R. Pardo. 2016. “The Effect of Co-Created Rubrics on Self-regulation, Performance and Self-efficacy.” Paper presented at the 8th Biennial Conference of EARLI SIG 1: Assessment & Evaluation, Munich.
Frey, B., V. Schmitt, and J. Allen. 2012. “Defining Authentic Classroom Assessment.” Practical Assessment, Research & Evaluation 17 (2): 1–18.
Gielen, S., F. Dochy, and S. Dierick. 2003. “Evaluating the Consequential Validity of New Modes of Assessment: The Influence of Assessment on Learning, including the Pre-, Post- and True Assessment Effects.” In Optimising New Models of Assessment: In Search of Qualities and Standards, edited by M. Segers, F. Dochy and E. Cascallar, 37–54. Dordrecht: Kluwer Academic Publishers. doi:10.1007/0-306-48125-1_3.
Gulikers, J., T. Bastiaens, and P. Kirschner. 2004. “A Five-dimensional Framework for Authentic Assessment.” Educational Technology Research and Development 52 (3): 67–86. doi:10.1007/BF02504676.
Gulikers, J., T. Bastiaens, and P. Kirschner. 2006. “Authentic Assessment, Student and Teacher Perceptions: The Practical Value of the Five-dimensional Framework.” Journal of Vocational Education and Training 58 (3): 337–357. doi:10.1080/13636820600955443.
Gulikers, J., T. Bastiaens, P. Kirschner, and L. Kester. 2008. “Relation between Student Perceptions of Assessment Authenticity, Study Approaches and Learning Outcome.” Studies in Educational Evaluation 32: 381–400. doi:10.1016/j.stueduc.2006.10.003.
Handley, K., and L. Williams. 2011. “From Copying to Learning: Using Exemplars to Engage Students with Assessment Criteria and Feedback.” Assessment & Evaluation in Higher Education 36 (1): 95–108.
Hart, C., S. Hammer, P. Collins, and T. Chardon. 2011. “The Real Deal: Using Authentic Assessment to Promote Student Engagement in the First and Second Years of a Regional Program.” Legal Education Review 21 (1): 97–121.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. “The Quality of Guidance and Feedback to Students.” Higher Education Research and Development 27 (1): 55–67.
Ken, T., and Y. Chean. 2012. “Business Graduates’ Competences in the Eyes of the Employers: An Exploratory Study in Malaysia.” World Review of Business Research 2 (2): 176–190.
Kennedy, D. 2007. Writing and Using Learning Outcomes. Dublin: Watermans Printers.
Kreber, C., M. Klampeitner, V. McCune, S. Bayne, and M. Knottenbelt. 2007. “What Do You Mean by ‘Authentic’? A
Comparative Review of the Literature on Conceptions of Authenticity in Teaching. Adults Education Quarterly 58 (1):
22–43. doi:10.1177/0741713607305939.
Marzano, R., and J. Kendall. 2008. Designing and Assessing Educational Objectives: Applying the New Taxonomy. Thousand
Oaks, CA: Corwin Press.
Maxwell, T. 2012. “Assessment in Higher Education in the Professions: Action Research as an Authentic Assessment Task.”
Teaching in Higher Education 17 (6): 686–696. doi:10.1080/13562517.2012.725220.
McCabe, A., and U. O’Connor. 2014. “Student-centred Learning: The Role and Responsibility of the Lecturer.” Teaching in Higher Education 19 (4): 350–359. doi:10.1080/13562517.2013.860111.
Meisels, S. J., X. Wen, and K. Beachy-Quick. 2010. “Authentic Assessment for Infants and Toddlers: Exploring the Reliability and Validity of the Ounce Scale.” Applied Developmental Science 14 (2): 55–71. doi:10.1080/10888691003697911.
Moon, T. R., C. M. Brighton, C. M. Callahan, and A. Robinson. 2005. “Development of Authentic Assessments for the Middle School Classroom.” Journal of Secondary Gifted Education 16 (2–3): 119–133.
NACE. 2016. Job Outlook 2016: The Attributes Employers Want to See on New College Graduates’ Resumes. Bethlehem, PA: NACE. Accessed October 2017. http://www.naceweb.org/career-development/trends-and-predictions/job-outlook-2016-attributes-employers-want-to-see-on-new-college-graduates-resumes/
Newman, F., A. Bryk, and J. Nagaoka. 2001. Authentic Intellectual Work and Standardized Tests: Conflict or Coexistence? Chicago, IL: Consortium on Chicago School Research.
Newmann, F., B. King, and D. Carmichael. 2007. Authentic Instruction and Assessment. Common Standards for Rigor and
Relevance in Teaching Academic Subjects. Document Prepared for the Iowa Department of Education.
Nicol, D., and D. Macfarlane-Dick. 2006. “Formative Assessment and Self-regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218. doi:10.1080/03075070600572090.
Nicol, D., A. Thomson, and C. Breslin. 2014. “Rethinking Feedback Practices in Higher Education: A Peer Review Perspective.” Assessment & Evaluation in Higher Education 39 (1): 102–122. doi:10.1080/02602938.2013.795518.
O’Donovan, B., M. Price, and C. Rust. 2004. “Know What I Mean? Enhancing Student Understanding of Assessment Standards and Criteria.” Teaching in Higher Education 9 (3): 325–335.
O’Donovan, B., M. Price, and C. Rust. 2008. “Developing Student Understanding of Assessment Standards: A Nested Hierarchy of Approaches.” Teaching in Higher Education 13 (2): 205–217.
Ohaja, M., M. Dunlea, and K. Muldoon. 2013. “Group Marking and Peer Assessment during a Group Poster Presentation: The Experiences and Views of Midwifery Students.” Nurse Education in Practice 13 (5): 466–470. doi:10.1016/j.nepr.2012.11.005.
Palmer, S. 2004. “Authenticity in Assessment: Reflecting Undergraduate Study and Professional Practice.” European Journal of Engineering Education 29 (2): 193–202. doi:10.1080/03043790310001633179.
Panadero, E., G. T. L. Brown, and J. W. Strijbos. 2016. “The Future of Student Self-assessment: A Review of Known Unknowns and Potential Directions.” Educational Psychology Review 28 (4): 803–830. doi:10.1007/s10648-015-9350-2.
Panadero, E., and M. Romero. 2014. “To Rubric or Not to Rubric? The Effects of Self-assessment on Self-regulation, Performance and Self-efficacy.” Assessment in Education: Principles, Policy and Practice 21 (2): 133–148.
Pintrich, P. 2000. “Multiple Goals, Multiple Pathways: The Role of Goal Orientation in Self-Regulated Learning and Achievement.” Journal of Educational Psychology 92: 544–555. doi:10.1037/0022-0663.92.3.544.
Plump, C. 2010. “Dealing with Problem Employees: A Legal Guide for Employers.” Business Horizons 53: 607–618. doi:10.1016/j.bushor.2010.07.003.
Randolph, J. 2009. “A Guide to Writing the Dissertation Literature Review.” Practical Assessment, Research & Evaluation 14 (13): 2–13.
Rawson, K., J. Dunlosky, and S. Sciartelli. 2013. “The Power of Successive Relearning: Improving Performance on Course
Exams and Long-term Retention. Educational Psychology Review 25: 523–548. doi:10.1007/s10648-013-9240-4.
Raymond, J., C. Homer, R. Smith, and J. Gray. 2013. “Learning through Authentic Assessment. An Evaluation of a New
Development in the Undergraduate Midwifery Curriculum. Nurse Education in Practice 13 (5): 471–476. doi:10.1016/j.
nepr.2012.10.006.
Rust, C., M. Price, and B. O’donovan. 2003. “Improving Student’ Learning by Developing Their Understanding of Assessment
Criteria and Processes. Assessment and Evaluation in Higher Education 28 (2): 147–164.
Sadler, D. Royce. 1989. “Formative Assessment and the Design of Instructional Systems. Instructional Science 18: 119–144.
doi:10.1007/BF00117714.
Sadler, D. Royce. 2005. “Interpretations of Criteria-based Assessment and Grading in Higher Education.Assessment &
Evaluation in Higher Education 30 (2): 175–194. doi:10.1080/0260293042000264262.
Sadler, D. Royce. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal. Assessment & Evaluation
in Higher Education 35 (5): 535–550. doi:10.1080/02602930903541015.
Sambell, K., L. McDowell, and C. Montgomery. 2013. Assessment for Learning in Higher Education. London: Routledge.
Saye, J. 2013. “Authentic Pedagogy: Its Presence in Social Studies Classrooms and Relationship to Student Performance on State-mandated Tests.” Theory & Research in Social Education 41: 89–132. doi:10.1080/00933104.2013.756785.
Segers, M., F. Dochy, and E. Cascallar. 2003. Optimising New Models of Assessment: In Search of Qualities and Standards. Dordrecht: Kluwer Academic Publishers. doi:10.1007/0-306-48125-1.
Shepard, L. 2000. “The Role of Assessment in a Learning Culture.” Educational Researcher 29 (7): 4–14.
Singh, P., R. Thambusamy, and M. Ramly. 2014. “Fit or Unfit? Perspectives of Employers and University Instructors of Graduates’ Generic Skills.” Procedia – Social and Behavioral Sciences 123: 315–324. doi:10.1016/j.sbspro.2014.01.1429.
Swaffield, S. 2011. “Getting to the Heart of Authentic Assessment for Learning.” Assessment in Education: Principles, Policy and Practice 18 (4): 433–449. doi:10.1080/0969594X.2011.582838.
Swan, K., and M. Hofer. 2013. “Examining Student-Created Documentaries as a Mechanism for Engaging Students in Authentic Intellectual Work.” Theory & Research in Social Education 41: 133–175. doi:10.1080/00933104.2013.758018.
Tai, J., B. J. Canny, T. P. Haines, and E. K. Molloy. 2016. “The Role of Peer-assisted Learning in Building Evaluative Judgement:
Opportunities in Clinical Medical Education.” Advances in Health Sciences Education 21: 659–676.
Torrance, H. 1995. Evaluating Authentic Assessment. Buckingham: Open University Press.
Vanaki, Z., and R. Memarian. 2009. “Professional Ethics: Beyond the Clinical Competency.” Journal of Professional Nursing 25: 285–291. doi:10.1016/j.profnurs.2009.01.009.
Watkins, D., B. Dahlin, and M. Ekholm. 2005. “Awareness of the Backwash Effect of Assessment: A Phenomenographic Study of the Views of Hong Kong and Swedish Lecturers.” Instructional Science 33: 283–309. doi:10.1007/s11251-005-3002-8.
Wiggins, G. 1990. “The Case for Authentic Assessment.” Practical Assessment, Research & Evaluation 2 (2): 28–37.
Wiggins, G. 1993. Assessing Student Performance. San Francisco, CA: Jossey-Bass.
Wiggins, G., and J. McTighe. 2006. “Examining the Teaching Life.” Educational Leadership 63 (6): 26–29.
Wiliam, D. 2007. “Keeping Learning on Track: Formative Assessment and the Regulation of Learning.” In Second Handbook of Research on Mathematics Teaching and Learning, edited by F. K. Lester, 1053–1098. Greenwich, CT: Information Age.
Wiliam, D., C. Lee, C. Harrison, and P. Black. 2004. “Teachers Developing Assessment for Learning: Impact on Student Achievement.” Assessment in Education: Principles, Policy and Practice 11 (1): 49–65. doi:10.1080/0969594042000208994.
Wu, X., M. Heng, and W. Wang. 2015. “Nursing Students’ Experiences with the Use of Authentic Assessment Rubric and Case Approach in the Clinical Laboratories.” Nurse Education Today 35: 549–555. doi:10.1016/j.nedt.2014.12.009.
Wyatt-Smith, C., and V. Klenowski. 2012. “Explicit, Latent and Meta-criteria: Types of Criteria at Play in Professional Judgement Practice.” Assessment in Education: Principles, Policy and Practice 20 (1): 35–52.
Yorke, M. 2010. “Employability: Aligning the Message, the Medium and Academic Values.” Journal of Teaching and Learning for Graduate Employability 1 (1): 2–12.
Yorke, M., and P. T. Knight. 2004. Embedding Employability into the Curriculum. York: HEA.