Higher Education Pedagogies, 2019, Vol. 4, No. 1, 385–405
https://doi.org/10.1080/23752696.2019.1644659

Exploring the ‘wicked’ problem of student dissatisfaction with assessment and feedback in higher education

Susan J. Deeley (a), Moira Fischbacher-Smith (b), Dimitar Karadzhov (c) and Elina Koristashevskaya (d)

(a) School of Social and Political Sciences, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (b) Adam Smith Business School, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (c) Institute of Health & Wellbeing, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (d) Learning Enhancement & Academic Development Service, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland
ABSTRACT
Student dissatisfaction with assessment and feedback is a significant challenge for most UK Higher Education Institutions according to a key national survey. This paper explores the meaning, challenges and potential opportunities for enhancement in assessment and feedback within the authors’ own institution as illustrative of approaches that can be taken elsewhere. Using a qualitative design, a review of assessment and feedback, which included an exploration of students’ perceptions, was made in one College of the University. The findings highlighted variations in assessment and feedback practice across the College, with dissatisfaction typically being due to misunderstanding or miscommunication between staff and students. Drawing on the review, we assert in this paper that students’ dissatisfaction with assessment and feedback is not a ‘tame’ problem for which a straightforward solution exists. Instead, it is a ‘wicked’ problem that requires a complex approach with multiple interventions.

ARTICLE HISTORY: Received 23 February 2018; Revised 27 June 2019; Accepted 10 July 2019

KEYWORDS: Assessment and feedback; higher education; student dissatisfaction; ‘wicked’ problem
Introduction
Widespread student dissatisfaction with assessment and feedback practices in higher education, as evidenced by the National Student Survey (NSS), presents a complex and multi-faceted ‘wicked’ problem (Grint, 2008). This phenomenon occurs not only in research-intensive universities but also more widely in the UK and internationally. In a funded research study undertaken between March 2016 and February 2017, our aim was to investigate the complex challenges surrounding assessment and feedback practice in a research-intensive university with a view to implementing effective and sustainable change. The focus was on the College of Social Sciences, which is one of the four Colleges at the University and comprises five Schools and approximately 9,000 students, of whom 5,000 are undergraduates. The College offers twelve main undergraduate degree programmes, the largest of which is
the MA Social Sciences degree. Typical of Scottish four-year general degrees, the MA Social Sciences offers breadth of study in years 1 and 2 that exposes students to several disciplines and enables them to take a variety of subject combinations within their degrees. On this degree programme, there are almost 2,400 students enrolled in 75 different single and joint honours pathways; many of the latter are offered in collaboration with the College of Arts or the College of Science and Engineering. There are 11 other degree programmes that are mostly professionally oriented degrees in Education, Law and Accounting, which have fewer cross-College pathways. Within this one College, it was clear from our existing disciplinary reviews, NSS results, student feedback and other sources of information that student satisfaction with assessment and feedback varied within and between subject disciplines and that some areas needed to make improvements. The research reported in this paper was undertaken as part of College-specific efforts to improve assessment and feedback practices, and within the context of institution-wide initiatives with the same aim.
We begin with a scrutiny of selected literature, followed by an outline of how the investigation was conducted. In this paper, we focus on undergraduate students’ perceptions in the context of an overview of assessment and feedback practice in the College. We report on our findings and recommend several strategic and holistic approaches to alleviating student dissatisfaction with assessment and feedback from the viewpoint that this is a ‘wicked’ problem. Being a ‘wicked’ problem implies that social relationships and interactions are central to the issue and that there is no single elegant solution. Consequently, the approaches we propose ‘signify the importance of the collective’ (Grint, 2008, p. 13) and involve concerted action at different levels of the hierarchical structures within the University. We advocate an engaged community of staff and students, an approach which could be adopted for use across the University and indeed further afield, to the benefit of other higher education institutions. Importantly, we consider student dissatisfaction with assessment and feedback to be a symptom, rather than the cause, of a problem.
Assessment and feedback
Assessment is a necessary requirement for the award of a degree and vital for accreditation, assuring that professional competencies are met. This summative type of assessment is usually regarded as being firmly located within the power and domain of staff. Interestingly, Price, Rust, O’Donovan, Handley, and Bryant (2012, p. 18) contend that where ‘summative marks are given, there is (and will always need to be) a clear divide between assessor and assessed.’ The implication of this prevailing orthodox stance is that students are passive recipients of assessment, rather than being actively engaged in its processes (author 1, 2015).
Another function of assessment is to help students learn and improve their academic performance. These characteristics are typically attributed to formative assessment, which does not normally contribute per se to students’ grades. However, the difference between formative and summative assessment can be a misleading dichotomy (Boud, Cohen, & Sampson, 1999) because summative assessment may also provide rich learning opportunities for students if constructive feedback is provided. Feedback on summative examinations, which are commonly used and often heavily weighted in terms of course credit, can be particularly helpful as examination performance tends to be lower than in coursework (Rust, 2007).
The concept of assessment for learning (McDowell, Wakelin, Montgomery, & King, 2011; Sambell, McDowell, & Montgomery, 2013) could be applied more widely and utilised in all assessment (Taras, 2002). To aid learning, it is vital that feedback is used effectively. As Sadler (1989) asserts, it is important that feedback helps to close the gap between students’ actual performance and what constitutes a potentially better performance. In a broader context, assessment for learning, and learning from assessment and feedback processes, can help students in their independent learning, attributes and skills development, employment and lifelong learning (author 1, 2014; Brown, 2015). Consequently, there are two relevant issues here: first, meeting the aims of assessment and feedback effectively; and second, engaging students in active learning in assessment and feedback.
Dissatisfaction with assessment and feedback
Since 2005, students in their final year of undergraduate study in the UK have been invited annually to participate in the NSS. The survey is reported in terms of the percentage of students who agree or strongly agree in response to 27 questions concerning teaching, course organisation, assessment and feedback, learning resources, student voice, community and development. Not only are the results reported nationally, but they also contribute to a range of league tables and, more recently, to the new Teaching Excellence Framework (TEF); as such, they are a key focus for most universities.
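To make the metric concrete: the headline NSS figures are simple percent-agreement scores. The following minimal sketch (in Python, with hypothetical question labels and response data, not NSS data) shows how such a score is computed from five-point Likert responses, where 4 = agree and 5 = strongly agree.

```python
# Illustrative only: an NSS-style "percent agreement" score from Likert
# responses. Question labels and response data below are hypothetical.
from collections import Counter

# 1 = strongly disagree ... 5 = strongly agree
responses = {
    "marking_criteria_were_clear": [5, 4, 2, 4, 3, 5, 1, 4],
    "feedback_was_timely":         [2, 3, 4, 1, 2, 5, 3, 2],
}

for question, scores in responses.items():
    counts = Counter(scores)
    agree = counts[4] + counts[5]           # "agree" or "strongly agree"
    pct = 100 * agree / len(scores)
    print(f"{question}: {pct:.1f}% agreement")
```

A score of 62.5% on the first hypothetical item would be read as 62.5% of respondents agreeing that marking criteria were clear; it is per-question percentages of this kind that feed league tables and the TEF.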
What is clear is that across the UK sector, and regardless of overall satisfaction trends, students typically demonstrate much less satisfaction with assessment and feedback than with other measures. Student dissatisfaction with assessment and feedback across the UK (Author 1 et al., 2017) is thus sharpening the focus on assessment and feedback as a priority concern across higher education institutions (Boud, 2007). Assessment consumes much time and effort by students and staff. Feedback also eats into staff time, for example, when lengthy periods may be spent providing written comments on students’ assignments. Moreover, staff may perceive their efforts as wasted if their feedback is not collected, read or heeded by students. Sadler (2010, p. 535) affirms that, ‘for many students, feedback seems to have little or no impact, despite the considerable time and effort put into its production’. This suggests there is a mismatch between staff and students’ understanding of feedback, especially if students consider it to be of little use or value to them (Lunt & Curran, 2010). A contributory factor to feedback being of value is its timing: its relevance can be lost if it is given too late, for example after a course has been completed, as various studies have shown (Hattie & Timperley, 2007; Jonsson, 2012; O’Donovan, Rust, & Price, 2015).
Understanding the academic language and concepts typically used in feedback can present a problem for students (Lea & Street, 1998; Stefani, 1998), but staff may assume that students will grasp its meaning and understand how to apply it to improve their future performance in assessment (Blair & McGinty, 2013; Sadler, 2010). In other words, the understanding and expectations of assessment and feedback can differ between students and staff and, as a consequence, significant differences can emerge between what students want and what staff provide (Adcroft, 2011; Carless, 2006). The source of this discord may reside in conventions and assumptions about assessment and feedback. Challenging these assumptions may be a starting point for addressing such dissonance.
Addressing problematic areas of assessment and feedback
It would be useful, then, for staff to clarify to students the purpose of assessment and feedback so there is mutual understanding. Linking these processes explicitly to the aims and intended learning outcomes of courses and programmes, or what Biggs and Tang (2011, p. 95) refer to as ‘constructive alignment’, can contribute to students’ understanding. It is also important that students are aware of what they must do to attain the required standards, for example in terms of knowing how to meet course aims, intended learning outcomes and marking criteria (Bloxham & West, 2004; Price, Handley, & Millar, 2010). Problematically, there may be an assumption that students clearly understand the academic language, and terms unique to subject disciplines, that describe course aims, outcomes and criteria. To address this potential problem, and to help students develop metacognitive skills, overt explanations and clear communication between staff and students are essential (Biggs & Tang, 2011; Nicol & Macfarlane-Dick, 2006; Weaver, 2006). This involves assessment and feedback literacies (Smith, Worsfold, Davies, Fisher, & McPhail, 2013; Sutton, 2012), which can be embedded in academic courses to nurture students’ active participation in assessment and feedback processes (Author 1 et al., 2017; Higher Education Academy, 2012; O’Donovan et al., 2015; Price et al., 2012). However, this necessitates dialogue and social interaction between staff and students, heralding a shift away from students’ conventional passive role. Evidence suggests that this transition enhances students’ learning (Higher Education Academy, 2014) and affirms an engaged learning approach that differs from conventional didactic pedagogy.
Part of this approach may include dialogic feedback (Adcroft, 2011; Carless, 2015; Carless, Salter, Yang, & Lam, 2011; O’Donovan et al., 2015; Yang & Carless, 2013), which can involve a shared construction of assessment (Author 1 et al., 2017) and feedback (Boud & Molloy, 2013). This co-design approach serves to mitigate any mismatch between staff and students in their understanding or expectations of feedback. Developing students’ metacognitive skills in appraisal and in making evaluative judgements of performance contributes to their critical thinking and independent learning, both of which are valuable for their future employment and lifelong learning. To facilitate students’ skills development, self- and peer review are useful exercises and can be utilised as assessment methods. Such co-operative activities offer a vast array of opportunities for student learning (Boud & Falchikov, 2006; Mulder, Pearce, & Baik, 2014). Co-assessment, for example self-assessment combined with assessment by staff, is likely to encourage students’ deep learning (author 1, 2014), which facilitates longer term understanding.
Assessment and feedback are interrelated processes central to student learning, especially when formative assessment is used (Nicol & Macfarlane-Dick, 2006). It would be advantageous to integrate and align these processes harmoniously within accredited programmes of study. For example, rather than being planned in isolation within the domains of single courses, assessment and feedback would be more effective if they were structured, using a variety of methods (Evans, 2013), as part of the overall aims and intended learning outcomes of a degree programme. Curriculum mapping (Harden, 2001) can be used to achieve this with an explicitly planned approach that offers students a strategic and structured variety of opportunities to hone their skills and demonstrate their learning, as sketched below.
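At its simplest, a curriculum map is a cross-reference between programme-level outcomes and the assessments that address them. The following minimal sketch (with hypothetical course names, outcomes and assessment methods, not drawn from the College reviewed here) illustrates the idea and shows how a map can expose outcomes that no assessment covers.

```python
# A minimal sketch of curriculum mapping as a data structure: each course
# declares which programme-level outcomes its assessments address.
# All names below are hypothetical.
programme_outcomes = {"critical_thinking", "written_communication",
                      "oral_communication", "applying_theory"}

course_map = {
    "Sociology 1A": {"essay": {"critical_thinking", "written_communication"}},
    "Politics 2B":  {"exam": {"critical_thinking"},
                     "group_presentation": {"oral_communication"}},
}

# Which programme outcomes are never assessed by any course?
assessed = set()
for assessments in course_map.values():
    for outcomes in assessments.values():
        assessed |= outcomes

print("Unassessed outcomes:", programme_outcomes - assessed)
# -> Unassessed outcomes: {'applying_theory'}
```

The same structure can be inverted to show over-assessed outcomes or over-reliance on a single method, which is where the variety of methods advocated by Evans (2013) becomes visible at programme level.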
The study
This study draws from a College review that investigated students’ perspectives of assessment and feedback with a view to addressing causes of student dissatisfaction. One of the aims was to gain an overview of the existing assessment and feedback practices in the College across its five Schools. Another aim was to gain insight into the student experience through an in-depth exploration of their perceptions of how and in what ways assessment and feedback impacted on their learning, and whether they felt engaged and motivated. The review also involved gleaning additional data from written course documentation, teaching awards, periodic subject disciplinary reviews, and other assessment and feedback innovations and projects across the University, in addition to seeking the views of a small selection of staff. The objectives of the College review were to:
● Identify the varieties and types of assessment and feedback used across the College’s undergraduate programmes;
● Identify examples of good and innovative assessment and feedback practice that might be adopted more widely across the Schools in the College;
● Investigate the rationale for using specific assessment and feedback methods;
● Explore the students’ perceptions of the effects of assessment and feedback methods on their learning;
● Examine the findings of the study in light of the literature on assessment and feedback;
● Make recommendations for enhancing assessment and feedback practice.
Research methods
The review was conducted by a team of three academics, two research assistants and a senior administrator, and included all five Schools in the College of Social Sciences. With help from administrative staff, information was gathered from relevant University policies and documentation, for example, the ‘Feedback following Summative Examinations’ guidance; Periodic Subject Reviews and the ‘Summary of Good Practice 2014–15’; course documentation for the relevant programmes; and the College’s ‘Action Plan’ in response to the NSS results.
Qualitative research methods were chosen to explore the students’ perceptions of the effects of assessment and feedback methods on their learning. Qualitative methods are useful for obtaining authentic and nuanced accounts of the ‘Who, What, Where and Why’ of the experience of interest (Neergaard, Olesen, Andersen, & Sondergaard, 2009, p. 54). In education research, qualitative methods have been instrumental in generating novel recommendations in applied settings (Johnson & Christensen, 2008). A qualitative descriptive design was used: qualitative description is a useful technique for obtaining in-depth, multi-faceted and contextualised accounts of complex, multi-determined phenomena (Sandelowski, 2010).
Data were collected through focus groups and interviews with students from each of the Schools within the College. To extend the data collection, an online questionnaire was used to capture students’ views. The aim was to garner their perspectives of assessment and feedback by asking them about:
● Formative and summative assessment methods;
● The purposes of assessment and feedback;
● The effects of assessment and feedback on their learning;
● Using technology in assessment and feedback;
● Innovative assessment and feedback;
● The language used in assessment and feedback;
● Self-assessment;
● Peer assessment;
● Co-assessment;
● Good practice in assessment and feedback;
● Clarity of feedback;
● Timeliness of feedback;
● How they used feedback to improve their learning.
Participants
The undergraduate students were chosen through a process of purposive and convenience sampling and recruited through the Students’ Representative Council (SRC), course forums and social media. Care was taken to ensure that each of the five Schools in the College of Social Sciences was represented in some way. Altogether, 44 students participated in the review, including the questionnaire responses. In two of the Schools, two focus groups were conducted with twelve students, six in each focus group. Each group represented a mix of single and joint, junior and senior Honours students; there was also one pre-honours student in one of the focus groups. In a third School, three students were interviewed individually, as the lack of response made organising a focus group impossible. In a fourth School, representation was made through a discussion group of ten students. The group consisted of class representatives from the School, and the interviewer joined their discussion to gather group responses related to assessment and feedback. However, the small data set from this discussion was omitted from the overall review as it was obtained under different conditions from the focus groups. Finally, representation from the remaining fifth School was achieved through information available from a written report on an investigation into assessment and feedback made previously by staff in that School.
Data analysis
The individual interview and focus group data were analysed using content analysis and qualitative description (Sandelowski, 2010). Content analysis has been deemed the analytic technique of choice for qualitative studies aiming to generate detailed, rich descriptive accounts that remain firmly grounded in the original text (Kim, Sefcik, & Bradway, 2017; Sandelowski, 2000). The data were collected and analysed sequentially. The interview and focus group recordings were transcribed verbatim and coded according to the research objectives, as referred to earlier. The approach to coding reflected the researchers’ aim to represent the participants’ accounts authentically while aiming to yield contextualised generalisations (Neergaard et al., 2009). Each transcript was read independently by two members of the research team and then re-read while listening to the audio recording in order to capture the full nuances of the participants’ responses. All the transcripts were then highlighted for recurring ideas and similar response patterns, which were then categorised according to the research questions. One of the researchers categorised the data within tables, while the other researcher used concept maps (Hay & Kinchin, 2006) as a method of categorising the data. The researchers then used these two methods to cross-reference the findings. This led to further scrutiny and revision of the categories for internal consistency, distinctiveness and significance (Sandelowski, 2000). This process allowed the emerging themes to become visible. The results were reviewed and validated by the research team, ensuring greater reliability and minimising any potential bias (Neergaard et al., 2009). The overarching themes reflect the objectives of the study in that they relate to a) students’ understanding of the functions of assessment, b) their attitude to feedback, c) their views of the perceived problems of assessment and feedback, and d) what students believe constitutes good practice.
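For readers unfamiliar with this style of analysis, the mechanical core of the categorisation step can be pictured as grouping coded transcript segments under candidate themes and noting recurrence. The sketch below (Python, with invented codes and quotations; the actual analysis was conducted manually by two coders using tables and concept maps, not computationally) is purely illustrative of that step.

```python
# Illustrative sketch of tallying coded transcript segments into candidate
# themes. Codes and quotations are invented for this example.
from collections import defaultdict

coded_segments = [
    ("exams just test memory",         "function_of_assessment"),
    ("feedback came after the exam",   "timeliness_of_feedback"),
    ("marking depends on who you get", "perceived_inconsistency"),
    ("ticks with no comments",         "quality_of_feedback"),
    ("feedback came too late to use",  "timeliness_of_feedback"),
]

themes = defaultdict(list)
for segment, code in coded_segments:
    themes[code].append(segment)

# Recurrence across participants is one signal that a code may become a theme.
for code, segments in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    print(f"{code} ({len(segments)} segments): {segments}")
```

In the study itself, this grouping was cross-referenced between two researchers and revised for internal consistency, rather than produced by script.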
Ethics
The University requires all non-clinical research involving human subjects to be scrutinised by the appropriate ethics committee. Each College has trained ethical reviewers who follow guidance that is consistent with the requirements of the Research Funding Councils and the Vitae Research Concordat (https://www.vitae.ac.uk/policy/concordat-to-support-the-career-development-of-researchers). Each College Committee is overseen by a University Committee which ensures consistency of approach, delivers training for reviewers, and updates the process in light of sector guidelines and best practice. Applicants are required to identify and mitigate risks to researchers and participants, and to submit all documentation that would be used in relation to consent, data usage, data storage, and dissemination. In keeping with those requirements, the research team identified a potential ethical risk in this project in that some of the student participants were in a dependent relationship with the review team. To mitigate this risk, postgraduate students were employed as research assistants to conduct the focus groups and interviews. The participants were assured of confidentiality, as far as is possible within a focus group, and that their taking part in the project would not affect their coursework, grades, or the outcome of their degree. For the purpose of analysis of the findings, individual participants remained anonymous to the rest of the review team. Participants were assured that their involvement was entirely voluntary, that they did not need to answer any question they did not wish to, and that they could withdraw from the study at any time without question or consequence. To ensure anonymity of the participants, all references to them were feminised. All the participants gave their informed consent and the study was approved by the College of Social Sciences Ethics Committee.
Limitations
It was difficult to recruit large numbers of student participants for this study. Whether this was due to poor timing, with students’ assessment deadlines taking precedence, a lack of incentives, or a lack of interest in the topic is difficult to ascertain. Our study therefore focused mainly on students in their two final years of study (Honours). Thus, we capture perhaps more mature reflections on assessment and feedback, as the study does not include the views of students new to the University and the expectations they bring. Participation in the online questionnaire was limited and, although it provided useful qualitative data, it was insufficient for us to undertake rigorous quantitative analysis. With the limited time and resources available to the project team, we were unable to conduct a full mapping of assessment and feedback practices in the College. The assessment and feedback documentation alone across all 12 degrees amounted to 500 pages, which was too extensive for a project of this scale.

Other limitations pertaining to the data analytic process must also be noted. First, the possibility of investigator bias could not be eliminated, since all analysts, including the researchers who designed and conducted the focus groups, were part of the Review Team for this project. Second, due to lack of time, informant validation was not conducted, thus missing the opportunity to further enhance the authenticity and credibility of the findings (Neergaard et al., 2009). Third, since the analytic method of choice was qualitative description through the use of content analysis, the data analysis was limited to the coding and theming of manifest content only, thus potentially omitting insights from any latent content as offered, for instance, by grounded theory and phenomenological approaches.
Findings
Summary of assessment methods used in the College of Social Sciences
The overview of current assessment practice began by extracting information from the University’s ‘Course Catalogue’, which contains the minimum required course information. This template-driven documentation identifies pre-designated types of summative assessment as follows: a written examination, essay, report, dissertation, portfolio, project, oral assessment and presentation, practical skills assessment or other set exercise. Specific details about the assessment strategies are provided in supplementary narrative text but vary in form, length, specificity and rationale. Because more detailed accounts of assessment are typically located in course handbooks on Moodle (the University’s Virtual Learning Environment), there was limited granularity in the data available to the project team. Nonetheless, it was clear that written summative examinations and written summative assignments, such as essays, were the most commonly adopted and heavily weighted assessment methods across all five Schools in the College. Indeed, written examinations and essays accounted for an average of 80.41% of the course grade, ranging from 67.85% to 88.79%. Variety in assessment (Evans, 2013) was not in evidence and, where alternative assessments existed, they typically represented a low percentage of the overall course grade. It was also apparent that formative assessment practices differed from School to School.
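For clarity, the headline figures above are straightforward averages of per-course weightings. A minimal sketch of the arithmetic follows (Python, with invented course weightings rather than the study’s actual data).

```python
# Illustrative arithmetic only: averaging the combined weighting of
# examinations and essays across courses. Weightings are hypothetical.
courses = {
    "Course A": {"exam": 50, "essay": 30, "presentation": 20},
    "Course B": {"exam": 60, "essay": 10, "group_work": 30},
    "Course C": {"exam": 70, "essay": 20, "portfolio": 10},
}

traditional = [c.get("exam", 0) + c.get("essay", 0) for c in courses.values()]
mean = sum(traditional) / len(traditional)

print(f"Mean exam+essay weighting: {mean:.2f}%")          # 80.00% here
print(f"Range: {min(traditional)}%-{max(traditional)}%")  # 70%-90% here
```

The study’s reported mean of 80.41% (range 67.85%–88.79%) is presumably an average of this kind computed across the College’s courses, though the exact weighting scheme is not specified in the text.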
Students’ perspectives on assessment
Overall, students believed that the main purpose of assessment was to test disciplinary knowledge. Indicative of the conventional use of written examinations, one student asserted that they were used ‘basically to see how much knowledge you can regurgitate at a single point’. So, examinations were perceived as little more than memory tests, as expressed by one student who considered that it was ‘a very narrow or limited pool that you need to study’. This encouraged students to take a strategic and surface approach to learning, yet they generally agreed that assessment ought to involve an opportunity to demonstrate their understanding of the subject material and ‘to engage with it critically’. Some students went on to say that assessment should give them a chance to be creative and to demonstrate innovative thinking, as one student explained: ‘I want to write a piece of assessment that brings something new to (the marker’s) eyes and surprises them’. Similarly, another student believed that ‘it should be the thought process that is assessed, not the conclusions’.
Formative assessment, on the other hand, was generally regarded by students as being less important than summative assessment because it did not contribute directly to their course grade. However, most of the students we spoke to considered it would be useful to receive staff feedback on written drafts of their work before final submission. This suggests that formative learning and practising different types of assessment are valuable exercises that can help students in their learning as well as in building their confidence. Also, some students gave examples where formative assessment was ‘particularly beneficial to their learning’, such as self-assessment, peer assessment, and ‘practical work’. Shrewdly, several students measured the value of assessment in terms of its relevance to their potential future employment. They perceived work experience, such as internships or placements, to be of value because it increased their employability. Moreover, assessment could also provide them with transferable skills and help them to develop attributes that would be useful in the workplace. For example, time management was a skill that students developed in having to meet assessment deadlines. More importantly, assessment was useful if it involved ‘actual skills that are required in a future professional life ... actually applying theory to a problem and bringing it into real life’. Here, examples were given of the Moot (in Law), which a student explained as ‘solving a fictional court case’, and elsewhere, ‘making a business case competition mandatory’ (in Business). One student added that ‘variation supports the development of multiple skills, for example, portfolio work prepares students for their teaching career through planning, research and application skills’. Assessed group presentations were also noted as being useful in developing communication and leadership skills. As a student explained, ‘that’s basically (what) every single employer wants.’ However, students were acutely aware of the problems associated with group assessment, as there could be varying levels of individual contribution. A student remarked, ‘I can see the value of (group assessment) if everyone pulls their weight and does their part of the work’, but overall, group assessment was perceived as challenging and not entirely satisfactory for all students.
Nevertheless, it was clear that students appreciated opportunities to develop their skills through diverse assessment methods. They shared the view that having multiple pieces of coursework ‘improves different kinds of skills’, such as writing, time management, presentation and organisational skills. Traditional types of assessment, such as essays and examinations, were not generally perceived as providing useful employability skills, although students conceded that critical thinking skills could be developed through these conventional methods. One student suggested that ‘there could be different forms of exams – maybe you could make them longer or just one question so you can really develop and plan your argument rather than having to scribble down really fast and ... just facts and kinda ... spill them out on a paper without having the chance to really polish it.’ There is no doubt that the students favoured different and innovative assessment methods but were clear that any new assessment method should be explained to them first. It is interesting to note here that students’ concerns were about understanding the purpose of the assessment, as well as about approaching the task itself.

Some students were enthusiastic about being more actively involved in assessment, for example, having more choice in their essay topic. One student explained that she would be ‘way more engaged because they could actually do something they were more interested in.’ Strikingly, students expressed their dismay at having several examinations clustered at the end of a semester, saying they preferred continuous assessment instead.
Students’ perceptions of problems with assessment practice

Overall, there were two closely linked themes relating to the perceived problems with assessment. Overwhelmingly important to students were fairness in assessment and transparency in its processes. In terms of fairness, the marking criteria for different assignments were in some cases unclear. An example of this was what students saw as ambiguity or vagueness in terms such as ‘critical thinking’, which is invariably part of the marking criteria for pieces of coursework such as essays. In addition, the extent to which a piece of written work should be critical did not seem to be understood by the students or clearly and consistently communicated by staff. If and how criteria were explained depended on individual staff, which led students to allude to ‘serendipity’, asserting that many students ‘would just say that it depends on how lucky you are’. What they perceived as a lack of consistency in assessment practices was indicative of a mismatch between the students’ interpretations of marking criteria and the way criteria were applied by staff in marking students’ assignments. Some students believed that their work had to conform to the markers’ beliefs and expectations rather than being assessed on its own merit. Perceived inconsistency in marking was a prevalent issue, as students were concerned that there was insufficient transparency in marking and grading across the different courses. This was most notable in examination marking, where students rarely saw their marked scripts afterwards (although they could ask to see them), prompting one student to comment that she had ‘generally learned the most (by) writing papers for classes rather than written exams’.
Students’ perspectives on feedback

The functions and rationale of feedback were clear to most students. They shared the belief that feedback is important, firstly, to enable them to identify the strengths and weaknesses of their work. Secondly, students believed that feedback should provide them with specific advice they can apply to subsequent assessments. Certainly, the feedback that helped students in their future assessments appeared to have the most impact on them. They noted that feedback worked well ‘when it helped you understand where you went wrong and how to improve’. The rationale underpinning feedback, they believed, is that it contributes to a continuous process of learning and improving performance. Several students said that completing an assessment and having it marked and returned relatively early in a course was very important to them. It meant that they could have the opportunity of using the feedback to improve their next assessment. Students placed considerable emphasis on being able to compare past feedback with feedback on subsequent assessments. They saw this as a vital indicator of their progress. In this respect, timely and comparable feedback is imperative.
Several students identified individual written feedback as the most beneficial to their learning, but it was acknowledged that other forms of feedback were also beneficial, such as ‘feedback on formative assessment from peers’. They explained that working together through peer review was beneficial because ‘it challenges your beliefs’. Additionally, students acknowledged that there were benefits to receiving ‘whole-group’ or ‘generic’ feedback. Although generic feedback served various functions, for example, informing students of the grade distribution within a class and offering them general advice on how standards could be improved, it was regarded as less effective than individualised feedback. Effective feedback was described by students as containing detailed comments about their work and an explanation for its grade. They also wanted advice about how to improve their work in future assessments; feedback was therefore of limited value if it was too specific to the current piece of work. They regarded markers’ comments as constructive if they contained concrete points and clear explanations, because these were useful in understanding exactly how their written work could be improved. Additionally, students admitted to becoming highly motivated if they perceived a tone of encouragement from the marker. They valued supportive staff who provided comprehensive and constructive feedback and who were accessible and approachable for advice or further feedback. One student averred that the best feedback they received involved an individual discussion with a member of staff. It appears that effective feedback is not just about content, but also about how it is conveyed. Despite this positive view, some problems with feedback were uncovered.
Students’ perceptions of problems with feedback

Dishearteningly, a few students commented that feedback had little or no impact on their learning. Barriers to effective feedback were identified as inconsistency, lack of detail, late return, and negative comments in the feedback. A commonly held belief was that examinations do not enable useful feedback, if any, to be given. Despite the University’s policy on giving feedback on summative examinations in a timely manner, this belief prevailed because most examinations are held at the end of courses. If this feedback cannot be used to improve students’ future work, then it becomes redundant and futile. Similarly, late feedback from assessments held during the course was perceived by students as a significant problem, as it hindered any potential positive learning or improved performance. Poignantly, a student explained this as ‘a Catch-22. I mean, if you are getting feedback after you’ve completed your course, then it is again too late. I have a course where we handed in an essay two weeks ago, which has not been marked. Now we don’t have class any more. By the time it comes out, we will have sat our exams.’
As well as a paucity of feedback on examinations, students noted the lack of consistency in the way feedback was given by different staff on other assessments. One student explained, ‘I have no way of understanding what I have actually done wrong and I have no way of improving. It’s always about rolling the dice and trying to figure out what the (marker) is looking for’. This implies that it is not the amount of feedback that is important, but its quality. Students defined low-quality feedback as containing ‘just ... ticks’, and they pointed out that ‘some markers (do) not even (do) that’. Students were acutely aware of the mismatch between their expectations of feedback and what they received. They were also clear about what they wanted from feedback, which was an understanding of how they could improve their work in future, and they observed that brief feedback rarely explained this to them. Another problem that students highlighted was receiving negative comments from staff if they requested clarification of their feedback or wanted additional feedback. Some students thought that this may be due to some staff interpreting the requests as challenging their academic expertise or authority.
Discussion
Drawing from the College review, the findings in this paper are time-specific and relate to data collected from a relatively small group of students; the findings that emerged are therefore not fully representative of all assessment and feedback activities within the College or the wider University. Nevertheless, the findings are significant in that they resonate with previously published studies. It is also the case that the study examined areas that need to be improved, and so this paper does not offer an equivalent account of the successes and strengths of approach in the College, or of areas students recognise as good practice.
The role of assessment
It is clear from this project that students in the College perceived assessment to be important for different reasons. These reasons include assessment to gain academic coursework credit and professional accreditation. The main purpose of assessment was perceived as testing students’ knowledge and understanding of academic course material. However, assessment was also seen to be important for gaining more than subject-specific knowledge: students recognised that transferable skills could also be acquired through assessment and appreciated that this was valuable for their future employability. Students asserted that developing their critical thinking skills was paramount and that this could be achieved through assessment. They believed that critical thinking could lead to being creative and innovative, which would also be highly valued by prospective employers.
Although summative assessment was a strategic concern and focus for students in terms of passing courses, gaining academic credit, and ultimately obtaining a degree, it was also seen by many as an opportunity to learn. Students acknowledged the potential for summative assessment to function also as assessment for learning and were aware of the value of their active engagement in assessment. This resonates with findings from the literature that indicate assessment as a tool for learning (Carless, 2006; McDowell et al., 2011; Pitt & Norton, 2016; Sambell et al., 2013; Taras, 2002), which encourages student engagement (Higher Education Academy, 2012, 2014). Interestingly, some students claimed they felt actively engaged when they were involved in assessment. Such engagement may give rise to deep learning, as referred to in the literature (author 1, 2014). Students referred to their engagement in learning as curiosity, a desire to learn, and ‘intellectual hunger’. Given the right circumstances, it seems that assessment can inspire intrinsic motivation. Assessment can capture an inherent desire to learn if students believe that it allows them to express a personal interest in, or passion for, a topic. The co-design of the curriculum and/or assessment between staff and students may be a way to facilitate this (Author 1 et al., 2017; Bovill & Bulley, 2011).
Towards a shared understanding of assessment and its processes
It is helpful for students to understand the purpose of assessment, which implies that clear communication from staff about its rationale is essential. If new methods are introduced, students need to know in advance what is entailed, preferably with a chance to practise beforehand through formative assessment. Students also noted that clear marking criteria, transparency and fairness in assessment and feedback processes are vital to their learning and good performance in assessment. This can be facilitated through co-design, as mentioned above, or by embedding assessment and feedback literacies into coursework (Author 1 et al., 2017; Higher Education Academy, 2012; Lea & Street, 1998; O’Donovan et al., 2015; Price et al., 2012; Smith et al., 2013).
Students did not believe that there was always consistency in marking and sometimes perceived it to be a matter of luck as to who marked their coursework. Given the rigour around second marking and moderation of marks, this points to a wider and more complex ‘wicked’ problem, where perceptions of fairness and consistency of marking are misaligned between students and staff. Naturally, different staff may adopt various styles of giving feedback and some may assume that students understand the academic discourse used in feedback. This reinforces the need for clear communication, understanding, and agreement of expectations between staff and students. It also reflects a potential dilemma in reconciling what students want and what staff provide, a tension highlighted by Carless (2006) and Adcroft (2011). Again, this suggests that student dissatisfaction is not an individual problem but one that belongs to a system; in other words, a ‘wicked’ problem for which ‘no-one has the solution in isolation’ (Grint, 2008, p. 11).
Positive aspects of assessment and feedback
The positive aspects of assessment and feedback raised by our participants resonate with previous studies (Biggs & Tang, 2011; Bloxham & West, 2004; Nicol & Macfarlane-Dick, 2006; Price et al., 2010; Weaver, 2006). Crucial for students’ good performance is their clear understanding of what is expected in assessment, and this was apparent in the study. As stated in the literature, an effective way of helping students to understand what is expected of them is by embedding assessment and feedback literacies within the curriculum (Author 1 et al., 2017; Higher Education Academy, 2012; Lea & Street, 1998; O’Donovan et al., 2015; Price et al., 2012; Smith et al., 2013; Sutton, 2012), in addition to authentic assurance from staff that stringent moderation or second marking policies and procedures are in place.
Students in the study believed that diverse assessment would lead to increased motivation. Diversification necessitates a more flexible approach and includes innovative assessment methods, in addition to students being actively involved in making choices about their assessment. The issue of student dissatisfaction must therefore be contextualised as a ‘wicked’ problem that reaches beyond individual teaching staff and is a function of a range of factors including, but not limited to, the assessment methods across a programme of study, the consistency of dialogue around assessment and feedback, disciplinary influences, and institutional custom and practice. This is not to say that students like, or are receptive to, all kinds of alternative forms of assessment. Indeed, some may be resistant (author 1, 2018) and prefer more conventional modes of assessment such as essays and examinations. Although many students in the study did not favour group assessment, such as presentations, some clearly did. Inevitably, there can be problems with uneven contributions to group presentations, but there are ways in which this can be managed, for example, by requiring each student to produce written evidence of their contribution to the presentation. Another alternative to conventional end-of-course examinations is to introduce continuous assessment, which is recommended by Smith, Pearson, and Hennes (2016). Students believed that continuous assessment would counteract the stressful demand of sitting several examinations close together at the end of courses. Continuous assessment may also be more conducive to introducing exercises for students that develop their employability skills and attributes.
Problems of feedback
It was clear that all participants felt that effective feedback is essential for learning to occur through assessment. However, effective feedback may present a conundrum. As referred to earlier, staff may spend a large amount of time writing comments on students’ work, yet this feedback often remains a source of dissatisfaction for students. This discord can arise if feedback does not provide detailed, clear and scaffolded support to students (Sadler, 2013). Examples of low-quality feedback included merely ticking a student’s essay without any accompanying comments, or negative comments written on coursework. Students claimed that this depersonalised manner of feedback demotivated them and made them feel disengaged. A sense of their work and efforts being valued by staff is important for building and maintaining students’ confidence and further engagement. Perhaps it is not surprising that audio-visual-recorded feedback, even on anonymised coursework and examination scripts, is favoured by many students as it is more personalised (Kerr, Dudau, Deeley, Kominis, & Song, 2016), as referred to below.
Use of technology
Although limited information was gathered in this study about the extent and diversity of uses of technology in assessment and feedback within the College, there is no doubt that technology can be, and is being, used in a variety of ways. There is ample scope for further exploration of the use of technology in assessment and feedback (Hepplestone & Chikwa, 2016; Parkin, Hepplestone, Holden, Irwin, & Thorpe, 2012), especially given the opportunities offered by innovative technology-enhanced active learning spaces. Technology can help to deliver feedback quickly and, of course, timeliness of feedback is an important factor. Returning marked work to students within a short time period can lead to increasing tension for staff, and to dissatisfaction for students if it is not returned on time. End-of-course assessments allow little time for providing feedback, especially in large classes, and can place stressful demands on staff. Moreover, feedback that is given after students have finished an academic course may lose its relevance and become redundant (Jonsson, 2012), leading to student dissatisfaction. This is inextricably bound to a structural institutional system and is part of the ‘wicked’ problem.
Effective feedback
Nevertheless, students heartily agreed that effective feedback can be a means to improving their work; in other words, it is ‘a useful learning tool’ (Pitt & Norton, 2016, p. 1). They appreciated receiving regular feedback, such as comments on drafts of their work in progress. Unfortunately, this can be problematic, if not impossible, to sustain if there is a large student cohort, although peer review may assist in some ways here. But students were adamant that helpful feedback was something they could use to improve their work. They explained this as containing specific and constructive comments on their work, as justifying the mark that was awarded, and as being returned to them within a few weeks. These factors mirror the findings of Smith et al. (2016, p. 4), who reported succinctly that students described effective feedback as ‘timely, detailed and actionable’. It also echoes good feedback practice advocated by others (Hattie & Timperley, 2007; Jonsson, 2012; Nicol & Macfarlane-Dick, 2006; O’Donovan et al., 2015; Sadler, 2010).
Personalisation
Significantly, from this study it appears that students respond positively to staff who are approachable and willing to offer help, support and encouragement (Pitt & Norton, 2016). Moreover, students may be more inclined to act on their feedback if they perceive it to be individualised. As mentioned above, the effects of one-to-one feedback can be achieved through online audio-visual feedback (Kerr et al., 2016). This personal approach to feedback reiterates what Sutton (2012, p. 39) refers to as an ‘ethos of care’ which, he asserts, is conducive to enhancing student learning. As with learning and teaching, feedback can be most effective if it is part of a social process that actively engages students in dialogue (Ajjawi & Boud, 2017; Carless et al., 2011; O’Donovan et al., 2015). The idea that dialogue and collaborative support can also be achieved through peer review was recognised by students as effectively developing their learning, which is echoed by Hamer, Purchase, Luxton-Reilly, and Denny (2015). We should not assume that students inherently know how to review, assess and give feedback effectively, but working with staff in partnership can help students to develop their skills, improve their work and become self-regulated learners. Indeed, there was little information from students in the study about the process by which they apply feedback to their future assignments. This ‘feedback loop’ signposts ways in which students can improve their work (QAA, 2006, pp. 10–11) and creates a space for dialogic feedback (Boud & Molloy, 2013; Boud & Soler, 2016; Carless et al., 2011; Yang & Carless, 2013). However, as a metaphor for a more sustainable approach to learning through dialogic feedback beyond the limits of a particular course of study, a feedback coil may be more apt, as this implies continuing development rather than confinement to a finite loop.
Summary
In sum, this study reveals students’ perceptions of assessment and feedback, which interestingly resonate clearly with previous studies. From our study, it is evident that examples of excellent practice in assessment and feedback exist within the College that also reflect recommendations made in the literature. In seeking the views of students, our College-wide overview has allowed us to gain an insight into individuals’ views in combination with an institutional perspective. Student dissatisfaction with assessment and feedback is a multi-faceted issue based within, and inextricably bound to, the specific context of an institution and its culture. This issue is not a simple or ‘tame’ problem with a simple or ‘elegant’ solution. On the contrary, student dissatisfaction is a ‘wicked’ problem that is complex and, being contiguous with the multifarious activities within the university, cannot be addressed in isolation with a ‘one size fits all’, quick fix, or definitive solution. In research-intensive universities, where learning and teaching frequently struggle to compete with research in terms of resources, time, and esteem, tackling this ‘wicked’ problem initially calls for an acknowledgement and acceptance that responsibility for potential solutions lies collaboratively within institutional structures, culture, and communities of practice. Far from being elegant, approaches to ‘wicked’ problems are complex and holistic.
Holistic approaches to the ‘wicked’ problem of student dissatisfaction with assessment and feedback
The work reported here was undertaken as one of several initiatives within the University that are designed to: encourage discussion about assessment and feedback, challenge existing practice, support new approaches, emphasise the value of peer assessment, and share practices across subject disciplines. At the same time as introducing changes, however, and as this research reinforced, there is a need to ensure a consistent and authentic approach to assessment and feedback, and a need to make changes in dialogue with students so that the rationale for policy and practice is transparent. Where this can be achieved, these factors may engender higher levels of student engagement and learning, and may ultimately contribute to student satisfaction.

From this study, and the literature reviewed here, there are several interventions that we recommend. They are interconnected, do not present a list of priorities, and have been subdivided in terms of the structural levels from which they might be approached. Like many UK Higher Education Institutions, the University in this study is already engaged in College-specific and cross-institutional dialogues about many of these interventions, and they feature in early career development programmes, continuing professional development and working group activities. However, given the current NSS results across the sector, many universities still have some way to go in terms of successfully implementing these kinds of interventions.
We assert that a strategy is necessary which involves an engaged community of staff and students, facilitated and supported by institutional leadership, and informed by empirical evidence from the literature. The outcome is potentially transformative, but it requires concerted and related initiatives at all levels and fuller engagement within communities of practice. We recommend holistic approaches to the ‘wicked’ problem of student dissatisfaction through multiple interventions. These interventions range from broad approaches such as curriculum mapping, integrating assessment and feedback literacies into courses and supporting the use of technology in assessment and feedback, to localised interventions such as facilitating students’ active participation, enabling more effective communication between staff and students, and providing opportunities to cultivate staff–student partnerships. Ultimately, by using this multi-faceted strategy of enhancing communities of practice, the ‘wicked’ problem of assessment and feedback can be transformed into a source of student learning, engagement and motivation. These interventions are noted below.
University/college level interventions to:
● Explicitly align assessment and feedback with the aims and intended learning outcomes of degree programmes and courses (e.g. curriculum mapping);
● Encourage and support more flexible, diverse and innovative approaches to assessment and feedback (e.g. opportunities for continuous assessment, self- and peer assessment), to complement extant conventional methods;
● Explore further and share widely the use of technology in assessment and feedback;
● Ensure that course documentation is accurately completed, sufficiently detailed, and up to date.
School/subject discipline level interventions to:
● Design assessment and feedback practices that are more relevant to the real world, where feasible;
● Integrate metacognitive skills into learning outcomes and assessments within courses;
● Introduce assessment and feedback literacies into courses;
● Offer opportunities for students' active engagement and choices in assessment and feedback, where appropriate;
● Optimise the timing of assessments and feedback provision (e.g. using assessment and feedback calendars for planning as well as for reporting).
Programme/course level interventions to:
● Ensure that communication about assessment and feedback between staff and students is clear, timely and regular;
● Ensure that the rationale for using the assessment and feedback methods is explicit and transparent to students;
● Clarify the assessment criteria at a very early stage and, where possible, allow students to identify areas on which they would welcome feedback;
● Make opportunities for learning through assessment explicit to students (e.g. a focus on critical thinking skills; employability);
● Ensure that feedback is timely and adequate;
● Give students opportunities to engage in dialogic feedback;
● Nurture an ethos of care through staff approachability and support for students.
Conclusion
Despite the limitations of this study, the data gathered from the qualitative research methods provide ample material with which to paint, with some depth and perspective, a picture of assessment and feedback within one College of a research-intensive university. The contours of this assessment and feedback landscape depict terrain that is familiar and resonates with the literature. Clearly, there are areas of excellent practice in the College which are recognised by students, but equally there are areas of assessment and feedback that can be improved. What emerges is that engaged staff and students' active engagement are key motivating factors that can lead to student satisfaction. In any institution, poor assessment and feedback practice tends to disengage and demotivate students, which inevitably leads to their dissatisfaction and means that any excellent learning outcomes may be achieved despite current practice, rather than because of it. Overall, the study revealed many sources of students' satisfaction, as well as dissatisfaction, with the assessment and feedback processes they had experienced. While these issues may or may not be indicative of a wider consensus among students in general, the students' responses still offer insight into how assessment and feedback practice can be improved.
The data representing the students' views concur with findings from the literature, prompting a series of suggestions to improve practice. These are not solutions per se but, rather, complex and multi-faceted approaches, requiring effort and action at different structural levels and the engagement of different 'communities of practice' within the institution. Implementation of the recommendations therefore depends largely on a genuine thirst and sustained commitment among institutional leaders and staff to effect change. Only then will we, and other universities, be able to measure the success of a concerted attempt to solve the 'wicked' problem of student dissatisfaction with assessment and feedback. Where changes are piecemeal, they can be counterproductive, highlighting disparities in the eyes of students and further embedding dissatisfaction. The challenge we face in the sector is to introduce a raft of integrated, mutually reinforcing approaches to support staff and students, and to engage in the cultural change that often underpins such developments. This is a long-term commitment that requires sustained leadership and authentic dialogue. This study suggests that efforts directed towards enhancing assessment and feedback practices, whilst demanding considerable investment in people, systems and processes, will ultimately enhance the student experience.
Acknowledgments
This study was funded by the College of Social Sciences at the University of Glasgow.
We thank all our participants and contributors to the College Review and the reviewers for
their constructive comments on our paper.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by the University of Glasgow [113040-01].
ORCID
Dimitar Karadzhov http://orcid.org/0000-0001-8756-6848