Higher Education Pedagogies
ISSN: (Print) 2375-2696 (Online) Journal homepage: https://www.tandfonline.com/loi/rhep20
Exploring the 'wicked' problem of student dissatisfaction with assessment and feedback in higher education

Susan J. Deeley, Moira Fischbacher-Smith, Dimitar Karadzhov & Elina Koristashevskaya

To cite this article: Susan J. Deeley, Moira Fischbacher-Smith, Dimitar Karadzhov & Elina Koristashevskaya (2019) Exploring the 'wicked' problem of student dissatisfaction with assessment and feedback in higher education, Higher Education Pedagogies, 4:1, 385-405, DOI: 10.1080/23752696.2019.1644659

To link to this article: https://doi.org/10.1080/23752696.2019.1644659
© 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

Published online: 14 Oct 2019.
Exploring the 'wicked' problem of student dissatisfaction with assessment and feedback in higher education
Susan J. Deeley (a), Moira Fischbacher-Smith (b), Dimitar Karadzhov (c) and Elina Koristashevskaya (d)

(a) School of Social and Political Sciences, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (b) Adam Smith Business School, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (c) Institute of Health & Wellbeing, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland; (d) Learning Enhancement & Academic Development Service, University of Glasgow, Glasgow, United Kingdom of Great Britain and Northern Ireland
ABSTRACT
Student dissatisfaction with assessment and feedback is a significant challenge for most UK Higher Education Institutions according to a key national survey. This paper explores the meaning, challenges and potential opportunities for enhancement in assessment and feedback within the authors' own institution as illustrative of approaches that can be taken elsewhere. Using a qualitative design, a review of assessment and feedback, which included an exploration of students' perceptions, was made in one College of the University. The findings highlighted variations in assessment and feedback practice across the College with dissatisfaction typically being due to misunderstanding or miscommunication between staff and students. Drawing on the review, we assert in this paper that students' dissatisfaction with assessment and feedback is not a 'tame' problem for which a straightforward solution exists. Instead, it is a 'wicked' problem that requires a complex approach with multiple interventions.

ARTICLE HISTORY
Received 23 February 2018
Revised 27 June 2019
Accepted 10 July 2019

KEYWORDS
Assessment and feedback; higher education; student dissatisfaction; wicked problem
Introduction
Widespread student dissatisfaction with assessment and feedback practices in higher education, as evidenced by the National Student Survey (NSS), presents a complex and multi-faceted, 'wicked' problem (Grint, 2008). This phenomenon occurs not only in research-intensive universities, but also more widely in the UK and internationally. In a funded research study undertaken between March 2016 and February 2017, our aim was to investigate the complex challenges surrounding assessment and feedback practice in a research-intensive university with a view to implementing effective and sustainable change. The focus was on the College of Social Sciences, which is one of the four Colleges at the University and comprises five Schools and approximately 9,000 students, of whom 5,000 are undergraduates. The College offers twelve main undergraduate degree programmes, the largest of which is
the MA Social Sciences degree.

CONTACT Susan J. Deeley, susan.deeley@glasgow.ac.uk, School of Social and Political Sciences, University of Glasgow, 25-29 Bute Gardens, Glasgow, United Kingdom of Great Britain and Northern Ireland

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Typical of Scottish four-year general degrees, the MA Social Sciences offers breadth of study in years 1 and 2 that exposes students to several disciplines and enables them to take a variety of subject combinations within their degrees. On this degree programme, there are almost 2,400 students enrolled in 75
different single and joint honours pathways; many of the latter are offered in collaboration with the College of Arts or College of Science and Engineering. There are 11 other degree programmes that are mostly professionally oriented degrees in Education, Law and Accounting, which have fewer cross-College pathways. Within this one College, it was clear from our existing disciplinary reviews, NSS results, student feedback and other sources of information that student satisfaction with assessment and feedback varied within and between subject disciplines and that some areas needed to make improvements. The research reported in this paper was undertaken as part of College-specific efforts to improve assessment and feedback practices, and within the context of institution-wide initiatives with the same aim.
We begin with a scrutiny of selected literature, followed by an outline of how the investigation was conducted. In this paper, we focus on undergraduate students' perceptions in the context of an overview of assessment and feedback practice in the College. We report on our findings and recommend several strategic and holistic approaches to alleviating student dissatisfaction with assessment and feedback from the viewpoint that this is a 'wicked' problem. Being a 'wicked' problem implies that social relationships and interactions are central to the issue and that there is no single elegant solution. Consequently, the approaches we propose signify the importance of 'the collective' (Grint, 2008, p. 13), which involves concerted action at different levels of the hierarchical structures within the University. We advocate an engaged community of staff and students approach which could be adopted for use across the University and indeed further afield to the benefit of other higher education institutions. Importantly, we consider student dissatisfaction with assessment and feedback to be a symptom, rather than the cause, of a problem.
Assessment and feedback
Assessment is a necessary requirement for the award of a degree and a vital requirement to enable accreditation, assuring professional competencies are met. This summative type of assessment is usually regarded as being firmly located within the power and domain of staff. Interestingly, Price, Rust, O'Donovan, Handley, and Bryant (2012, p. 18) contend that where summative marks are given, 'there is (and will always need to be) a clear divide between assessor and assessed'. The implication of this prevailing orthodox stance is that students are passive recipients of assessment, rather than being actively engaged in its processes (author 1, 2015).

Another function of assessment is to help students learn and improve their academic performance. These characteristics are typically attributed to formative assessment, which does not normally contribute per se to students' grades. However, the difference between formative and summative assessment can be a misleading dichotomy (Boud, Cohen, & Sampson, 1999) because summative assessment may also provide rich learning opportunities for students if constructive feedback is provided. Feedback on summative examinations, which are commonly used and often heavily weighted in terms of course credit, can be particularly helpful as examination performance tends to be lower than in coursework (Rust, 2007).
The concept of assessment for learning (McDowell, Wakelin, Montgomery, & King, 2011; Sambell, McDowell, & Montgomery, 2013) could be applied more widely and utilised in all assessment (Taras, 2002). To aid learning, it is vital that feedback is used effectively. As Sadler (1989) asserts, it is important that feedback helps to close the gap between students' actual performance and what constitutes a potentially better performance. In a broader context, assessment for learning and learning from assessment and feedback processes can help students in their independent learning, attributes and skills development, employment and lifelong learning (author 1, 2014; Brown, 2015). Consequently, there are two relevant issues here: firstly, meeting the aims of assessment and feedback effectively and, secondly, engaging students in active learning in assessment and feedback.
Dissatisfaction with assessment and feedback
Since 2005, students in their final year of undergraduate study in the UK have been invited annually to participate in the NSS. The survey is reported in terms of the percentage of students who agree or strongly agree in response to 27 questions concerning teaching, course organisation, assessment and feedback, learning resources, student voice, community and development. Not only are the results reported nationally, but they also contribute to a range of league tables, and more recently to the new Teaching Excellence Framework (TEF), and as such are a key focus for most universities. What is clear is that across the UK sector, and regardless of overall satisfaction trends, students typically express much less satisfaction with assessment and feedback than with other measures. Student dissatisfaction with assessment and feedback across the UK (Author 1 et al., 2017) is thus sharpening the focus on assessment and feedback as a priority concern across higher education institutions (Boud, 2007). Assessment consumes much time and effort by students and staff. Feedback also eats into staff time, for example, when lengthy periods may be spent providing written comments on students' assignments. Moreover, staff may perceive their efforts as wasted if their feedback is not collected, read or heeded by students. Sadler (2010, p. 535) affirms that, for many students, 'feedback seems to have little or no impact, despite the considerable time and effort put into its production'. This suggests there is a mismatch between staff and students' understanding of feedback, especially if students consider it to be of little use or value to them (Lunt & Curran, 2010). A contributory factor to feedback being of value is its timing, because its relevance can be lost if it is given too late, for example, after a course has been completed, as various studies have shown (Hattie & Timperley, 2007; Jonsson, 2012; O'Donovan, Rust, & Price, 2015). Understanding the academic language and concepts typically used in feedback can present a problem for students (Lea & Street, 1998; Stefani, 1998), but staff may assume that they will grasp its meaning and understand how to apply it to improve their future performance in assessment (Blair & McGinty, 2013; Sadler, 2010). In other words, the understanding and expectations of assessment and feedback can differ between students and staff and, as a consequence, significant differences can emerge between what students want and what staff provide (Adcroft, 2011; Carless, 2006). The source of this discord may reside in conventions and assumptions about assessment and feedback. Challenging these assumptions may be a starting point for addressing such dissonance.
Addressing problematic areas of assessment and feedback
It would be useful, then, for staff to clarify to students the purpose of assessment and feedback so there is mutual understanding. Linking these processes explicitly to the aims and intended learning outcomes of courses and programmes, or what Biggs and Tang (2011, p. 95) refer to as 'constructive alignment', can contribute to students' understanding. It is also important that students are aware of what they must do to attain the required standards, for example, in terms of being aware of how to meet course aims, intended learning outcomes and marking criteria (Bloxham & West, 2004; Price, Handley, & Millar, 2010). Problematically, there may be an assumption that students clearly understand the academic language, and terms unique to subject disciplines, that describe course aims, outcomes and criteria. To address this potential problem, and to help students develop metacognitive skills, overt explanations and clear communication between staff and students are essential (Biggs & Tang, 2011; Nicol & Macfarlane-Dick, 2006; Weaver, 2006). This involves assessment and feedback literacies (Smith, Worsfold, Davies, Fisher, & McPhail, 2013; Sutton, 2012), which can be embedded in academic courses to nurture students' active participation in assessment and feedback processes (Author 1 et al., 2017; Higher Education Academy, 2012; O'Donovan et al., 2015; Price et al., 2012). However, this necessitates dialogue and social interaction between staff and students, heralding a shift away from students' conventional passive role. Evidence suggests that this transition enhances students' learning (Higher Education Academy, 2014) and affirms an engaged learning approach that differs from conventional didactic pedagogy.
Part of this approach may include dialogic feedback (Adcroft, 2011; Carless, 2015; Carless, Salter, Yang, & Lam, 2011; O'Donovan et al., 2015; Yang & Carless, 2013), which can involve a shared construction of assessment (Author 1 et al., 2017) and feedback (Boud & Molloy, 2013). This co-design approach serves to mitigate any mismatch between staff and students in their understanding or expectations of feedback. Students' metacognitive skills development in appraisal and making evaluative judgements of performance contributes to their critical thinking and independent learning, both of which are valuable for their future employment and lifelong learning. To facilitate students' skills development, self- and peer review are useful exercises and can be utilised as assessment methods. Such co-operative activities offer a vast array of opportunities for student learning (Boud & Falchikov, 2006; Mulder, Pearce, & Baik, 2014). Co-assessment, for example, self-assessment combined with assessment by staff, is likely to encourage students' deep learning (author 1, 2014), which facilitates longer-term understanding.

Assessment and feedback are interrelated processes central to student learning, especially when formative assessment is used (Nicol & Macfarlane-Dick, 2006). It would be advantageous to integrate and align these processes harmoniously within accredited programmes of study. For example, rather than being planned in isolation within the domains of single courses, assessment and feedback would be more effective if they were structured, using a variety of methods (Evans, 2013), as part of the overall aims and intended learning outcomes of a degree programme. Curriculum mapping (Harden, 2001) can be used to achieve this with an explicitly planned approach to offer students a strategic and structured variety of opportunities to hone their skills and demonstrate their learning.
The study
This study draws from a College review that investigated students' perspectives of assessment and feedback with a view to addressing causes of student dissatisfaction. One of the aims was to gain an overview of the existing assessment and feedback practices in the College across its five Schools. Another aim was to gain insight into the student experience through an in-depth exploration of their perceptions of how and in what ways assessment and feedback impacted on their learning, and whether they felt engaged and motivated. The review also involved gleaning additional data from written course documentation, teaching awards, periodic subject disciplinary reviews, and other assessment and feedback innovations and projects across the University, in addition to seeking the views of a small selection of staff. The objectives of the College review were to:

- Identify the varieties and types of assessment and feedback used across the College's undergraduate programmes;
- Identify examples of good and innovative assessment and feedback practice that might be adopted more widely across the Schools in the College;
- Investigate the rationale for using specific assessment and feedback methods;
- Explore the students' perceptions of the effects of assessment and feedback methods on their learning;
- Examine the findings of the study in light of the literature on assessment and feedback;
- Make recommendations for enhancing assessment and feedback practice.
Research methods
The review was conducted by a team of three academics, two research assistants and a senior administrator, and included all five Schools in the College of Social Sciences. With help from administrative staff, information was gathered from relevant University policies and documentation, for example, 'Feedback following Summative Examinations' guidance; 'Periodic Subject Reviews and Summary of Good Practice 2014-15'; course documentation for the relevant programmes; and the College's 'Action Plan' in response to the NSS results.

Qualitative research methods were chosen to explore the students' perceptions of the effects of assessment and feedback methods on their learning. Qualitative methods are useful for obtaining authentic and nuanced accounts of the 'Who, What, Where and Why' of the experience of interest (Neergaard, Olesen, Andersen, & Sondergaard, 2009, p. 54). In education research, qualitative methods have been instrumental in generating novel recommendations in applied settings (Johnson & Christensen, 2008). A qualitative descriptive design was used (Sandelowski, 2010). Qualitative description is a useful technique for obtaining in-depth, multi-faceted and contextualised accounts of complex, multi-determined phenomena (Sandelowski, 2010).

Focus groups and interviews were conducted with students from each of the Schools within the College. To extend the data collection, an online questionnaire was used to capture students' views. The aim was to garner their perspectives of assessment and feedback by asking them about:

- Formative and summative assessment methods;
- The purposes of assessment and feedback;
- The effects of assessment and feedback on their learning;
- Using technology in assessment and feedback;
- Innovative assessment and feedback;
- The language used in assessment and feedback;
- Self-assessment;
- Peer assessment;
- Co-assessment;
- Good practice in assessment and feedback;
- Clarity of feedback;
- Timeliness of feedback;
- How they used feedback to improve their learning.
Participants
The undergraduate students were chosen through a process of purposive and convenience sampling and recruited through the Students' Representative Council (SRC), course forums and social media. Care was taken to ensure that each of the five Schools in the College of Social Sciences was represented in some way. Altogether, 44 students participated in the review, which includes the questionnaire responses. In two of the Schools, two focus groups were conducted with twelve students, with six in each focus group. Each group represented a mix of single and joint, junior and senior Honours students. There was also one pre-honours student in one of the focus groups. In a third School, three students were interviewed individually as the lack of response made organising a focus group impossible. In a fourth School, representation was made through a discussion group of ten students. The group consisted of class representatives from the School, and the interviewer joined their discussion to gather group responses related to assessment and feedback. However, the small data set from this discussion was omitted from the overall review as it was obtained under different conditions from the focus groups. Finally, representation from the remaining fifth School was achieved through information available from a written report on an investigation into assessment and feedback made previously by staff in that School.
Data analysis
The individual interview and focus group data were analysed using content analysis and qualitative description (Sandelowski, 2010). Content analysis has been deemed the analytic technique of choice for qualitative studies aiming to generate detailed, rich descriptive accounts that remain firmly grounded in the original text (Kim, Sefcik, & Bradway, 2017; Sandelowski, 2000). The data were collected and analysed sequentially. The interview and focus group recordings were transcribed verbatim and coded according to the research objectives, as referred to earlier. The approach to coding reflected the researchers' aim to represent the participants' accounts authentically while aiming to yield contextualised generalisations (Neergaard et al., 2009). Each transcript was read independently by two members of the research team and then re-read while listening to the audio recording in order to capture the full nuances of the participants' responses. All the transcripts were then highlighted for recurring ideas and similar response patterns, which were then categorised according to the research questions. One of the researchers categorised the data within tables, while the other researcher used concept maps (Hay & Kinchin, 2006) as a method of categorising the data. The researchers then used these two methods to cross-reference the findings. This led to further scrutiny and revision of the categories for internal consistency, distinctiveness and significance (Sandelowski, 2000). This process allowed the emerging themes to become visible. The results were reviewed and validated by the research team, ensuring greater reliability and minimising any potential bias (Neergaard et al., 2009). The overarching themes reflect the objectives of the study in that they relate to a) students' understanding of the functions of assessment, b) their attitude to feedback, c) their views of the perceived problems of assessment and feedback, and d) what students believe constitutes good practice.
Ethics
The University requires all non-clinical research involving human subjects to be scrutinised by the appropriate ethics committee. Each College has trained ethical reviewers who follow guidance that is consistent with the requirements of the Research Funding Councils and the Vitae Research Concordat (https://www.vitae.ac.uk/policy/concordat-to-support-the-career-development-of-researchers). Each College Committee is overseen by a University Committee which ensures consistency of approach, delivers training for reviewers, and updates the process in light of sector guidelines and best practice. Applicants are required to identify and mitigate risks to researchers and participants, and to submit all documentation that would be used in relation to consent, data usage, data storage, and dissemination. In keeping with those requirements, the research team identified a potential ethical risk in this project in that some of the student participants were in a dependent relationship with the review team. To mitigate this risk, postgraduate students were employed as research assistants to conduct the focus groups and interviews. The participants were assured of confidentiality, as far as is possible within a focus group, and that their taking part in the project would not affect their coursework, grades, or the outcome of their degree. For the purpose of analysis of the findings, individual participants remained anonymous to the rest of the review team. Participants were assured that their involvement was entirely voluntary, that they did not need to answer any question they did not wish to, and that they could withdraw from the study at any time without question or consequence. To ensure anonymity of the participants, all references to them were feminised. All the participants gave their informed consent and the study was approved by the College of Social Sciences Ethics Committee.
Limitations
It was difficult to recruit large numbers of student participants for this study. Whether this was due to our ill-timing, with students' assessment deadlines taking precedence, a lack of incentives, or a lack of interest in the topic, is difficult to ascertain. Our study therefore focused mainly on students in their two final years of study (Honours). Thus, we capture perhaps more mature reflections on assessment and feedback, as the study does not include the views of students new to the University and the expectations they bring. Participation in the online questionnaire was limited and although it provided useful qualitative data, it was insufficient for us to undertake rigorous quantitative analysis. With the limited time and resources available to the project team, we were unable to conduct a full mapping of assessment and feedback practices in the College. The assessment and feedback documentation alone across all 12 degrees amounted to 500 pages, which was too extensive for a project of this scale.

Other limitations pertaining to the data analytic process must also be noted. First, the possibility of investigator bias could not be eliminated since all analysts, including the researchers who designed and conducted the focus groups, were part of the Review Team for this project. Second, due to lack of time, informants' validation was not conducted, thus missing the opportunity to further enhance the authenticity and credibility of the findings (Neergaard et al., 2009). Third, since the analytic method of choice was qualitative description through the use of content analysis, the data analysis was limited to the coding and theming of manifest content only, thus potentially omitting insights from any latent content as offered, for instance, by grounded theory and phenomenological approaches.
Findings
Summary of assessment methods used in the College of Social Sciences
The overview of current assessment practice began by extracting information from the University's 'Course Catalogue', which contains the minimum required course information. This template-driven documentation identifies pre-designated types of summative assessment as follows: a written examination, essay, report, dissertation, portfolio, project, oral assessment and presentation, practical skills assessment or other set exercise. Specific details about the assessment strategies are provided in supplementary narrative text but vary in form, length, specificity and rationale. Because more detailed accounts of assessment are typically located in course handbooks on Moodle (the University's Virtual Learning Environment), there was limited granularity in the data available to the project team. Nonetheless, it was clear that written summative examinations and written summative assignments, such as essays, were the most commonly adopted and heavily weighted assessment methods across all five Schools in the College. Indeed, written examinations and essays accounted for an average of 80.41% of the course grade, ranging from 67.85% to 88.79%. Variety in assessment (Evans, 2013) was not in evidence and, where alternative assessments existed, they typically represented a low percentage of the overall course grade. It was also apparent that formative assessment practices differed from School to School.
Students' perspectives on assessment
Overall, students believed that the main purpose of assessment was to test disciplinary knowledge. Indicative of the conventional use of written examinations, one student asserted that they were used 'basically to see how much knowledge you can regurgitate at a single point'. So, examinations were perceived as little more than memory tests, as expressed by one student who considered that 'it was a very narrow or limited pool that you need to study'. This encouraged students to take a strategic and surface approach to learning, yet they generally agreed that assessment ought to involve an opportunity to demonstrate their understanding of the subject material and to engage with it critically. Some students went on to say that assessment should give them a chance to be creative and to demonstrate innovative thinking, as one student explained, 'I want to write a piece of assessment that brings something new to (the marker's) eyes and surprises them'. Similarly, another student believed that it should be 'the thought process that is assessed, not the conclusions'.
Formative assessment, on the other hand, was generally regarded by students as being less important than summative assessment because it did not contribute directly to their course grade. However, most of the students we spoke to considered it would be useful to receive staff feedback on written drafts of their work before it was finally submitted. This suggests that formative learning and practising different types of assessment are valuable exercises that can help students in their learning as well as in building their confidence. Also, some students gave examples where formative assessment was particularly beneficial to their learning, such as self-assessment, peer assessment, and practical work. Shrewdly, several students measured the value of assessment in terms of its relevance to their potential future employment. They perceived work experience, such as internships or placements, to be of value because it increased their employability. Moreover, assessment could also provide them with transferable skills and help them to develop attributes that would also be useful in the workplace. For example, time management was a skill that students developed in having to meet assessment deadlines. More importantly, assessment was useful if it involved 'actual skills that are required in a future professional life . . . actually applying theory to a problem and bringing it into real life'. Here, examples were given of the Moot (in Law), which a student explained as 'solving a fictional court case', and elsewhere, 'making a business case competition mandatory' (in Business). One student added that variation supports the development of multiple skills, for example, portfolio work prepares students for their teaching career through planning, research and application skills. Assessed group presentations were also noted as being useful in developing communication and leadership skills. As a student explained, 'that's basically (what) every single employer wants'. However, they were acutely aware of the associated problems with group assessment, as there could be varying levels of individual contributions to them. A student remarked, 'I can see the value of (group assessment) if everyone pulls their weight and does their part of the work', but overall, group assessment was perceived as challenging and not entirely satisfactory for all students.
Nevertheless, it was clear that students appreciated opportunities to develop their
skills through diverse assessment methods. They shared the view that having multiple
pieces of coursework improves different kinds of skills, such as writing, time manage-
ment, presentation and organisational skills. Traditional types of assessment, such as
essays and examinations, were not generally perceived as providing useful employability
skills, although students conceded that critical thinking skills could be developed through
these conventional methods. One student suggested that there could be different forms
of exams: 'maybe you could make them longer or just one question so you can really
develop and plan your argument rather than having to scribble down really fast and ...
just facts and kinda ... spill them out on a paper without having the chance to really
polish it.' There is no doubt that the students favoured different and innovative
assessment methods, but they were clear that any new assessment method should be
explained to them first. It is interesting to note here that students' concerns were
about understanding the purpose of the assessment, as well as about approaching the
task itself.
Some students were enthusiastic about being more actively involved in assessment,
for example, having more choice in their essay topic. One student explained that she
would be 'way more engaged' because she could actually do something she was more
interested in. Strikingly, students expressed their dismay at having several examinations
clustered at the end of a semester, saying they preferred continuous assessment instead.
Students' perceptions of problems with assessment practice
Overall, there were two closely linked themes relating to the perceived problems with
assessment. Overwhelmingly important to students were fairness in assessment and
transparency in its processes. In terms of fairness, the marking criteria for different
assignments were in some cases unclear. An example of this was what students saw as
ambiguity or vagueness in terms such as 'critical thinking', which is invariably part of
the marking criteria for pieces of coursework such as essays. In addition, the extent of
how critical a piece of written work should be did not seem to be understood by the
students or clearly and consistently communicated by staff. If and how criteria were
explained depended on individual staff, which led students to allude to serendipity,
asserting that many students would just say that 'it depends on how lucky you are'.
What they perceived as a lack of consistency in assessment practices was indicative of
a mismatch between the students' interpretations of marking criteria and the way
criteria were applied by staff in marking students' assignments. Some students believed
that their work had to conform to the markers' beliefs and expectations rather than
being assessed on its own merit. Students' perceptions of inconsistency in marking were
a prevalent issue, as students were concerned that there was insufficient transparency in
marking and grading across the different courses. This was most notable in examination
marking, where students rarely saw their marked scripts afterwards (although they could
ask to see them), prompting one student to comment that she had generally
'learned the most (by) writing papers for classes' rather than written exams.
Students' perspectives on feedback
The functions and rationale of feedback were clear to most students. They shared the
belief that feedback is important, firstly, to enable them to identify the strengths and
weaknesses of their work. Secondly, students believed that feedback should provide
them with specific advice they can apply to subsequent assessments. Certainly, the
feedback that helped students in their future assessments appeared to have the most
impact on them. They noted that feedback worked well when 'it helped you understand
where you went wrong and how to improve'. The rationale underpinning feedback,
they believed, is that it contributes to a continuous process of learning and improving
performance. Several students said that completing an assessment and having it marked
and returned relatively early in a course was very important to them, because it meant
that they could use the feedback to improve their next assessment.
Students placed considerable emphasis on being able to compare past feedback with
feedback on subsequent assessments, seeing this as a vital indicator of their progress.
In this respect, timely and analogous feedback is imperative.
Several students identified individual written feedback as the most beneficial to their
learning, but it was acknowledged that other forms of feedback, such as feedback on
formative assessment from peers, were also beneficial. They explained that working together
through peer review was beneficial because 'it challenges your beliefs'. Additionally,
students acknowledged that there were benefits to receiving 'whole-group' or generic
feedback. Although generic feedback served various functions, for example, informing
students of the grade distribution within a class and offering them general advice on
how standards could be improved, it was regarded as less effective than individualised
feedback. Effective feedback was described by students as containing detailed comments
about their work and an explanation for its grade. They also wanted advice about how
to improve their work in future assessments; feedback was therefore of limited value if
it was too specific to the one current piece of work. They regarded markers' comments
as constructive if they contained concrete points and clear explanations, because these
were useful in understanding exactly how their written work could be improved.
Additionally, students admitted to becoming highly motivated if they perceived
a tone of encouragement from the marker. They valued supportive staff who provided
comprehensive and constructive feedback and who were accessible and approachable
for advice or further feedback. One student averred that the best feedback they received
involved an individual discussion with a member of staff. It appears that
effective feedback is not just about content, but also about how it is conveyed. Despite
this positive view, some problems with feedback were uncovered.
Students' perceptions of problems with feedback
Dishearteningly, a few students commented that feedback had little or no impact on
their learning. Barriers to effective feedback were identified as inconsistency, lack of
detail, late return, and negative comments in the feedback. A commonly held belief was
that examinations do not enable useful, if any, feedback to be given. Despite the
University's policy on giving feedback on summative examinations in a timely manner,
this belief prevailed because most examinations are held at the end of courses. If this
feedback cannot be used to improve students' future work, then it becomes redundant
and futile. Similarly, late feedback from assessments held during the course was
perceived by students as a significant problem, as it hindered any potential positive
learning or improved performance. Poignantly, a student explained this as a 'Catch-22':
'I mean, if you are getting feedback after you've completed your course, then it is again
too late. I have a course where we handed in an essay two weeks ago, which has not
been marked. Now we don't have class any more. By the time it comes out, we will have
sat our exams.'
As well as a paucity of feedback on examinations, students noted the lack of
consistency in the way feedback was given by different staff on other assessments. One
student explained, 'I have no way of understanding what I have actually done wrong
and I have no way of improving. It's always about rolling the dice and trying to figure
out what the (marker) is looking for.' This implies that it is not the amount of feedback
that is important, but its quality. Students defined low-quality feedback as containing
'just ... ticks', and they pointed out that some markers '(do) not even (do) that'.
Students were acutely aware of the mismatch between their expectations of feedback
and what they received. They were also clear about what they wanted from feedback,
which was an understanding of how they could improve their work in future, and
observed that brief feedback rarely explained this to them. Another problem that
students highlighted was receiving negative comments from staff when they requested
clarification of their feedback or wanted additional feedback. Some students thought
that this may be due to some staff interpreting the requests as challenging their
academic expertise or authority.
Discussion
Drawing from the College review, the findings in this paper are time-specific and relate
to data collected from a relatively small group of students; the findings that emerged
are therefore not fully representative of all assessment and feedback
activities within the College or the wider University. Nevertheless, the findings are
significant in that they resonate with previously published studies. It is also the case that
the study examined areas that need to be improved, and so this paper does not offer an
equivalent account of the successes and strengths of approach in the College, or of areas
students recognise as good practice.
The role of assessment
It is clear from this project that students in the College perceived assessment to be
important for different reasons. These reasons include assessment to gain academic
coursework credit and professional accreditation. The main purpose of assessment was
perceived as testing students' knowledge and understanding of academic course material.
However, assessment was also seen to be important for gaining more than subject-
specific knowledge, as students recognised that transferable skills could also be acquired
through assessment, and they appreciated that this was valuable
for their future employability. Students asserted that developing their critical thinking
skills was paramount and that this could be achieved through assessment. They believed
that critical thinking could lead to being creative and innovative, which would also be
highly valued by prospective employers.
Although summative assessment was a strategic concern and focus for students in terms
of passing courses, gaining academic credit, and ultimately obtaining a degree, it was also
seen by many as an opportunity to learn. Students acknowledged the potential for sum-
mative assessment to function also as assessment for learning and were aware of the value of
their active engagement in assessment. This resonates with findings from the literature that
indicate assessment is a tool for learning (Carless, 2006; McDowell et al., 2011; Pitt &
Norton, 2016; Sambell et al., 2013; Taras, 2002), which encourages student engagement
(Higher Education Academy, 2012, 2014). Interestingly, some students claimed they felt
actively engaged when they were involved in assessment. Such engagement may give rise to
deep learning, as referred to in the literature (Deeley, 2014). Students referred to their
engagement in learning as curiosity, a desire to learn, and intellectual hunger. Given the
right circumstances, it seems that assessment can inspire intrinsic motivation. Assessment
can capture an inherent desire to learn if students believe that it allows them to express
a personal interest in, or passion for, a topic. The co-design of the curriculum and/or
assessment between staff and students may be one way to facilitate this (Deeley & Bovill, 2017;
Bovill & Bulley, 2011).
Towards a shared understanding of assessment and its processes
It is helpful to students to understand the purpose of assessment, which implies that
clear communication from staff about its rationale is essential. If new methods
are introduced, students need to know in advance what is entailed, preferably with
a chance to practise beforehand through formative assessment. Students also noted
that clear marking criteria, transparency and fairness in assessment and feedback
processes are vital to their learning and good performance in assessment. This
can be facilitated through co-design, as mentioned above, or by embedding assess-
ment and feedback literacies into coursework (Deeley & Bovill, 2017; Higher
Education Academy, 2012; Lea & Street, 1998; O'Donovan et al., 2015; Price et al.,
2012; Smith et al., 2013).
Students did not believe that there was always consistency in marking, and
sometimes perceived it to be a matter of luck as to who marked their coursework. Given
the rigour around second marking and moderation of marks, this points to a wider and
more complex 'wicked' problem, where perceptions of fairness and consistency of
marking are misaligned between students and staff. Naturally, different staff may
adopt various styles of giving feedback, and some may assume that students understand
the academic discourse used in feedback. This reinforces the need for clear commu-
nication, understanding, and agreement of expectations between staff and students. It
also reflects a potential dilemma in reconciling what students want and what staff
provide, which is a tension highlighted by Carless (2006) and Adcroft (2011). Again,
this suggests that student dissatisfaction is not an individual problem but one that
belongs to a system; in other words, a 'wicked' problem for which 'no-one has the
solution in isolation' (Grint, 2008, p. 11).
Positive aspects of assessment and feedback
The positive aspects of assessment and feedback raised by our participants resonate
with previous studies (Biggs & Tang, 2011; Bloxham & West, 2004; Nicol & Macfarlane-
Dick, 2006; Price et al., 2010; Weaver, 2006). Crucial for students' good performance is
their clear understanding of what is expected in assessment, and this was apparent in the
study. As stated in the literature, an effective way of helping students to understand
what is expected of them is by embedding assessment and feedback literacies within the
curriculum (Deeley & Bovill, 2017; Higher Education Academy, 2012; Lea & Street, 1998;
O'Donovan et al., 2015; Price et al., 2012; Smith et al., 2013; Sutton, 2012), in addition to
authentic assurance from staff that stringent moderation or second marking policies
and procedures are in place.
Students in the study believed that diverse assessment would lead to students'
increased motivation. Diversification necessitates a more flexible approach and includes
innovative assessment methods, in addition to students being actively involved in
making choices about their assessment. The issue of student dissatisfaction must there-
fore be contextualised as a 'wicked' problem that reaches beyond individual teaching
staff and is a function of a range of factors including, but not limited to, the assessment
methods across a programme of study, the consistency of dialogue around assessment
and feedback, disciplinary influences, and institutional custom and practice. This is not
to say that students like, or are receptive to, all kinds of alternative forms of assessment.
Indeed, some may be resistant (Deeley, 2018) and prefer more conventional modes of
assessment such as essays and examinations. Although many students in the study did
not favour group assessment, such as presentations, some clearly did. Inevitably, there
can be problems with uneven contributions to group presentations, but there are ways
in which this can be managed, for example, by requiring each student to produce
written evidence of their contribution to the presentation. Another alternative to
conventional end-of-course examinations is to introduce continuous assessment,
which is recommended by Smith, Pearson, and Hennes (2016). Students believed that
continuous assessment would counteract the stressful demand of sitting several exam-
inations close together at the end of courses. Continuous assessment may also be more
conducive to introducing exercises for students that develop their employability skills
and attributes.
Problems of feedback
For learning to occur through assessment, it was clear that all participants felt that
effective feedback is essential. However, effective feedback may present a conundrum.
As referred to earlier, staff may spend a large amount of time writing comments on
students' work, yet this feedback often remains a source of dissatisfaction for students.
This discord can arise if feedback does not provide detailed, clear and scaffolded support
to students (Sadler, 2013). Examples of low-quality feedback included merely ticking
a student's essay without any accompanying comments, or negative comments written on
coursework. Students claimed that this depersonalised manner of feedback demotivated
them and made them feel disengaged. A sense of their work and efforts being valued by
staff is important for building and maintaining students' confidence and further engage-
ment. Perhaps it is not surprising that audio-visual recorded feedback, even on anon-
ymised coursework and examination scripts, is favoured by many students as it is more
personalised (Kerr, Dudau, Deeley, Kominis, & Song, 2016), as referred to below.
Use of technology
Although limited information was gathered in this study about the extent and
diversity of uses of technology in assessment and feedback within the College, there is
no doubt that technology is, and can be, used in a variety of ways. For example, there is
ample scope for further exploration of the use of technology in assessment and feedback
(Hepplestone & Chikwa, 2016; Parkin, Hepplestone, Holden, Irwin, & Thorpe, 2012),
especially given the opportunities offered by innovative technology-enhanced active
learning spaces. Technology can help to deliver feedback quickly and, of course,
timeliness of feedback is an important factor. Returning marked work to students
within a short time period can create increasing tension for staff, and dissatisfaction
for students if it is not returned on time. End-of-course assessments allow little time for
providing feedback, especially in large classes, and can place stressful demands on staff.
Moreover, feedback that is given after students have finished an academic course may
become redundant as it loses its relevance (Jonsson, 2012) and thus lead
to student dissatisfaction. This is inextricably bound to a structural institutional system
and is part of the 'wicked' problem.
Effective feedback
Nevertheless, students heartily agreed that effective feedback can be a means to improv-
ing their work or, in other words, 'it is a useful learning tool' (Pitt & Norton, 2016,
p. 1). They appreciated receiving regular feedback, such as comments on drafts of their
work in progress. Unfortunately, this can be problematic, if not impossible, to sustain if
there is a large student cohort, although peer review may assist in some ways here. But
students were adamant that helpful feedback was something they could use to improve
their work. They explained this as containing specific and constructive comments on
their work, justifying the mark that was awarded, and being returned to them within
a few weeks. These factors mirror the findings of Smith et al. (2016, p. 4), who reported
succinctly that students described effective feedback as 'timely, detailed and actionable'.
It also echoes good feedback practice advocated by others (Hattie & Timperley, 2007;
Jonsson, 2012; Nicol & Macfarlane-Dick, 2006; O'Donovan et al., 2015; Sadler, 2010).
Personalisation
Significantly, it appears from this study that students respond positively to staff who are
approachable and willing to offer help, support and encouragement (Pitt & Norton,
2016). Moreover, students may be more inclined to act on their feedback if they
perceive it to be individualised. As mentioned above, the effects of one-to-one feedback
can be perceived through online audio-visual feedback (Kerr et al., 2016). This personal
approach to feedback reiterates what Sutton (2012, p. 39) refers to as an 'ethos of care'
which, he asserts, is conducive to enhancing student learning. As with learning and
teaching, feedback can be most effective if it is part of a social process that actively
engages students in dialogue (Ajjawi & Boud, 2017; Carless et al., 2011; O'Donovan
et al., 2015). The idea that dialogue and collaborative support can also be achieved
through peer review was recognised by students as effectively developing their learning,
which is echoed by Hamer, Purchase, Luxton-Reilly, and Denny (2015). We should not
assume that students inherently know how to review, assess and give feedback effec-
tively, but working in partnership with staff can help students to develop these skills,
improve their work and become self-regulated learners. Indeed, there was little infor-
mation from students in the study about the process by which they apply feedback to
their future assignments. This 'feedback loop' signposts ways in which students can
improve their work (QAA, 2006, pp. 10–11) and creates a space for dialogic feedback
(Boud & Molloy, 2013; Boud & Soler, 2016; Carless et al., 2011; Yang & Carless, 2013).
However, using metaphor to refer to a more sustainable approach to learning through
dialogic feedback beyond the limits of a particular course of study, a 'feedback coil' may
be more apt, as this implies infinite development rather than being confined to a finite
loop.
Summary
In sum, this study reveals students' perceptions of assessment and feedback, which
interestingly resonate clearly with previous studies. From our study, it is evident that
examples of excellent practice in assessment and feedback exist within the College that
also reflect recommendations made in the literature. In seeking the views of students,
our College-wide overview has allowed us to gain insight into individuals' views in
combination with an institutional perspective. Student dissatisfaction with assessment
and feedback is a multi-faceted issue based within, and inextricably bound to, the specific
context of an institution and its culture. This issue is not a simple or 'tame' problem
with a simple or 'elegant' solution. On the contrary, student dissatisfaction is a 'wicked'
problem that is complex and, being contiguous with the multifarious activities within
the university, cannot be addressed in isolation with a 'one size fits all', quick-fix, or
definitive solution. In research-intensive universities, where learning and teaching
frequently struggle to compete with research in terms of resources, time, and esteem,
tackling this 'wicked' problem initially calls for an acknowledgement and acceptance
that responsibility for potential solutions lies collaboratively within the institutional
structures, culture, and communities of practice. Far from being elegant, approaches
to 'wicked' problems are complex and holistic.
Holistic approaches to the 'wicked' problem of student dissatisfaction with
assessment and feedback
The work reported here was undertaken as one of several initiatives within the
University that are designed to: encourage discussion about assessment and feedback,
challenge existing practice, support new approaches, emphasise the value of peer
assessment, and share practices across subject disciplines. At the same time as introdu-
cing changes, however, and as this research reinforced, there is a need to ensure
a consistent and authentic approach to assessment and feedback, and a need to make
changes in dialogue with students so that the rationale for policy and practice is
transparent. Where this can be achieved, these factors may engender higher levels of
student engagement and learning, and may ultimately contribute to student satisfaction.
From this study, and the literature reviewed here, there are several interventions that
we recommend. They are interconnected, do not present a list of priorities, and have
been subdivided in terms of the structural levels from which they might be approached.
Like many UK Higher Education Institutions, the University in this study is already
engaged in College-specific and cross-institutional dialogues about many of these
interventions, and they feature in early career development programmes, continuing
professional development and working group activities. However, given the current
NSS results across the sector, many universities still have some way to go in terms of
successfully implementing these kinds of interventions.
We assert that a strategy is necessary which involves an engaged community of staff
and students, facilitated and supported by institutional leadership, and informed by
empirical evidence from the literature. The outcome is potentially transformative, but it
requires concerted and related initiatives at all levels and fuller engagement within
communities of practice. We recommend holistic approaches to the 'wicked' problem
of student dissatisfaction through multiple interventions. These interventions range
from broad approaches such as curriculum mapping, integrating assessment and feed-
back literacies into courses and supporting the use of technology in assessment and
feedback, to localised interventions such as facilitating students' active participation,
enabling more effective communication between staff and students, and providing
opportunities to cultivate staff–student partnerships. Ultimately, by using this multi-
faceted strategy of enhancing communities of practice, the 'wicked' problem of assess-
ment and feedback can be transformed into a source of student learning, engagement
and motivation. These interventions are noted below.
University/college level interventions to:
Explicitly align assessment and feedback with the aims and intended learning
outcomes of degree programmes and courses (e.g. curriculum mapping);
Encourage and support more flexible, diverse and innovative approaches to assess-
ment and feedback (e.g. opportunities for continuous assessment, self- and peer
assessment), to complement extant conventional methods;
Explore further and share widely the use of technology in assessment and
feedback;
Ensure that course documentation is accurately completed, sufficiently detailed,
and up to date.
School/subject discipline level interventions to:
Design assessment and feedback practices that are more relevant to the real world
where feasible;
Integrate metacognitive skills into learning outcomes and assessments within
courses;
Introduce assessment and feedback literacies into courses;
Offer opportunities for students' active engagement and choices in assessment and
feedback where appropriate;
Optimise the timing of assessments and feedback provision (e.g. using assessment
and feedback calendars for planning as well as for reporting).
Programme/course level interventions to:
Ensure communication about assessment and feedback between staff and students
is clear, timely and regular;
Ensure that the rationale for using the assessment and feedback methods is explicit
and transparent to students;
Clarify, at a very early stage, the assessment criteria and, where possible, allow
students to identify areas on which they would welcome feedback;
Make opportunities for learning through assessment explicit to students (e.g. focus
on critical thinking skills; employability);
Ensure that feedback is timely and adequate;
Give students opportunities to engage in dialogic feedback;
Nurture an ethos of care through staff approachability and support for students.
Conclusion
Despite the limitations to this study, the data gathered from the qualitative research
methods provide ample material with which to paint, with some depth and perspective,
a picture of assessment and feedback within one College of a research-intensive university.
The contours of this assessment and feedback landscape depict terrain that is familiar and
resonates with the literature. Clearly, there are areas of excellent practice in the College
which are recognised by students, but equally there are areas of assessment and feedback
that can be improved. What emerges is that engaged staff and students' active engagement
are key motivating factors that can lead to students' satisfaction. In any institution, poor
assessment and feedback practice tends to disengage and demotivate students, which
inevitably leads to their dissatisfaction and means that any excellent learning outcomes
may be achieved despite current practice, rather than because of it. Overall, the study
revealed many sources of students' satisfaction as well as dissatisfaction with the assessment
and feedback processes they had experienced. While these issues may or may not be
indicative of a wider consensus among students in general, the students' responses still
offer us insight into how assessment and feedback practice can be improved.

The data representing the students' views concur with findings from the literature,
prompting a series of suggestions to improve practice. These are not solutions per se,
but rather complex and multi-faceted approaches, requiring effort and action at
different structural levels and the engagement of different communities of practice
within the institution. Implementation of the recommendations therefore depends
largely on a genuine thirst and sustained commitment among institutional leaders
and staff to effect change. Only then will we, and other universities, be able to measure
the success of a concerted attempt to solve the 'wicked' problem of student dissatisfac-
tion with assessment and feedback. Where changes are piecemeal, this can only
serve to be counterproductive, highlighting disparities in the eyes of students and
further embedding dissatisfaction. The challenge we face in the sector is to introduce
a raft of integrated, mutually reinforcing approaches to support staff and students, and
to engage in the cultural change that often underpins such developments. This is a long-
term commitment that requires sustained leadership and authentic dialogue. This study
suggests that efforts directed towards enhancing assessment and feedback practices,
whilst they demand considerable investment in people, systems and processes, will
ultimately reflect brightly on the student experience.
Acknowledgments
This study was funded by the College of Social Sciences at the University of Glasgow.
We thank all our participants and contributors to the College Review and the reviewers for
their constructive comments on our paper.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by the University of Glasgow [113040-01].
ORCID
Dimitar Karadzhov http://orcid.org/0000-0001-8756-6848
References
Adcroft, A. (2011). The mythology of feedback. Higher Education Research & Development, 30(4), 405–419. doi:10.1080/07294360.2010.526096
Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: An interactional analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252–265. doi:10.1080/02602938.2015.1102863
Deeley, S.J. (2014). Summative co-assessment: A deep learning approach to enhancing employability skills and attributes. Active Learning in Higher Education, 15(1), 39–51. http://alh.sagepub.com/content/15/1/39
Deeley, S.J. (2015). Critical perspectives on service-learning in higher education. Basingstoke: Palgrave Macmillan.
Deeley, S.J., & Bovill, C. (2017). Staff–student partnership in assessment: Enhancing assessment literacy through democratic practices. Assessment & Evaluation in Higher Education, 42(3), 463–477. doi:10.1080/02602938.2015.1126551
Deeley, S.J. (2018). Using technology to facilitate effective assessment for learning and feedback in higher education. Assessment & Evaluation in Higher Education, 43(3), 439–448. doi:10.1080/02602938.2017.1356906
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Maidenhead: Open University Press/McGraw-Hill Education.
Blair, A., & McGinty, S. (2013). Feedback-dialogues: Exploring the student perspective. Assessment & Evaluation in Higher Education, 38(4), 466–476. doi:10.1080/02602938.2011.649244
Bloxham, S., & West, A. (2004). Understanding the rules of the game: Peer assessment as a medium for developing students' conceptions of assessment. Assessment & Evaluation in Higher Education, 29(6), 721–733. doi:10.1080/0260293042000227254
Boud, D. (2007). Reframing assessment as if learning were important. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education (pp. 14–25). Abingdon: Routledge.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment & Evaluation in Higher Education, 24(4), 413–426. doi:10.1080/0260293990240405
Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413. doi:10.1080/02602930600679050
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. doi:10.1080/02602938.2012.691462
Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413. doi:10.1080/02602938.2015.1018133
Bovill, C., & Bulley, C.J. (2011). A model of active student participation in curriculum design: Exploring desirability and possibility. In C. Rust (Ed.), Improving student learning (18): Global theories and local practices: Institutional, disciplinary and cultural variations (pp. 176–188). Oxford: Oxford Centre for Staff and Educational Development.
Brown, S. (2015). Learning, teaching and assessment in higher education: Global perspectives. London: Palgrave Macmillan.
HIGHER EDUCATION PEDAGOGIES 403
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219–233. doi:10.1080/03075070600572132
Carless, D. (2015). Excellence in university assessment. London: Routledge.
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407. doi:10.1080/03075071003642449
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120. doi:10.3102/0034654312474350
Grint, K. (2008). Wicked problems and clumsy solutions: The role of leadership. Clinical Leader, 1(2). http://leadershipforchange.org.uk/wp-content/uploads/Keith-Grint-Wicked-Problems-handout.pdf
Hamer, J., Purchase, H., Luxton-Reilly, A., & Denny, P. (2015). A comparison of peer and tutor feedback. Assessment & Evaluation in Higher Education, 40(1), 151–164. doi:10.1080/02602938.2014.893418
Harden, R.M. (2001). Curriculum mapping: A tool for transparent and authentic teaching and learning. Medical Teacher, 23(2), 123–137. doi:10.1080/01421590120036547
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hay, D.B., & Kinchin, I.M. (2006). Using concept maps to reveal conceptual typologies. Education and Training, 48(2/3), 127–142. doi:10.1108/00400910610651764
Hepplestone, S., & Chikwa, G. (2016). Exploring the processes used by students to apply feedback. Student Engagement and Experience Journal. ISSN (online) 2047-9476. doi:10.7190/seej.v5i1.104
Higher Education Academy. (2012). A marked improvement: Transforming assessment in higher education. York: The Higher Education Academy. https://www.heacademy.ac.uk/sites/default/files/A_Marked_Improvement.pdf
Higher Education Academy. (2014). Framework for partnership in learning and teaching in higher education. York: Author.
Johnson, B., & Christensen, L. (2008). Educational research: Quantitative, qualitative, and mixed approaches. London: Sage.
Jonsson, A. (2012). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76. doi:10.1177/1469787412467125
Kerr, J., Dudau, A., Deeley, S., Kominis, G., & Song, Y. (2016). Audio-visual feedback: Student attainment and student and staff perceptions. University of Glasgow: Unpublished report.
Kim, H., Sefcik, J.S., & Bradway, C. (2017). Characteristics of qualitative descriptive studies: A systematic review. Research in Nursing & Health, 40(1), 23–42.
Lea, M.R., & Street, B.V. (1998). Student writing in higher education: An academic literacies approach. Studies in Higher Education, 23(2), 157–172.
Lunt, T., & Curran, J. (2010). 'Are you listening please?' The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education, 35(7), 759–769.
McDowell, L., Wakelin, D., Montgomery, C., & King, S. (2011). Does assessment for learning make a difference? The development of a questionnaire to explore the student response. Assessment & Evaluation in Higher Education, 36(7), 749–765.
Mulder, R.A., Pearce, J.M., & Baik, C. (2014). Peer review in higher education: Student perceptions before and after participation. Active Learning in Higher Education, 15(2), 157–171.
Neergaard, M.A., Olesen, F., Andersen, R.S., & Sondergaard, J. (2009). Qualitative description – the poor cousin of health research? BMC Medical Research Methodology, 9(1), 52.
Nicol, D.J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
O'Donovan, B., Rust, C., & Price, M. (2015). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2015.1052774
Parkin, H.J., Hepplestone, S., Holden, G., Irwin, B., & Thorpe, L. (2012). A role for technology in enhancing students' engagement with feedback. Assessment & Evaluation in Higher Education, 37(8), 963–973.
Pitt, E., & Norton, L. (2016). 'Now that's the feedback I want!' Students' reactions to feedback on graded work and what they do with it. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2016.1142500
Price, M., Handley, K., & Millar, J. (2010). Feedback: Focusing attention on engagement. Studies in Higher Education, 36(8), 879–896.
Price, M., Rust, C., O'Donovan, B., Handley, K., & Bryant, R. (2012). Assessment literacy. Oxford: Oxford Brookes University.
QAA. (2006). Code of practice for the assurance of academic quality and standards in higher education: Section 6: Assessment of students. Gloucester: Quality Assurance Agency for Higher Education.
Rust, C. (2007). Towards a scholarship of assessment. Assessment & Evaluation in Higher Education, 32(2), 229–237.
Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
Sadler, D.R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
Sadler, D.R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless, & M. Taras (Eds.), Reconceptualising feedback in higher education (pp. 54–63). London: Routledge.
Sambell, K., McDowell, L., & Montgomery, C. (2013). Assessment for learning in higher education. London: Routledge.
Sandelowski, M. (2000). Whatever happened to qualitative description? Research in Nursing & Health, 23(4), 334–340.
Sandelowski, M. (2010). What's in a name? Qualitative description revisited. Research in Nursing & Health, 33(1), 77–84.
Smith, A., Pearson, E., & Hennes, L. (2016). Assessment and feedback toolkit: Student resources. University of Glasgow: Unpublished report.
Smith, C.D., Worsfold, K., Davies, L., Fisher, R., & McPhail, R. (2013). Assessment literacy and student learning: The case for explicitly developing students' 'assessment literacy'. Assessment & Evaluation in Higher Education, 38(1), 44–60.
Stefani, L.A.J. (1998). Assessment in partnership with learners. Assessment & Evaluation in Higher Education, 23(4), 339–350.
Sutton, P. (2012). Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International, 49(1), 31–40.
Taras, M. (2002). Using assessment for learning and learning from assessment. Assessment & Evaluation in Higher Education, 27(6), 501–510.
Weaver, M.R. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31(3), 379–394.
Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18(3), 285–297.
... suggest a fundamental 'rethinking of the place of assessment and feedback within the curriculum is needed' (2013, p.700). The National Student Survey (NSS) in the United Kingdom (UK) consistently shows students are relatively dissatisfied with assessment and feedback (NSS, 2010-24) even where most other aspects of their studies are highly rated. Deeley et al. (2019) found that student dissatisfaction with feedback stemmed from the belief it had little impact on their learning, or was inconsistent, while Marriott and Teoh (2012) noted a lack of clarity and promptness. Deeley et al. (2019) present these negative attitudes to assessment and feedback as a significant challenge lacking an easy solution ...
... Students have reported that written feedback is often too general, lacking in guidance, unrelated to assessment criteria and of poor quality (Crook et al., 2012). ...
Article
Technology enhanced feedback methods such as video feedback are increasingly employed with a notable shift towards embracing remote means of communication during and after the Covid-19 pandemic. Greater use of distance and electronic learning can result in a lack of personalisation which students find challenging. Feedback which uses the spoken word has been found to be more personalised and supportive, helping to establish connection and engagement with students. Mixed methods were used to collate student views on written, audio, and audiovisual feedback with a focus on the evaluation of video feedback. Focus groups, questionnaires, and course evaluations across multiple cohorts of students showed a strong preference for video feedback as comparatively more detailed, personalised, and supportive than purely written or purely audio feedback. This study adds to insights into the impact of video feedback on learner confidence and its potential to enhance learning in higher education. Developing and deploying video feedback is proposed as a strategy to enrich student support, offer greater personalisation, improve feedback engagement, and optimise learning. Journal of Learning Development in Higher Education (CC-BY 4.0)
... The anxiety of deadlines, of not getting a desired grade, and simply of having to grade a pile of essays are the phenomena that we likely envision when the topic of assessment arises. Furthermore, students often are frustrated with assessment norms and desire more authentic and engaging forms of assessment (Deeley et al. 2019). Getting assessment "right" is a vital undertaking in the teaching and learning of any field. ...
Article
Giving students choice in how they are assessed, known as assessment optionality, is an innovation in assessment that has gained attention in recent years for its potential for improving inclusivity and engagement in teaching and learning. However, although assessment innovations can be beneficial, the process of changing long-established norms can be difficult. Political science is a field in which the Scholarship of Teaching and Learning literature has long noted that there is limited pedagogical innovation. This resistance to innovation creates a challenge for those working in the field—stronger and specific arguments must be developed to justify changes to curricula. This article presents the results of participatory research with political science students to explore whether a subject-specific case can be made for the use of assessment optionality in the discipline. The study explores political science students’ understandings of costs and benefits of assessment optionality and shares co-created recommendations for the potential application of the practice.
... as the challenge of defining what constitutes a "quality" education (Krause 2012) and student dissatisfaction with assessments and feedback (Deeley et al. 2019). The second can be described as a pedagogical approach to teaching especially in higher education. ...
Preprint
The concept of "wicked problems" originated in policy studies to describe complex issues with no clear solutions or boundaries, and has since been applied to higher education. Much of the literature on wicked competencies emphasizes intellectual skills, such as consensus-building and interdisciplinary thinking. This paper, however, focuses on the emotional competencies needed to engage with wicked problems. Through a discussion of iterative course design in Political Science at the National University of Singapore, the paper argues that an exclusive focus on intellectual skills can leave students feeling overwhelmed by seemingly insurmountable issues. It suggests that instructors must also address emotional responses to these problems by creating space for emotional expression and redirecting those emotions toward hope and action. This approach helps students engage more effectively with the challenges of wicked problems.
... Historically, and according to a long study in Australian universities spanning over two decades, feedback has been an area of low student satisfaction (Baik et al., 2015). According to a major national poll in the UK, student discontent with feedback is a major concern for most UK higher education institutions (Deeley et al. 2019). Similarly, in the UK, Murphy and Cornell (2019) and Evans (2013) found a misalliance between what students expect and what is provided in feedback. ...
Article
Full-text available
Feedback is an essential component of good teaching that can help students to understand how they can improve their performance (Sadler, 2009). However, feedback often reaches students too late in their learning process to make such guidance actionable, failing to promote dialogue between students and teachers. Historically, and according to a long study in Australian universities spanning over two decades, feedback has been an area of low student satisfaction (Baik et al., 2015). According to a major national poll in the UK, student discontent with feedback is a major concern for most UK higher education institutions (Deeley et al. 2019). Similarly, in the UK, Murphy and Cornell (2019) and Evans (2013) found a misalliance between what students expect and what is provided in feedback. In this study, I explore the effect of an innovative synchronous approach to delivering students timely, actionable feedback called ‘FastFeedback Questions’ (FFQs) (Elnashar, 2018). FFQs are a formative feedback tool that places focus questions within each PowerPoint slide, which are projected but not in slide show mode, so students can see the questions beside the content. Students receive a version of the slides without answers before the lecture. During the lecture, the instructor engages students interactively with the questions, verifying answers in real-time on the slides. Through a structured, guided process, students engage in critical reflection on the knowledge and skills they have acquired. They are encouraged to assess the quality of their current understanding of the material introduced in lectures and are shown how their progress aligns with expectations. By identifying gaps between their present state and where they need to be, students are supported in developing their evaluative judgment, equipping them for future learning opportunities (Tai et al., 2018).
Integrated Systems Anatomy and Physiology (ISAP) is a complex subject delivered across a large, multi-campus cohort of students (n = 650), where students must comprehend and remember concepts and techniques within a mass of detailed content (Hattie, 2012). FFQs were used in ISAP after modification to suit this cognitive recall unit, where a slide of information was followed by a slide of focus questions. This study used a cross-sectional survey design (Creswell, 2019) to understand whether the FFQ method is effective in an online lecture. Measures include pre and post student satisfaction data (from the university-wide standard survey; questions relating to feedback), pre and post exam scores, and a student questionnaire (n = 79) with three supplementary focus groups. Ethical approval for the study was granted by the participating University’s HREC. Findings indicate that the students were more engaged and motivated to answer the FFQs. Furthermore, FFQs increased the student final exam mark by 10% on average. From the questionnaires, students reported that the FFQs assisted their understanding and motivation to reflect on learning. From the focus groups, students reported that FFQs increased their self-confidence when their responses to the FFQs were correct, and they recommended FFQs be used in other units. Future studies on FFQs in additional units are planned over several years to understand if they are relevant to other STEM content, and to explore if FFQs produce similar outcomes.
... The benefits, challenges and future implications of using digital health technologies to enhance assessment feedback in midwifery education were explored as part of an evaluation of a new communication strategy implemented in a midwifery programme in northwest England. Learners often report dissatisfaction with written feedback (Deeley et al, 2019) stating concerns with content and ambiguous and unclear comments. Timeliness is also a contentious issue with learners, who have stated that feedback often occurs too late in a project to be used (Race, 2020). ...
Article
Background/Aims Feedback plays a pivotal role in learning, but traditional written feedback often lacks engagement and specificity, hindering learners' ability to effectively apply feedback. In midwifery education, the need for innovative feedback delivery mechanisms is pronounced. The aim of this study was to evaluate the integration of digital health technologies in feedback delivery in a midwifery programme. Methods A novel communication strategy was implemented in a midwifery programme in northwest England, where the benefits, challenges and future implications of leveraging digital health technologies for assessment feedback were assessed. Results Preferences were mixed, with 45.1% of learners favouring written feedback. Verbal feedback was perceived as more personal and motivating, and valued for its nuance, tone and ability to clarify complex points, although written feedback provided clearer, detailed information for future reference. Less experienced markers struggled with verbal feedback, while more experienced markers appreciated the quicker, more refreshing process. All markers found feedback templates helpful for ensuring equitable feedback. Conclusions This study scrutinised the significance of rethinking feedback delivery in midwifery education and indicates that digital health technologies present promising opportunities for reshaping the feedback landscape. Implications for practice Developing student confidence and competence in digital literacy remains a significant challenge. Higher education institutions can collaborate with healthcare providers to offer training in digital health technologies, helping midwives adapt to modern clinical environments.
... This type of negative feedback might increase dissatisfaction with feedback and employees could react strongly against it. Feedback can often be misunderstood, which could affect how others react to it (Deeley et al., 2019). Thus, future scholars could examine employee reactions by eliminating normative feedback, referring to one's performance alone. ...
Article
Feedback is a vital human resource development (HRD) practice, extensively researched and used to regulate employee behavior and performance. However, despite a century of research and immense significance and use, we still do not fully know why some accept feedback while others reject it. Critics blame both providers and recipients, as well as feedback message format, for this failure. In this study, I investigated whether the focus of the supervisory feedback (negative vs. negative and facilitative) could enhance employees' responses to feedback (e.g., acceptance and use). I also examined whether employees' mindset (i.e., fixed vs. growth) would moderate these relationships. I proposed that employee coaching (i.e., negative and facilitative) would be more accepted than negative feedback alone. In addition, I expected a positive moderating role of the growth mindset between supervisory feedback and employees' responses. To test these assumptions, I conducted a laboratory experimental vignette study (N = 69). In line with propositions, employee coaching had a larger effect on the employees' responses to feedback (e.g., feedback acceptance; M = 4.95, SD = 1.24) than negative feedback alone (M = 4.08, SD = 1.35). In addition, simple slope results showed that employee coaching was significantly higher than negative feedback for growth mindset (i.e., +1 SD). Finally, path analysis revealed that the interaction between negative feedback, employee coaching, and mindset yielded the strongest positive effect on employees' responses to feedback. Overall, findings add to and endorse calls for more future‐focused HRD practices during feedback interventions. In addition, for effective feedback, this study calls for HRD practitioners to account for all critical factors involved in feedback exchanges, from provider to recipient and feedback message.
... Therefore, journals such as 'Assessment and Evaluation in Higher Education' and 'Teaching in Higher Education' have extensively published on the topic of feedback literacy. This finding is expected, given that feedback has long been acknowledged as one of the most challenging aspects in teaching and learning in higher education (Deeley et al. 2019;Carless and Winstone 2023). Furthermore, our analysis highlights the important role these journals play in the dissemination of knowledge on feedback literacy. ...
Article
Background: In an undergraduate Bachelor of Nursing course, students enrol in an Evidence-Based Practice (EBP) subject. Three scaffolded tasks assess students' ability to find, summarise and synthesise professional literature. For each assessment task, students are provided feedback that informs subsequent assessments. It is unclear how students use the feedback, and what elements of feedback are perceived as being most useful. Aim: This study aimed to examine nursing students' perspectives of receiving feedback from scaffolded assessments and how feedback received influenced the development of the final assessment task. Design: A mixed-methods approach was used with a cross-sectional survey and online qualitative interviews. Setting: This research was conducted at Deakin University, School of Nursing and Midwifery in Melbourne, Australia. Participants: One hundred forty-eight students (17.4% of N = 851) participated in the cross-sectional survey. Seven students participated in the online qualitative interviews. Methods: Students enrolled in the EBP subject in Trimester, 2023 were invited to participate in a survey where they rated their experience of assessment feedback using a Likert scale. Students were also invited to participate in an online qualitative interview that further explored their perceptions. Results: Assessment exemplars were highly beneficial to understanding the assessment task (87.8% agree/strongly agree, n = 107). Responding to feedback was challenging (38.5%, n = 47). Qualitative themes identified were engagement with assessments, appropriateness of feedback, and use of scaffolded feedback. Conclusions: This study highlights that scaffolded feedback is valuable for student learning. Feedback in each rubric criterion helps with the alignment of learning outcomes. Resources that support students in how to respond to feedback are important.
Article
The aims of this paper are to examine and critically evaluate a selection of different technological methods that were specifically chosen for their alignment with, and potential to enhance, extant assessment for learning practice. The underpinning perspectives are that (a) both formative and summative assessment are valuable opportunities for learning and (b) using technology may enhance learning in assessment and feedback processes. Drawing on the literature and empirical evidence from a research study in a Scottish university, the advantages and drawbacks of using technology are examined. It is asserted that by adopting a flexible approach and taking small incremental steps, the use of different types of technology can be beneficial in facilitating effective assessment for learning and feedback in higher education.
Article
In recent years, research and practice focused on staff and students working in partnership to co-design learning and teaching in higher education has increased. However, within staff–student partnerships a focus on assessment is relatively uncommon, with fewer examples evident in the literature. In this paper, we take the stance that all assessment can be oriented for learning, and that students’ learning is enhanced by improving their level of assessment literacy. A small study in a Scottish university was undertaken that involved a range of different adaptations to assessment and feedback, in which students were invited to become partners in assessment. We argue that a partnership approach, designed to democratise the assessment process, not only offered students greater agency in their own and their peers’ learning, but also helped students to enhance their assessment literacy. Although staff and students reported experiencing a sense of risk, there was immense compensation through increased motivation, and a sense of being part of an engaged learning community. Implications for partnership in assessment are discussed and explored further. We assert that adopting staff–student partnership in assessment and more democratic classroom practices can have a wide range of positive benefits.
Article
Qualitative description (QD) is a term that is widely used to describe qualitative studies of health care and nursing-related phenomena. However, limited discussions regarding QD are found in the existing literature. In this systematic review, we identified characteristics of methods and findings reported in research articles published in 2014 whose authors identified the work as QD. After searching and screening, data were extracted from the sample of 55 QD articles and examined to characterize research objectives, design justification, theoretical/philosophical frameworks, sampling and sample size, data collection and sources, data analysis, and presentation of findings. In this review, three primary findings were identified. First, although there were some inconsistencies, most articles included characteristics consistent with the limited available QD definitions and descriptions. Next, flexibility or variability of methods was common and effective for obtaining rich data and achieving understanding of a phenomenon. Finally, justification for how a QD approach was chosen and why it would be an appropriate fit for a particular study was limited in the sample and, therefore, in need of increased attention. Based on these findings, recommendations include encouragement to researchers to provide as many details as possible regarding the methods of their QD studies so that readers can determine whether the methods used were reasonable and effective in producing useful findings. © 2016 Wiley Periodicals, Inc.
Article
Despite the recognised importance of feedback, and the effort that academic staff put into providing it, little is known about how students make use of their feedback to improve their future learning. This paper explores the processes that students attempt to use to feed-forward, and whether the use of technology might support and enhance these processes. In a small-scale, in-depth, qualitative study it was found that despite student preferences to initially engage with certain forms of feedback, efforts were made to apply any feedback regardless of format when clear links could be made to future learning. However, this was limited to superficial connections and there was no clear evidence that students attempted to make deeper connections; students generally expressed that content- or assignment-specific feedback was difficult to apply. The study established the need for further investigation into how tutors construct the feedback given and how students deconstruct that feedback.
Article
The general view of descriptive research as a lower level form of inquiry has influenced some researchers conducting qualitative research to claim methods they are really not using and not to claim the method they are using: namely, qualitative description. Qualitative descriptive studies have as their goal a comprehensive summary of events in the everyday terms of those events. Researchers conducting qualitative descriptive studies stay close to their data and to the surface of words and events. Qualitative descriptive designs typically are an eclectic but reasonable combination of sampling, and data collection, analysis, and re-presentation techniques. Qualitative descriptive study is the method of choice when straight descriptions of phenomena are desired.
Article
A variety of understandings of feedback exist in the literature, which can broadly be categorised as cognitivist information transmission and socio-constructivist. Understanding feedback as information transmission or ‘telling’ has until recently been dominant. However, a socio-constructivist perspective of feedback posits that feedback should be dialogic and help to develop students’ ability to monitor, evaluate and regulate their learning. This paper is positioned as part of the shift away from seeing feedback as input, to exploring feedback as a dialogical process focusing on effects, through presenting an innovative methodological approach to analysing feedback dialogues in situ. Interactional analysis adopts the premise that artefacts and technologies set up a social field, where understanding human–human and human–material activities and interactions is important. The paper suggests that this systematic approach to analysing dialogic feedback can enable insight into previously undocumented aspects of feedback, such as the interactional features that promote and sustain feedback dialogue. The paper discusses methodological issues in such analyses and implications for research on feedback.