EXPLORING THE QUALITIES OF VIDEO FEEDBACK ARTEFACTS IN
HIGHER EDUCATION: A REVIEW OF THE LITERATURE
T. Bahula, R. Kay
Ontario Tech University (CANADA)
Abstract
Feedback is essential for learning and identifies perceived gaps between students’ observed
performance and desired outcomes. In higher education, feedback is often text-based, despite
significant advances in the ease of recording and distributing video in digital learning environments.
While recent studies have investigated student and instructor perceptions of video-based feedback, the
characteristics of the videos created are only beginning to be understood. The purpose of this study was
to conduct a systematic literature review (2009-2019) of research on the qualities of videos created to
provide feedback to higher education students. Sixty-seven peer-reviewed articles on the use of video-
based feedback, selected from a systematic search of electronic databases, were organized and
examined. While most articles described the video feedback provided, only seven systematically
researched the content of videos received by students. Analysis of the literature revealed that video-
based feedback included more comments on thesis development, structure, and conceptual
engagement. Language choices tended toward praise, growth, and relationship building. Further, the
feedback was more conversational and featured more expanding language, fewer imperatives, and less
proclaiming language. This paper concludes with recommendations for the provision of video-based
feedback arising from the analysis of feedback artefacts and a discussion of research opportunities.
Keywords: Video feedback, screencast feedback, assessment, higher education, systematic review.
1 INTRODUCTION
Feedback is an essential part of the teaching and learning process, and research has confirmed its
importance. An evaluation of 500 meta-analyses confirmed that feedback could have a critical role in
improving student outcomes [1]. However, the synthesis also found that the effect size had a high degree
of variance [1]. Further, some feedback interventions had a negative effect [2], highlighting the
importance for educators to design the feedback process and artefacts carefully. Narrowly interpreted,
feedback is a monologic transmission identifying a gap between actual performance and desired
outcomes [3]. Such information may serve as evidence for an assigned grade, but this sort of feedback
limits student engagement [4], [5]. Dialogic feedback communication, a broader conception of feedback,
seeks to develop students' ability to monitor, evaluate, and regulate their learning and facilitate their
understanding and future performance [6]. Consequently, one of the primary roles of instructors in higher
education is providing feedback that engages students and sparks high-quality dialogue [7].
One-on-one tutorial instruction was found to yield a significant increase in educational achievement,
prompting a search for methods that could deliver similar results without the high cost [8]. Likewise, face-
to-face conferences have been considered the “gold standard” for feedback [9], creating an
opportunity for dialogue [10] and clarification of written feedback [11]. Nevertheless, text-based
feedback has been the norm in higher education, stereotypically with instructors handwriting comments
and codes on students’ submissions in red ink [12]. Extensive corrections and comments written with a
red pen evoked disappointment and discouragement [13]. As a result, some have recommended that
instructors use a neutral colour of ink for marking [14]. However, the colour of the ink was not the only
difficulty with handwritten feedback. Instructors lacked pedagogical training to provide high-quality
feedback [15]. Feedback lacked specificity and guidance, focused on the negative, and was misaligned
with learning goals [16]–[18]. Students had difficulty making connections between grades, feedback,
and assessment criteria [19] and still experienced negative emotional responses [20]. With the rise of
digital submissions, text-based feedback shifted to comments typed in the digital margins [10], [21].
While this change removed the challenge of deciphering illegible scratches [4], [19], [22], the other
problems remained.
Students expect feedback that is timely, personal, explicable, criteria-referenced, objective, and useful
for improvement in future work, according to a review of 37 empirical studies on assessment feedback
in higher education [23]. While improving the content of text-based feedback could address some of
these expectations, the medium’s constraints make fully meeting them challenging. Since at
least the reel-to-reel tape days, instructors have experimented with other feedback media [24]. Most
recently, video-based feedback, including screencast and webcam video, has been used by some
instructors. The purpose of the current study was to explore the qualities of the feedback artefacts
provided by reviewing the literature about the use of video-based feedback in higher education.
2 METHODOLOGY
2.1 Overview
We conducted a systematic literature review on the use of video-based feedback in higher education
using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework
[25]. The PRISMA process attempts to create a reproducible, comprehensive, and reliable overview of
a topic by identifying, screening, analyzing, and synthesizing primary research sources [26]. The
identification and screening phases were conducted iteratively. We established selection criteria, tested
search terms, and used those terms to search targeted databases. We extended the search to high-
quality educational journals and scanned articles that met eligibility requirements for additional sources.
Articles that met the eligibility criteria were analyzed through careful reading, extracting characteristics,
describing methodologies, and coding emergent themes. Results were synthesized by aggregating
quantitative data and configuring qualitative results [26]. Using the PRISMA framework, we found 67
peer-reviewed articles on the use of video-based feedback in higher education. Most of the research
focused on student perceptions and a smaller number on instructor perceptions. While most articles
described the feedback in general terms, only seven articles systematically researched the content and
reported on the qualities of the videos received by students.
2.2 Data Analysis
We analyzed each article’s key characteristics to understand the context in which video-based
feedback was used. Data items included the year of publication, country, academic level, academic
discipline, assessment type, media used, and feedback length. We then calculated descriptive
frequency statistics for each item. Further, we discovered emerging themes by carefully reading the
results and discussion sections and recording key findings. We employed a constant comparative
method [27] to code the articles, promoting consistency and alignment with emerging themes.
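To make the tabulation step concrete, the following is a minimal, illustrative sketch of how descriptive frequency statistics for the extracted characteristics could be computed; the article does not report the tooling used, and the records and field names below are hypothetical placeholders rather than data from the reviewed studies.

# A minimal sketch (not the authors' actual tooling): tabulating descriptive
# frequency statistics for characteristics extracted from each article.
# The records and field names below are hypothetical placeholders.
import pandas as pd

articles = pd.DataFrame([
    {"year": 2015, "country": "USA", "level": "undergraduate",
     "discipline": "education", "media": "webcam", "length_min": 5},
    {"year": 2017, "country": "USA", "level": "undergraduate",
     "discipline": "language learning", "media": "screencast", "length_min": 12},
    {"year": 2019, "country": "Australia", "level": "undergraduate",
     "discipline": "humanities", "media": "screencast", "length_min": 15},
])

# Frequency counts for each categorical data item
for item in ["year", "country", "level", "discipline", "media"]:
    print(articles[item].value_counts(), "\n")

# Simple descriptive statistics for video length, where reported
print(articles["length_min"].describe())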
2.3 Context
The seven articles reported on in this study were published between 2012 and 2019, with all but one
published since 2015. All but one of the studies were conducted in the United States, with the other set
in Australia. The studies focused on undergraduate students and pre-service teachers, with one also
including graduate students. Students participated in classroom or blended learning formats.
Video-based feedback, both formative and summative, was received in the academic disciplines of
education, language learning, and humanities. About half of the instructors provided webcam video,
while the others provided screencast video. Video length, where reported, ranged between 5 and 15
minutes.
3 RESULTS
3.1 Overview
Overall, the seven articles that analyzed video-based feedback artefacts found more cognitive and social
presence indicators in video-based feedback than in text-based feedback. Video-based feedback artefacts promoted
cognitive presence by making more comments on thesis development, structure, and conceptual
engagement. The language choices, which tended toward praise, growth, and relationship building,
indicated instructors' increased social presence. Further, the feedback received was more
conversational and featured more expanding language, fewer imperatives, and less proclaiming
language.
3.2 Cognitive Presence
Cognitive presence is a construct of the Community of Inquiry framework. It is defined as “the extent to
which participants in any particular configuration of a community of inquiry are able to construct meaning
through sustained communication” [28, p. 89] and is central to the learning process [29]. Analysis of
feedback artefacts provided evidence that video-based feedback had features that promoted the
development of cognitive presence.
Henderson and Phillips [30] reported positive results after using video feedback with teacher education
students. Their analysis of 30 feedback artefacts indicated that video feedback emphasized conceptual
engagement, growth, and relationship building. In contrast, text-based feedback was focused on textual
and structural issues.
Moore and Filling [31] reported on the use of video and screencast feedback with undergraduate
students in the humanities. Analysis of the feedback artefacts found that the majority of comments (>68%)
addressed higher-level cognitive areas such as thesis statements and organization. Further, the
analysis revealed that video-based feedback had more suggestions for improvements and more
elaborations than corrections.
Elola and Oskoz [32] examined the screencast feedback provided to four SFL students. Textual analysis
of the feedback artefacts revealed that the instructor made more frequent comments on content,
structure, and organization when providing screencast feedback than with text-based feedback. On the
other hand, the instructor provided a more consistent and frequent indication of errors in form when
using text-based feedback. The indirect comments used in screencast feedback were less precise than
the coding system used in digital markup.
3.3 Social Presence
The concept of social presence is also a construct included in the Community of Inquiry framework. It
refers to the perception of another individual as a real person in mediated online communication [33].
Social presence consists of affective expression, group cohesion, and interaction [34]. These aspects
of social presence were evident in the artefacts of video-based feedback received by students. In
contrast to the overwhelmingly positive perceptions of social presence in video-based feedback on the
part of many instructors and students [35], one of the studies that investigated artefacts of video-based
feedback found no significant difference in social presence indicators between video and text feedback
[36], while three found positive results [37]–[39].
Borup et al. [37] sought to determine how the content of video feedback differed from digital markup. At
the end of the study, the comments from feedback samples of both types were collected and analyzed.
Videos were transcribed, and feedback comments were coded. The codes used were loosely related to
social presence categories. The average frequencies of the indicators for relationship building, praise,
support, and general correction were higher for video feedback than digital markup. On the other hand,
digital markup had more frequent indicators for specific correction.
The same researchers investigated a similar question a few years later [36]. Their second study retained
the same comparison between video and digital markup feedback and the same method of transcribing
and coding feedback comments. However, the researchers aligned the coding more closely with the
social presence construct. This study found no significant difference in the frequency of social presence
indicators between video feedback and text-based feedback. The frequency of the indicators for
cohesive expressions of small talk and complimenting was marginally higher in video feedback. On the
other hand, indicators for interactive expressions of asking questions and referring to previous
conversations were minimally higher in digital markup feedback. The authors acknowledged that coding
for social presence indicators in audio and video compared frequency, not quality. As such, the analysis
may not adequately account for all that the media communicates (e.g., tone of voice, visual self-
disclosure).
Cunningham [38], [39] also undertook two studies that analyzed the content of video-based feedback
compared to text-based feedback. The first study examined a small sample (n = 32) of artefacts using
the Systemic Functional Linguistics framework of Appraisal, which included categories for graduation,
appreciation, and engagement in language [38]. The analysis indicated that screencast feedback
contained higher levels of praise and criticism, and that its criticism was more likely to be softened with
words like “a little.”
In contrast, digital markup was more critical and less likely to be hedged. Also, in the engagement
category, screencast feedback was found to contain significantly more interpersonal and conversational
language as indicated by a much higher frequency of expanding language (95% vs. 62% for digital
markup) and much lower frequencies of imperatives (21% vs. 83%) and proclaiming/disclaiming
language (5% vs. 38%).
The second study used the same method but included a much larger sample size (n = 136) and focused
on the Appraisal framework's engagement category [39]. This study reported that clauses taken from
screencast feedback comments were 4.7 times more likely to use expanding language than those from
digital markup (p < .001). The use of expanding vocabulary invites a student into a conversation and
gives space for other perspectives. On the other hand, contracting language was significantly more
prevalent in text-based feedback; it diminished the interpersonal aspects of the communication and
positioned the instructor as an authority.
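To give a sense of the scale of this difference, consider a purely illustrative calculation: assuming the reported figure is an odds ratio, hypothetical clause counts (not drawn from the study) of 70 expanding clauses out of 100 in screencast feedback and 33 out of 100 in digital markup would yield

OR = (70/30) / (33/67) ≈ 2.33 / 0.49 ≈ 4.7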
4 CONCLUSIONS
In this study, we conducted a systematic review of seven articles that investigated artefacts of video-based
feedback in higher education. While the perspectives of instructors and students are important to
consider, the artefacts may be more revealing of the affordances and influences of video-based
feedback. Analyzing feedback artefacts provides a different vantage point than surveys of student
perceptions, which are prone to acquiescence and novelty bias on the part of respondents. Video-based
feedback artefacts were found to contain high levels of social and cognitive presence. The language
used by instructors providing video-based feedback promoted the perception of the instructor as a real
person whose feedback invited dialogue and prompted student response. Additionally, audio-visual
cues such as tone of voice and visual self-disclosure reinforced this message.
Based on this review, several research questions on the use of video-based feedback need to be
answered more fully. First, the question of the differences between video-based feedback and text-
based feedback artefacts has not been well researched. Few studies were found that examined video-
based feedback artefacts, and the sample sizes in most were small. Second, more research is needed on
the extent to which individual differences, pedagogical awareness, and feedback literacy influence the
artefacts of video-based feedback. Third, the influence of the increased social and cognitive presence
of video-based feedback on the learning outcomes of students has received little attention.
REFERENCES
[1] J. Hattie and H. Timperley, “The power of feedback,” Review of Educational Research, vol. 77, no.
1, pp. 81–112, 2007, doi: 10/bf4d36.
[2] A. N. Kluger and A. DeNisi, “The effects of feedback interventions on performance: A historical
review, a meta-analysis, and a preliminary feedback intervention theory.,” Psychological Bulletin,
vol. 119, no. 2, pp. 254–284, 1996, doi: 10/gtw.
[3] D. Boud and E. Molloy, “Rethinking models of feedback for learning: The challenge of design,”
Assessment and Evaluation in Higher Education, vol. 38, no. 6, pp. 698–712, 2013, doi: 10/gcphxw.
[4] M. Price, K. Handley, J. Millar, and B. O’Donovan, “Feedback: all that effort, but what is the effect?,”
Assessment & Evaluation in Higher Education, vol. 35, no. 3, pp. 277–289, May 2010, doi:
10/drrnc3.
[5] A. M. Rae and D. K. Cochrane, “Listening to students: How to make written assessment feedback
useful,” Active Learning in Higher Education, vol. 9, no. 3, pp. 217–230, 2008, doi: 10/dhjczz.
[6] R. Ajjawi and D. Boud, “Researching feedback dialogue: an interactional analysis approach,”
Assessment & Evaluation in Higher Education, vol. 42, no. 2, pp. 252–265, Feb. 2017, doi:
10/gcph6n.
[7] C. Evans, “Making sense of assessment feedback in higher education,” Review of Educational
Research, vol. 83, no. 1, pp. 70–120, 2013, doi: 10/gf82tm.
[8] B. S. Bloom, “The 2 sigma problem: The search for methods of group instruction as effective as
one-to-one tutoring,” Educational Researcher, vol. 13, no. 6, pp. 4–16, Jun. 1984, doi: 10/ddj5p7.
[9] C. M. Anson, D. P. Dannels, J. I. Laboy, and L. Carneiro, “Students’ perceptions of oral screencast
responses to their writing: Exploring digitally mediated identities,” Journal of Business and Technical
Communication, vol. 30, no. 3, pp. 378–411, Mar. 2016, doi: 10/gg57hm.
[10] T. Ryan, M. Henderson, and M. Phillips, “Feedback modes matter: Comparing student perceptions
of digital and non-digital feedback modes in higher education,” British Journal of Educational
Technology, vol. 50, no. 3, pp. 1507–1523, 2019, doi: 10/gg57hg.
[11] J. Sommers, “The effects of tape-recorded commentary on student revision: A case study,” Journal
of Teaching Writing, vol. 8, no. 2, pp. 49–76, 1989.
[12] N. Sommers, “Responding to student writing,” College Composition and Communication, vol. 33,
no. 2, pp. 148–156, 1982, doi: 10/cz9brj.
[13] H. D. Semke, “Effects of the red pen,” Foreign Language Annals, vol. 17, no. 3, pp. 195–202, 1984,
doi: 10/fnqggc.
[14] R. L. Dukes and H. Albanesi, “Seeing red: Quality of an essay, color of the grading pen, and student
reactions to the grading process,” The Social Science Journal, vol. 50, no. 1, pp. 96–100, Mar. 2013,
doi: 10/f4r7rf.
[15] K. Richards, T. Bell, and A. Dwyer, “Training sessional academic staff to provide quality feedback
on university students’ assessment: Lessons from a faculty of law learning and teaching project,”
The Journal of Continuing Higher Education, vol. 65, no. 1, pp. 25–34, Jan. 2017, doi: 10/gg57fr.
[16] C. Glover and E. Brown, “Written feedback for students: too much, too detailed or too
incomprehensible to be effective?,” Bioscience Education, vol. 7, no. 1, pp. 1–16, May 2006, doi:
10/gg57bp.
[17] M. R. Weaver, “Do students value feedback? Student perceptions of tutors’ written responses,”
Assessment & Evaluation in Higher Education, vol. 31, no. 3, pp. 379–394, 2006, doi: 10/cjknpn.
[18] E. Pitt and L. Norton, “‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded
work and what they do with it.,” Assessment & Evaluation in Higher Education, vol. 42, no. 4, pp.
499–516, Jun. 2017, doi: 10/gdqbvq.
[19] I. Glover, H. J. Parkin, S. Hepplestone, B. Irwin, and H. Rodger, “Making connections: technological
interventions to support students in using, and tutors in creating, assessment feedback,” Research
in Learning Technology, vol. 23, no. 1, p. 27078, 2015, doi: 10/ghsgdz.
[20] S. Shields, “‘My work is bleeding’: exploring students’ emotional responses to first-year assignment
feedback,” Teaching in Higher Education, vol. 20, no. 6, pp. 614–624, Aug. 2015, doi: 10/gf9k57.
[21] H. J. Parkin, S. Hepplestone, G. Holden, B. Irwin, and L. Thorpe, “A role for technology in enhancing
students’ engagement with feedback,” Assessment & Evaluation in Higher Education, vol. 37, no.
8, pp. 963–973, 2012, doi: 10/d8njhq.
[22] S. Hepplestone, G. Holden, B. Irwin, H. J. Parkin, and L. Thorpe, “Using technology to encourage
student engagement with feedback: a literature review,” Research in Learning Technology, vol. 19,
no. 2, pp. 117–127, 2011, doi: 10/fx6rbz.
[23] J. Li and R. De Luca, “Review of assessment feedback.,” Studies in Higher Education, vol. 39, no.
2, pp. 378–393, Mar. 2014, doi: 10/gfxd5q.
[24] J. B. Killoran, “Reel-to-reel tapes, cassettes, and digital audio media: Reverberations from a half-
century of recorded-audio response to student writing,” Computers and Composition, vol. 30, no. 1,
pp. 37–49, 2013, doi: 10/gcpgwb.
[25] A. Liberati et al., “The PRISMA statement for reporting systematic reviews and meta-analyses of
studies that evaluate health care interventions: Explanation and elaboration,” PLOS Medicine, vol.
6, no. 7, pp. 1–28, Jul. 2009, doi: 10/cw592j.
[26] D. Gough and J. Thomas, “Systematic reviews of research in education: Aims, myths and multiple
methods,” Review of Education, vol. 4, no. 1, pp. 84–102, 2016, doi: 10/gg57hx.
[27] J. M. Corbin and A. L. Strauss, Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory, 3rd ed. Los Angeles, CA: Sage Publications, Inc, 2008.
[28] D. R. Garrison, T. Anderson, and W. Archer, “Critical inquiry in a text-based environment: Computer
conferencing in higher education,” The Internet and Higher Education, vol. 2, no. 2, pp. 87–105,
2000, doi: 10/bxnpwj.
[29] D. R. Garrison, M. Cleveland-Innes, and T. S. Fung, “Exploring causal relationships among
cognitive, social and teaching presence: Student perceptions of the community of inquiry
framework,” The Internet and Higher Education, vol. 13, no. 1–2, pp. 31–36, 2010, doi: 10/bm4xmk.
[30] M. Henderson and M. Phillips, “Video-based feedback on student assessment: Scarily personal,”
Australasian Journal of Educational Technology, vol. 31, no. 1, pp. 51–66, Jan. 2015, doi:
10/ghsgd2.
[31] N. S. Moore and M. Filling, “iFeedback: Using video technology for improving student writing,”
Journal of College Literacy & Learning, vol. 38, pp. 3–14, Jan. 2012, [Online]. Available: https://j-
cll.org/volume-38-2012.
[32] I. Elola and A. Oskoz, “Supporting second language writing using multimodal feedback,” Foreign
Language Annals, vol. 49, no. 1, pp. 58–74, Feb. 2016, doi: 10/gg57f5.
[33] D. R. Garrison and J. B. Arbaugh, “Researching the community of inquiry framework: Review,
issues, and future directions,” The Internet and Higher Education, vol. 10, no. 3, pp. 157–172, 2007,
doi: 10/fq3w8s.
[34] D. R. Garrison, T. Anderson, and W. Archer, “The first decade of the community of inquiry
framework: A retrospective,” The Internet and Higher Education, vol. 13, no. 1, pp. 5–9, 2010, doi:
10/cgsxxt.
[35] T. F. Bahula and R. H. Kay, “Exploring student perceptions of video feedback: A review of the
literature,” in ICERI2020 Proceedings, Nov. 2020, pp. 6535–6544, doi: 10/ghs38b.
[36] R. A. Thomas, R. E. West, and J. Borup, “An analysis of instructor social presence in online text
and asynchronous video feedback comments,” Internet and Higher Education, vol. 33, pp. 61–73,
Apr. 2017, doi: 10/f96nbn.
[37] J. Borup, R. E. West, and R. A. Thomas, “The impact of text versus video communication on
instructor feedback in blended courses,” Educational Technology Research and Development, vol.
63, no. 2, pp. 161–184, Feb. 2015, doi: 10/f65vp5.
[38] K. J. Cunningham, “APPRAISAL as a framework for understanding multimodal electronic feedback:
Positioning and purpose in screencast video and text feedback in ESL writing,” Writing & Pedagogy,
vol. 9, no. 3, pp. 457–485, 2017, doi: 10/gf9rfh.
[39] K. J. Cunningham, “How language choices in feedback change with technology: Engagement in
text and screencast feedback on ESL writing,” Computers & Education, vol. 135, pp. 91–99, Jul.
2019, doi: 10/gf9mk4.