EXPLORING THE QUALITIES OF VIDEO FEEDBACK ARTEFACTS IN
HIGHER EDUCATION: A REVIEW OF THE LITERATURE
T. Bahula, R. Kay
Ontario Tech University (CANADA)
Abstract
Feedback is essential for learning and identifies perceived gaps between students’ observed
performance and desired outcomes. In higher education, feedback is often text-based, despite
significant advances in the ease of recording and distributing video in digital learning environments.
While recent studies have investigated student and instructor perceptions of video-based feedback, the
characteristics of the videos created are only beginning to be understood. The purpose of this study was
to conduct a systematic literature review (2009-2019) of research on the qualities of videos created to
provide feedback to higher education students. Sixty-seven peer-reviewed articles on the use of video-
based feedback, selected from a systematic search of electronic databases, were organized and
examined. While most articles described the video feedback provided, only seven systematically
researched the content of videos received by students. Analysis of the literature revealed that video-
based feedback included more comments on thesis development, structure, and conceptual
engagement. Language choices tended toward praise, growth, and relationship building. Further, the
feedback was more conversational and featured more expanding language, fewer imperatives, and less
proclaiming language. This paper concludes with recommendations for the provision of video-based
feedback arising from the analysis of feedback artefacts and a discussion of research opportunities.
Keywords: Video feedback, screencast feedback, assessment, higher education, systematic review.
1 INTRODUCTION
Feedback is an essential part of the teaching and learning process, and research has confirmed its
importance. A synthesis of some 500 meta-analyses found that feedback can play a critical role in
improving student outcomes [1], but also that its effect sizes varied widely [1]. Indeed, some feedback
interventions had a negative effect [2], underscoring the need for educators to design the feedback
process and its artefacts carefully. Narrowly interpreted,
feedback is a monologic transmission identifying a gap between actual performance and desired
outcomes [3]. Such information may serve as evidence for an assigned grade, but this sort of feedback
limits student engagement [4], [5]. Dialogic feedback communication, a broader conception of feedback,
seeks to develop students' ability to monitor, evaluate, and regulate their learning and facilitate their
understanding and future performance [6]. Consequently, one of the primary roles of instructors in higher
education is providing feedback that engages students and sparks high-quality dialogue [7].
One-on-one tutorial instruction was found to yield a significant increase in educational achievement,
leading to a search for methods that could deliver similar results without the high cost [8]. Likewise, face-
to-face conferences have been considered to be the “gold standard” for feedback [9], creating an
opportunity for dialogue [10] and clarification of written feedback [11]. Nevertheless, text-based
feedback has been the norm in higher education, stereotypically with instructors handwriting comments
and codes on students’ submissions in red ink [12]. Extensive corrections and comments written with a
red pen evoked disappointment and discouragement [13]. As a result, some have recommended that
instructors use a neutral colour of ink for marking [14]. However, the colour of the ink was not the only
difficulty with handwritten feedback. Instructors lacked pedagogical training to provide high-quality
feedback [15]. Feedback lacked specificity and guidance, focused on the negative, and was misaligned
with learning goals [16]–[18]. Students had difficulty making connections between grades, feedback,
and assessment criteria [19] and still experienced negative emotional responses [20]. With the rise of
digital submissions, text-based feedback shifted to comments typed in the digital margins [10], [21].
While this change removed the challenge of deciphering illegible scratches [4], [19], [22], the other
problems remained.
Students expect feedback that is timely, personal, explicable, criteria-referenced, objective, and useful
for improvement in future work, according to a review of 37 empirical studies on assessment feedback
in higher education [23]. While improving the content of text-based feedback could address some of
these expectations, the constraints of the medium make fully meeting them challenging. Since at
least the reel-to-reel tape days, instructors have experimented with other feedback media [24]. Most
recently, video-based feedback, including screencast and webcam video, has been used by some
instructors. The purpose of the current study was to explore the qualities of the feedback artefacts
provided by reviewing the literature about the use of video-based feedback in higher education.
2 METHODOLOGY
2.1 Overview
We conducted a systematic literature review on the use of video-based feedback in higher education
using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework
[25]. The PRISMA process attempts to create a reproducible, comprehensive, and reliable overview of
a topic by identifying, screening, analyzing, and synthesizing primary research sources [26]. The
identification and screening phases were conducted iteratively. We established selection criteria, tested
search terms, and used those terms to search targeted databases. We extended the search to high-
quality educational journals and scanned articles that met eligibility requirements for additional sources.
Articles that met the eligibility criteria were analyzed through careful reading, extracting characteristics,
describing methodologies, and coding emergent themes. Results were synthesized by aggregating
quantitative data and configuring qualitative results [26]. Using the PRISMA framework, we found 67
peer-reviewed articles on the use of video-based feedback in higher education. Most of the research
focused on student perceptions and a smaller number on instructor perceptions. While most articles
described the feedback in general terms, only seven articles systematically researched the content and
reported on the qualities of the videos received by students.
2.2 Data Analysis
We analyzed each article's key characteristics in an attempt to understand the context of the use of
video-based feedback. Data items included the year of publication, country, academic level, academic
discipline, assessment type, media used, and feedback length. We then calculated descriptive
frequency statistics for each item. Further, we identified emergent themes by carefully reading the
results and discussion sections and recording key findings. We employed a constant comparative
method [27] to code the articles, promoting consistency and alignment with the emerging themes.
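To make the tabulation step concrete, the following minimal sketch shows how descriptive frequency statistics can be calculated for coded data items; the field names and values are hypothetical placeholders, not the coding instrument used in this study:

import pandas as pd

# Each row represents one reviewed article; the fields mirror the data
# items listed above (year, country, media, feedback length, etc.).
# Values here are invented for illustration only.
articles = pd.DataFrame([
    {"year": 2015, "country": "USA", "media": "webcam video", "length_min": 12},
    {"year": 2017, "country": "USA", "media": "screencast", "length_min": 8},
    {"year": 2019, "country": "Australia", "media": "screencast", "length_min": 15},
])

# Descriptive frequency statistics for each categorical data item.
for item in ["year", "country", "media"]:
    print(articles[item].value_counts().to_string(), "\n")

# Summary statistics for the one continuous item (video length in minutes).
print(articles["length_min"].describe())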
2.3 Context
The seven articles reported on in this study were published between 2012 and 2019, with all but one
published since 2015. All but one of the studies were conducted in the United States, with the other set
in Australia. The studies focused on undergraduate students and pre-service teachers, with one that
also included graduate students. Students were participating in a classroom or blended learning format.
Video-based feedback, both formative and summative, was received in the academic disciplines of
education, language learning, and humanities. About half of the instructors provided webcam video,
while the others provided screencast video. Video length, where reported, ranged between 5 and 15
minutes.
3 RESULTS
3.1 Overview
Overall, the seven articles that analyzed video-based feedback artefacts found more indicators of
cognitive and social presence in the feedback received by students than in comparable text-based
feedback. Video-based feedback artefacts promoted
cognitive presence by making more comments on thesis development, structure, and conceptual
engagement. The language choices, which tended toward praise, growth, and relationship building,
indicated instructors' increased social presence. Further, the feedback received was more
conversational and featured more expanding language, fewer imperatives, and less proclaiming
language.
3.2 Cognitive Presence
Cognitive presence is a construct of the Community of Inquiry framework. It is defined as “the extent to
which participants in any particular configuration of a community of inquiry are able to construct meaning
through sustained communication” [28, p. 89] and is central to the learning process [29]. Analysis of
feedback artefacts provided evidence that video-based feedback had features that promoted the
development of cognitive presence.
Henderson and Phillips [30] reported positive results after using video feedback with teacher education
students. Their analysis of 30 feedback artefacts indicated that video feedback emphasized conceptual
engagement, growth, and relationship building. In contrast, text-based feedback was focused on textual
and structural issues.
Moore and Filling [31] reported on the use of video and screencast feedback with undergraduate
students in the humanities. In the feedback artefacts analyzed, the majority of comments (>68%)
addressed higher-level cognitive areas such as thesis statements and organization. Further, the
analysis revealed that video-based feedback had more suggestions for improvements and more
elaborations than corrections.
Elola and Oskoz [32] examined the screencast feedback provided to four Spanish as a foreign language students. Textual analysis
of the feedback artefacts revealed that the instructor made more frequent comments on content,
structure, and organization when providing screencast feedback than with text-based feedback. On the
other hand, the instructor provided a more consistent and frequent indication of errors in form when
using text-based feedback. The indirect comments used in screencast feedback were less precise than
the coding system used in digital markup.
3.3 Social Presence
Social presence is a second construct of the Community of Inquiry framework. It
refers to the perception of another individual as a real person in mediated online communication [33].
Social presence consists of affective expression, group cohesion, and interaction [34]. These aspects
of social presence were evident in the artefacts of video-based feedback received by students. In
contrast to the overwhelmingly positive perceptions of social presence in video-based feedback on the
part of many instructors and students [35], one of the studies that investigated artefacts of video-based
feedback found no significant difference in indicators [36], while three found positive results [37]–[39].
Borup et al. [37] sought to determine how the content of video feedback differed from digital markup. At
the end of the study, the comments from feedback samples of both types were collected and analyzed.
Videos were transcribed, and feedback comments were coded. The codes used were loosely related to
social presence categories. The average frequencies of the indicators for relationship building, praise,
support, and general correction were higher for video feedback than digital markup. On the other hand,
digital markup had more frequent indicators for specific correction.
The same researchers investigated a similar question a few years later [36]. In their second study, the
comparison between video and digital markup feedback and the method of transcribing and coding
feedback comments were the same. However, the researchers aligned the coding more closely to the
social presence construct. This study found no significant difference in the frequency of social presence
indicators between video feedback and text-based feedback. The frequency of the indicators for
cohesive expressions of small talk and complimenting was marginally higher in video feedback. On the
other hand, indicators for interactive expressions of asking questions and referring to previous
conversations were minimally higher in digital markup feedback. The authors acknowledged that coding
for social presence indicators in audio and video compared frequency, not quality. As such, the analysis
may not adequately account for all that the media communicates (e.g., tone of voice, visual self-
disclosure).
Cunningham [38], [39] also undertook two studies that analyzed the content of video-based feedback
compared to text-based feedback. The first study examined a small sample (n = 32) of artefacts using
the Systemic Functional Linguistics framework of Appraisal, which included categories for graduation,
appreciation, and engagement in language [38]. The analysis indicated that screencast feedback
contained higher levels of both praise and criticism and was more likely to be softened with words like “a little.”
In contrast, digital markup was more critical and less likely to be hedged. Also, in the engagement
category, screencast feedback was found to contain significantly more interpersonal and conversational
language as indicated by much higher frequency of expanding language (95% vs. 62% for digital
markup) and much lower frequencies of imperatives (21% vs. 83%) and proclaiming/disclaiming
language (5% vs. 38%).
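As a worked illustration (not a calculation reported in the original study), the expanding-language frequencies above can be expressed as an odds ratio, the same family of statistic as the likelihood figure reported in the follow-up study below:

\[ \mathrm{OR} = \frac{p_{\text{video}}/(1 - p_{\text{video}})}{p_{\text{text}}/(1 - p_{\text{text}})} = \frac{0.95/0.05}{0.62/0.38} \approx \frac{19.0}{1.63} \approx 11.6 \]

That is, in this first sample the odds of a clause containing expanding language were roughly an order of magnitude higher in screencast feedback; the 4.7 figure below derives from the second study’s larger, different sample.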
The second study used the same method but included a much larger sample size (n = 136) and focused
on the Appraisal framework's engagement category [39]. This study reported that clauses taken from
screencast feedback comments were 4.7 times more likely to use expanding language than those from
digital markup (p < .001). The use of expanding vocabulary invites a student into a conversation and
gives space for other perspectives. On the other hand, contracting language, which diminishes the
interpersonal aspects of communication and positions the instructor as an authority, was significantly
more prevalent in text-based feedback.
4 CONCLUSIONS
In this study, we conducted a systematic review of seven articles that investigated artefacts of video-based
feedback in higher education. While the perspectives of instructors and students are important to
consider, the artefacts may be more revealing of the affordances and influences of video-based
feedback. Analyzing feedback artefacts provides a different vantage point than surveys of student
perceptions, which are prone to acquiescence and novelty bias on the part of respondents. Video-based
feedback artefacts were found to contain high levels of social and cognitive presence. The language
used by instructors providing video-based feedback promoted the perception of the instructor as a real
person whose feedback invited dialogue and prompted student response. Additionally, audio-visual
cues (e.g., tone of voice, visual self-disclosure) reinforced this message.
Based on this review, several research questions on the use of video-based feedback need to be
answered more fully. First, the question of the differences between video-based feedback and text-
based feedback artefacts has not been well-researched. Few studies were found that examined video-
based feedback artefacts, and the sample sizes in most were small. Second, more research is needed on
the extent to which individual differences, pedagogical awareness, and feedback literacy influence the
artefacts of video-based feedback. Third, the influence of the increased social and cognitive presence
of video-based feedback on the learning outcomes of students has received little attention.
REFERENCES
[1] J. Hattie and H. Timperley, “The power of feedback,” Review of Educational Research, vol. 77, no. 1, pp. 81–112, 2007, doi: 10/bf4d36.
[2] A. N. Kluger and A. DeNisi, “The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory,” Psychological Bulletin, vol. 119, no. 2, pp. 254–284, 1996, doi: 10/gtw.
[3] D. Boud and E. Molloy, “Rethinking models of feedback for learning: The challenge of design,” Assessment and Evaluation in Higher Education, vol. 38, no. 6, pp. 698–712, 2013, doi: 10/gcphxw.
[4] M. Price, K. Handley, J. Millar, and B. O’Donovan, “Feedback: all that effort, but what is the effect?,” Assessment & Evaluation in Higher Education, vol. 35, no. 3, pp. 277–289, May 2010, doi: 10/drrnc3.
[5] A. M. Rae and D. K. Cochrane, “Listening to students: How to make written assessment feedback useful,” Active Learning in Higher Education, vol. 9, no. 3, pp. 217–230, 2008, doi: 10/dhjczz.
[6] R. Ajjawi and D. Boud, “Researching feedback dialogue: an interactional analysis approach,” Assessment & Evaluation in Higher Education, vol. 42, no. 2, pp. 252–265, Feb. 2017, doi: 10/gcph6n.
[7] C. Evans, “Making sense of assessment feedback in higher education,” Review of Educational Research, vol. 83, no. 1, pp. 70–120, 2013, doi: 10/gf82tm.
[8] B. S. Bloom, “The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring,” Educational Researcher, vol. 13, no. 6, pp. 4–16, Jun. 1984, doi: 10/ddj5p7.
[9] C. M. Anson, D. P. Dannels, J. I. Laboy, and L. Carneiro, “Students’ perceptions of oral screencast responses to their writing: Exploring digitally mediated identities,” Journal of Business and Technical Communication, vol. 30, no. 3, pp. 378–411, Mar. 2016, doi: 10/gg57hm.
[10] T. Ryan, M. Henderson, and M. Phillips, “Feedback modes matter: Comparing student perceptions of digital and non-digital feedback modes in higher education,” British Journal of Educational Technology, vol. 50, no. 3, pp. 1507–1523, 2019, doi: 10/gg57hg.
[11] J. Sommers, “The effects of tape-recorded commentary on student revision: A case study,” Journal of Teaching Writing, vol. 8, no. 2, pp. 49–76, 1989.
[12] N. Sommers, “Responding to student writing,” College Composition and Communication, vol. 33, no. 2, pp. 148–156, 1982, doi: 10/cz9brj.
[13] H. D. Semke, “Effects of the red pen,” Foreign Language Annals, vol. 17, no. 3, pp. 195–202, 1984, doi: 10/fnqggc.
[14] R. L. Dukes and H. Albanesi, “Seeing red: Quality of an essay, color of the grading pen, and student reactions to the grading process,” The Social Science Journal, vol. 50, no. 1, pp. 96–100, Mar. 2013, doi: 10/f4r7rf.
[15] K. Richards, T. Bell, and A. Dwyer, “Training sessional academic staff to provide quality feedback on university students’ assessment: Lessons from a faculty of law learning and teaching project,” The Journal of Continuing Higher Education, vol. 65, no. 1, pp. 25–34, Jan. 2017, doi: 10/gg57fr.
[16] C. Glover and E. Brown, “Written feedback for students: too much, too detailed or too incomprehensible to be effective?,” Bioscience Education, vol. 7, no. 1, pp. 1–16, May 2006, doi: 10/gg57bp.
[17] M. R. Weaver, “Do students value feedback? Student perceptions of tutors’ written responses,” Assessment & Evaluation in Higher Education, vol. 31, no. 3, pp. 379–394, 2006, doi: 10/cjknpn.
[18] E. Pitt and L. Norton, “‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded work and what they do with it,” Assessment & Evaluation in Higher Education, vol. 42, no. 4, pp. 499–516, Jun. 2017, doi: 10/gdqbvq.
[19] I. Glover, H. J. Parkin, S. Hepplestone, B. Irwin, and H. Rodger, “Making connections: technological interventions to support students in using, and tutors in creating, assessment feedback,” Research in Learning Technology, vol. 23, no. 1, p. 27078, 2015, doi: 10/ghsgdz.
[20] S. Shields, “‘My work is bleeding’: exploring students’ emotional responses to first-year assignment feedback,” Teaching in Higher Education, vol. 20, no. 6, pp. 614–624, Aug. 2015, doi: 10/gf9k57.
[21] H. J. Parkin, S. Hepplestone, G. Holden, B. Irwin, and L. Thorpe, “A role for technology in enhancing students’ engagement with feedback,” Assessment & Evaluation in Higher Education, vol. 37, no. 8, pp. 963–973, 2012, doi: 10/d8njhq.
[22] S. Hepplestone, G. Holden, B. Irwin, H. J. Parkin, and L. Thorpe, “Using technology to encourage student engagement with feedback: a literature review,” Research in Learning Technology, vol. 19, no. 2, pp. 117–127, 2011, doi: 10/fx6rbz.
[23] J. Li and R. De Luca, “Review of assessment feedback,” Studies in Higher Education, vol. 39, no. 2, pp. 378–393, Mar. 2014, doi: 10/gfxd5q.
[24] J. B. Killoran, “Reel-to-reel tapes, cassettes, and digital audio media: Reverberations from a half-century of recorded-audio response to student writing,” Computers and Composition, vol. 30, no. 1, pp. 37–49, 2013, doi: 10/gcpgwb.
[25] A. Liberati et al., “The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration,” PLOS Medicine, vol. 6, no. 7, pp. 1–28, Jul. 2009, doi: 10/cw592j.
[26] D. Gough and J. Thomas, “Systematic reviews of research in education: Aims, myths and multiple methods,” Review of Education, vol. 4, no. 1, pp. 84–102, 2016, doi: 10/gg57hx.
[27] J. M. Corbin and A. L. Strauss, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 3rd ed. Los Angeles, CA: Sage Publications, 2008.
[28] D. R. Garrison, T. Anderson, and W. Archer, “Critical inquiry in a text-based environment: Computer conferencing in higher education,” The Internet and Higher Education, vol. 2, no. 2, pp. 87–105, 2000, doi: 10/bxnpwj.
[29] D. R. Garrison, M. Cleveland-Innes, and T. S. Fung, “Exploring causal relationships among cognitive, social and teaching presence: Student perceptions of the community of inquiry framework,” The Internet and Higher Education, vol. 13, no. 1–2, pp. 31–36, 2010, doi: 10/bm4xmk.
[30] M. Henderson and M. Phillips, “Video-based feedback on student assessment: Scarily personal,” Australasian Journal of Educational Technology, vol. 31, no. 1, pp. 51–66, Jan. 2015, doi: 10/ghsgd2.
[31] N. S. Moore and M. Filling, “iFeedback: Using video technology for improving student writing,” Journal of College Literacy & Learning, vol. 38, pp. 3–14, Jan. 2012. [Online]. Available: https://j-cll.org/volume-38-2012.
[32] I. Elola and A. Oskoz, “Supporting second language writing using multimodal feedback,” Foreign Language Annals, vol. 49, no. 1, pp. 58–74, Feb. 2016, doi: 10/gg57f5.
[33] D. R. Garrison and J. B. Arbaugh, “Researching the community of inquiry framework: Review, issues, and future directions,” The Internet and Higher Education, vol. 10, no. 3, pp. 157–172, 2007, doi: 10/fq3w8s.
[34] D. R. Garrison, T. Anderson, and W. Archer, “The first decade of the community of inquiry framework: A retrospective,” The Internet and Higher Education, vol. 13, no. 1, pp. 5–9, 2010, doi: 10/cgsxxt.
[35] T. F. Bahula and R. H. Kay, “Exploring student perceptions of video feedback: A review of the literature,” in ICERI2020 Proceedings, Nov. 2020, pp. 6535–6544, doi: 10/ghs38b.
[36] R. A. Thomas, R. E. West, and J. Borup, “An analysis of instructor social presence in online text and asynchronous video feedback comments,” The Internet and Higher Education, vol. 33, pp. 61–73, Apr. 2017, doi: 10/f96nbn.
[37] J. Borup, R. E. West, and R. A. Thomas, “The impact of text versus video communication on instructor feedback in blended courses,” Educational Technology Research and Development, vol. 63, no. 2, pp. 161–184, Feb. 2015, doi: 10/f65vp5.
[38] K. J. Cunningham, “APPRAISAL as a framework for understanding multimodal electronic feedback: Positioning and purpose in screencast video and text feedback in ESL writing,” Writing & Pedagogy, vol. 9, no. 3, pp. 457–485, 2017, doi: 10/gf9rfh.
[39] K. J. Cunningham, “How language choices in feedback change with technology: Engagement in text and screencast feedback on ESL writing,” Computers & Education, vol. 135, pp. 91–99, Jul. 2019, doi: 10/gf9mk4.