Journal of Educational Informatics (2022), 3(1), 3-20.
Exploring Instructor Perceptions of Using Video-Based Feedback:
A Review of the Literature
TIMOTHY BAHULA AND ROBIN KAY
University of Ontario Institute of Technology
timothy.bahula@ontariotechu.net
robin.kay@ontariotechu.ca
The use of video feedback in face-to-face, blended, and online learning classes has
increased markedly since 2014. However, the use of this form of feedback is not well
understood. In this study, we conducted a systematic literature review of how higher
education instructors perceive video-based feedback. We analyzed 39 peer-reviewed
articles from 2009 to 2019 and identified four themes related to creating videos, the
quality of feedback, connecting with students, and sustaining the practice of offering
video-based feedback. Overall, most instructors found video feedback relatively easy and
time-efficient to create. However, some instructors faced
specific challenges related to recording, unwieldy software tools, and feeling anxious
when creating videos. Instructors also noted that videos provided more detailed, higher-
quality feedback. Additionally, instructors remarked that video feedback increased
personal connections with their students. Finally, research on the long-term
sustainability of providing video-based feedback was mixed.
Keywords: video feedback, feedback, assessment, higher education, systematic review
INTRODUCTION
High-quality feedback, essential for learning, involves communicating gaps between
desired outcomes and students' performance (Carless, 2006). Effective feedback needs to move
from providing information to motivating students to action (Mahoney et al., 2018). Hattie and
Timperley (2007) synthesized over 500 meta-analyses and identified feedback as a critical
determinant of student achievement. The review also suggested that feedback quality and
impact varied significantly (Hattie & Timperley, 2007). To understand the effect of this variability,
educators and researchers need to conduct a more detailed analysis of the quality and format of
feedback.
Before the arrival of computer technology, instructors provided handwritten feedback
(Sommers, 1989), regularly leading to student disappointment and discouragement (C. Glover &
Brown, 2006; Semke, 1984; Weaver, 2006). With the advent of computers, feedback morphed into
digital prose (Parkin et al., 2012; Ryan et al., 2019). Digital text, while easier to read (I. Glover et
al., 2015; Hepplestone et al., 2011; Price et al., 2010), did not address problems associated with
limited detail (Pitt & Norton, 2017), lack of pedagogical training (Richards et al., 2017), difficulty
students had connecting grades, feedback, and assessment criteria (I. Glover et al., 2015) or
negative emotions evoked from receiving feedback (Shields, 2015).
High-speed internet access, streaming services such as YouTube and Vimeo, and a growing
number of easy-to-use screencasting software tools have significantly increased video creation and
consumption (Henderson & Phillips, 2015). Accordingly, video-based feedback has emerged as a
viable format that can make feedback more detailed, personalized, and usable while addressing
specific challenges noted with handwritten and text-based formats (Ryan et al., 2019). Many
researchers have noted that higher education students prefer video over text-based feedback
(Cunningham, 2019; Ghosn-Chelala & Al-Chibani, 2018; Hall et al., 2016; Letón et al., 2018).
Students perceived video-based feedback to be more personal (Hall et al., 2016; Orlando, 2016),
detailed (Ghosn-Chelala & Al-Chibani, 2018; Mayhew, 2017), clear (Ali, 2016; Hall et al., 2016),
and engaging (Edwards et al., 2012; Thompson & Lee, 2012), and to stimulate higher-order thinking
(Lamey, 2015; Silva, 2012). Furthermore, video-based feedback improved revisions (Özkul &
Ortaçtepe, 2017), increased writing quality (Ali, 2016; Moore & Filling, 2012), and resulted in
higher grades (Alvira, 2016; Denton, 2014).
Although students embrace video-based feedback, we need to explore instructors'
perspectives. If instructors view video-based feedback as too difficult, too time-consuming to create,
or ineffective, they are unlikely to adopt this format. The purpose of the current study was to
investigate instructors' perceptions about creating and employing video-based feedback in higher
education.
DEFINITIONS
Text-Based Feedback
In this literature review, we operationally define text-based feedback as a method of
providing guidance, commentary, support, and assessment to students using handwritten or typed
text. Traditionally, instructors hand wrote symbols, codes, and comments using a red pen (Semke,
1984), also referred to as handwritten markup feedback. Digital markup is defined similarly but
with the feedback provided in digital form. The use of Microsoft Word or Google Docs comments
and the track changes feature is a common method used to create digital markup (Chang et al.,
2018).
Video-Based Feedback
We identified four types of video-based feedback in this review: video, screencast, pencast,
and VoiceThread. Video feedback is operationally defined as a type of video-based feedback
consisting of a video of an instructor talking that has been recorded with a camera. Typically, the
camera frames the head and shoulders of the instructor (Henderson & Phillips, 2015), thereby
leading to an alternative label of a talking head video (Mahoney et al., 2018).
Screencast feedback comprises a computer screen recording while an instructor narrates
and performs actions such as highlighting, formatting, text insertions, and deletions (Chang et al.,
2018). Screencasts sometimes include an embedded video of an instructor.
Pencast feedback is similar to screencast feedback. Rather than typing, drawing and writing
are recorded using a graphics tablet (e.g., Wacom or Bamboo) and digital pen (e.g., Livescribe)
while the instructor talks. Pencasts are commonly used in problem-solving scenarios such as those
seen in Khan Academy instructional videos.
Finally, VoiceThread feedback provides video-based feedback using the VoiceThread
platform. VoiceThread is a cloud application that allows users to upload media files (e.g.,
documents, presentations, and images) and other users to comment on these media files using text,
audio, or video. While VoiceThread could be used to provide only digital markup, it also offers
the possibility of delivering audio and video feedback by attaching clips to specific pieces of text.
METHOD
Overview
We conducted a systematic review of the literature on instructor video-based feedback
using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
framework (Liberati et al., 2009). The PRISMA approach, originating in the field of medicine,
focuses on screening, analyzing, and synthesizing peer-reviewed articles to produce a
comprehensive, reliable review (Gough & Thomas, 2016). In the screening phase, we established
basic eligibility criteria, tested search terms, reviewed abstracts for relevance, and scanned
reference lists of eligible articles for additional sources. We then conducted a full-text review of
prospective articles to confirm eligibility. Finally, we analyzed eligible articles through careful
reading, extracting characteristics, coding emergent themes, and describing methodologies.
Synthesis of results included aggregating quantitative data about the articles and configuring
qualitative results from the articles (Gough & Thomas, 2016).
Eligibility Criteria
Establishing the eligibility criteria occurs early in the PRISMA process. We used the
following eligibility criteria to assess articles for this systematic review: use of an audiovisual
medium to provide assessment feedback in educational contexts, participants studying or teaching
in higher education, and original research reported in an English-language, peer-reviewed journal
from 2009 to 2019. Conference papers, dissertations, and book chapters were excluded from this
review, as were studies focusing on video feedback to improve a skill (e.g., golf swing, surgical
technique, or teaching ability).
Information Sources
We used three primary information sources to search for articles. First, we searched the
following electronic databases: ProQuest's Summon Service, Education Source via
EBSCOhost, ERIC via EBSCOhost, LearnTechLib, PsycINFO, Academic Search Premier, and
Scholar Portal Journals. Each of these searches was restricted to higher education contexts.
Second, we examined 11 top educational journals based on h5-index and h5-median metrics for
education, educational technology, or online education as calculated by Google Scholar. These
journals included Computers & Education, British Journal of Educational Technology, Journal of
Computer Assisted Learning, Turkish Online Journal of Distance Education, Australasian Journal
of Educational Technology, The Internet and Higher Education, Canadian Journal of Learning and
Technology, Computers in Human Behavior, Journal of Educational Computing Research, Journal
of Educational Technology and Society, and Assessment & Evaluation in Higher Education.
Finally, we used Google Scholar and ResearchGate to locate recently published or in-press articles
and to reveal any missing articles using other sources.
Search Strategy
Our initial search strategy used the keywords "screencast feedback" AND "assessment"
and yielded six articles. We modified the search to "screencast feedback" OR "video feedback"
AND "assessment", increasing the number of articles to 346. After scanning the first ten articles,
we refined the search by excluding keywords that introduced different areas of focus,
specifically coaching, autism, intervention, medical education, and video analysis. We used this
final keyword phrasing for all databases. Finally, we employed a snowball strategy (Greenhalgh
& Peacock, 2005), scanning references for relevant articles.
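For illustration only, the following minimal sketch (in Python) approximates the keyword screen described above. The record structure, function name, and sample records are hypothetical assumptions; the include and exclude terms are taken from the search description, and the sketch is not a reproduction of the database queries actually used.

```python
# Illustrative sketch of the keyword screening logic (not the authors' actual tooling).
INCLUDE_ANY = ["screencast feedback", "video feedback"]   # at least one must appear
REQUIRE_ALL = ["assessment"]                               # must also appear
EXCLUDE_ANY = ["coaching", "autism", "intervention",
               "medical education", "video analysis"]      # off-topic areas excluded

def passes_screen(record):
    """Return True if a candidate article record passes the keyword screen."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    return (any(term in text for term in INCLUDE_ANY)
            and all(term in text for term in REQUIRE_ALL)
            and not any(term in text for term in EXCLUDE_ANY))

# Hypothetical candidate records for demonstration only
candidates = [
    {"title": "Screencast feedback on written assessment in higher education", "abstract": ""},
    {"title": "Video analysis of coaching interventions", "abstract": ""},
]
screened = [r for r in candidates if passes_screen(r)]  # keeps only the first record
```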
Final Article Selection
We selected articles from the search procedures by scanning titles, abstracts, and, when
available, keywords and abstract excerpts. For articles that seemed potentially relevant, we
scanned the contents of the whole article based on established eligibility criteria to assess inclusion
in the review. The PRISMA framework produced 39 peer-reviewed articles.
Data Analysis
To provide an overall context for this review, we analyzed all articles based on key
descriptors, including the year of publication, country, academic level, academic discipline,
assessment type, media used, and feedback length. Next, we identified emerging themes by closely
reading the results and discussion sections and noting key findings. We used open coding on a
convenience sample of five current, relevant, high-quality, empirical articles selected early in the
analysis process. We then employed a constant comparative method on the remaining articles to
review codes for consistency and alignment with emerging themes (Corbin & Strauss, 2008).
Overall Context
The 39 articles reviewed in this study were published from 2009 to 2019, with a majority
published since 2014. Most papers originated from the United States, the United Kingdom and
Australia and focused on higher education faculty in education, language learning, the humanities,
business or STEM. Instructors in these studies used video-based feedback to provide formative
and summative assessments. The principal media used to provide feedback included screencasts
and instructor videos. Video-based feedback recordings averaged seven minutes in length and
ranged from two to 26 minutes.
RESULTS AND DISCUSSION
Creation of Video-Based Feedback
Mathisen (2012) reported that the process of creating video-based feedback was intuitive
for instructors. Many studies reported that providing video-based feedback required less time than
providing written feedback (Denton, 2014; Gonzalez & Moore, 2018; Griffiths & Graham, 2010;
Henderson & Phillips, 2015; Lamey, 2015; Mathisen, 2012) or the same amount of time (Crook et
al., 2012; Jones et al., 2012; O'Malley, 2011; Schilling & Estell, 2014; Vincelette & Bostic, 2013;
West & Turner, 2016). Gonzalez & Moore (2018) found that relying solely on video feedback
reduced time spent on each submission compared to written comments. Henderson & Phillips
(2015) reported that creating videos took, on average, half the amount of time as annotating
submissions. In Mathisens study (2012), one instructor asserted that providing video-based
feedback took a quarter of the time usually spent providing feedback and was much less prone to
misunderstanding. In Jones et al.s study (2012), another instructor observed that video-based
feedback took no more time than text-based feedback but provided more feedback and saved time
in one-on-one meetings about the assignment.
Factors that improved efficiency in creating video-based feedback included one-take
recording (Anson, 2015; Hall et al., 2016; Lamey, 2015; Moore & Filling, 2012), familiarity with
the software (Hyde, 2013), and a conscious effort to be concise (Lamey, 2015; Mathisen, 2012).
While one-take recording improved efficiency, Anson (2015) cautioned that comment quality
might suffer if instructors were inexperienced. Hyde (2013) reported that the first few video-
feedback recordings took longer than providing text-based feedback. However, recordings became
much faster after becoming familiar with the software and annotating on screen. Both Lamey
(2015) and Mathisen (2012) found that the five-minute limit imposed by their software choice
forced them to be more concise when providing video-based feedback, yielding greater efficiency.
Mathisen (2012) initially took extensive notes and recorded a five-minute video but found the
grading process took longer than before. Reducing the target length to four minutes and eliminating
detailed note-taking resulted in more disciplined feedback and increased efficiency of feedback
creation (Mathisen, 2012). Beyond the efficiency of creating the feedback, video-based feedback
made face-to-face follow-up meetings more focused and efficient (Jones et al., 2012; Robinson et
al., 2015). The rich content of the screencasts that students received encouraged them to think of
meaningful questions and prepare for in-person meetings (Robinson et al., 2015).
Not all instructors found video feedback easy or efficient to create (Borup et al., 2015;
Mathieson, 2012). Some key problem areas included finding a quiet recording environment (Borup
et al., 2015), excessive attention to attire, grooming, and surroundings (Lamey, 2015), editing
video and screencast comments (Borup et al., 2015), performance anxiety (Parton et al., 2010;
Soden, 2017; Vincelette & Bostic, 2013), the type of software used to record videos (Silva, 2012;
Soden, 2016), and the distribution of video-based feedback (Borup et al., 2015). A quiet recording
environment was difficult to find for graduate students who did not have an office. Instructors who
recorded feedback at home occasionally did so at night when the house was quiet enough (Borup
et al., 2015). Conversely, text-based feedback was easier to provide wherever the instructor
happened to be, especially with a tablet (Borup et al., 2015). Lamey (2015) observed that getting
fully dressed or shaving was never a consideration when writing marginal comments but became
much more important when recording video feedback. Instructors in the study of Borup et al. (2015) reported that the difficulty
of editing video caused them to rerecord after making an error, sometimes reducing the number of
comments included when rerecording.
Quantity and Quality of Video-Based Feedback
Research on the total quantity of video-based feedback was positive. Several studies
reported that instructors were more verbose when providing video-based feedback than when using
text-based feedback. Denton (2014) noted that one minute of video created in that study contained
on average 135 words, compared to an average typing speed of 30 words per minute. As a result,
Denton (2014) concluded that instructors could provide more feedback in less time when using
video. Further, an analysis of feedback artifacts in four studies showed that instructors provided
two to five times as many words in video-based feedback compared to text-based feedback (Borup
et al., 2015; Elola & Oskoz, 2016; Henderson & Phillips, 2015; Thomas et al., 2017). In the study
by Henderson & Phillips (2015), instructors provided feedback in five-minute videos, whereas
text-based feedback on similar assignments, when read aloud, took less than one minute. The
analysis of Borup et al. (2015) found significant differences (p < .01) in the average word count of
video feedback compared to text feedback, with more than triple the number of words in a video on
one assignment and nearly double on two others. Elola and Oskoz's (2016) analysis indicated that
instructors providing screencast feedback used more words, addressed more topics, and did so in
greater detail. Likewise, Thomas et al. (2017) reported an average count of 190 words for video-
based feedback compared with 103 words for text-based feedback. Five other studies provided
interview and focus-group data that supported the conclusion that video-based feedback was more
plentiful in terms of words provided (Hyde, 2013; Jones et al., 2012; Moore & Filling, 2012;
Orlando, 2016; West & Turner, 2016). In another study, instructors reported that speaking was
faster than writing and that they would not take the time to write out the comments they provided
in their videos (Vincelette & Bostic, 2013).
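Denton's (2014) figures above imply a simple ratio of feedback volume per minute of instructor effort:

\[
\frac{135~\text{words/min (recorded speech)}}{30~\text{words/min (typed text)}} = 4.5
\]

That is, each minute spent speaking yields roughly four and a half times as many words as a minute spent typing, which is consistent with the two- to five-fold word-count differences reported in the feedback-artifact analyses above.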
Research on the overall quality of video-based feedback was also positive. Some
instructors perceived video-based feedback to be more in-depth (Harper et al., 2015; Hyde, 2013).
The multimedia nature of screencast feedback yielded cognitive and meta-cognitive benefits
(Harper et al., 2015). Rather than expecting students to figure out typed comments or codes
independently, instructors were able to provide an in-depth recorded audio explanation synced to
on-screen visuals (Harper et al., 2015). The digital multimedia environment allowed an instructor
to include course documents and external resources in the screencast (Séror, 2012). This "show
and tell" approach resulted in the perception of fuller explanations of problems as well as possible
solutions (Séror, 2012).
In several studies, instructors perceived that the contents of video-based feedback
addressed higher-order thinking (Henderson & Phillips, 2015; Lamey, 2015; Orlando, 2016;
Vincelette & Bostic, 2013). Video-based feedback, for example, focused more on big-picture ideas
(Henderson & Phillips, 2015; Orlando, 2016; Vincelette & Bostic, 2013) as opposed to identifying
mechanical issues such as grammar and spelling (Lamey, 2015). Henderson & Phillips (2015)
reported on a blog entry by an instructor who wrote that providing video feedback felt more like
being a teacher than an editor, resulting in increased enjoyment of marking. Lamey (2015) noted
that recording video resulted in student-centred, substantive comments, making the experience
more like an in-person feedback conference. An instructor interviewed by Vincelette & Bostic
(2013) said the video feedback encouraged her to comment on higher-level concepts that she would
not usually take the time to type, resulting in more in-depth comments on the submission content.
One study, by contrast, reported that video-based feedback contained more small talk than text-based
feedback (Thomas et al., 2017).
Connecting with Students
Instructors felt video-based feedback was more personal than text-based feedback and
fostered a greater connection to students (Borup et al., 2014; Jones et al., 2012; Mathisen, 2012;
Orlando, 2016; Séror, 2012). When interviewed, instructors in the study of Mathisen (2012)
indicated that they became closer to students by providing screencast feedback, and the method
made them more personal and encouraging. Perhaps the sense of cohesion resulted from deliberate
strategies such as addressing students by name and acknowledging their personalities (Borup et
al., 2014). However, the audio and video of video feedback added a personal touch that amplified
the strategies, helping to form connections (Borup et al., 2014). Instructors perceived even
incidental visual details such as items on a bookshelf in the video background to help connect with
students (Borup et al., 2014). Instructors also thought that their ability to express emotion was
improved because students could hear the tone of their voice (Borup et al., 2014; Harper et al.,
2015; Parton et al., 2010; Séror, 2012). Séror (2012) commented that recorded feedback added
emotional colour, helping to convey the authentic sincerity of praise and genuine confusion over a
misunderstanding. Furthermore, instructors considered video-based feedback to be more
interactive and engaging than text-based feedback, thereby eliciting stronger feelings of
connection (Borup et al., 2014; Jones et al., 2012; Séror, 2012).
Sustainability of Video-Based Feedback
Several studies reported mixed evidence regarding the sustainability of producing video
feedback (Harper et al., 2015; Henderson & Phillips, 2015; Soden, 2016; West & Turner, 2016).
Henderson & Phillips (2015) thought the provision of video-based feedback was as fast and easy
as text-based feedback and could be a sustainable practice. However, this efficiency depended on
taking no notes before recording. They also discouraged using screencast feedback because it might
shift focus to minute details of the student's submission and become too time-consuming (Henderson
& Phillips, 2015). A focus group discussion indicated that instructors viewed providing video feedback
positively and considered it a potentially sustainable practice. However, no follow-up was reported to indicate
the extent to which those instructors used video feedback after the study (West & Turner, 2016).
Soden (2017) reported that four out of six instructors tried using video-based feedback in the
previous two years but had not continued using it. Interview data indicated that a significant barrier
to the continued use of screencast feedback was failing to perceive its relative advantage (Soden,
2017).
On the other hand, in a study of nine foreign-language faculty members, a follow-up survey
indicated that three of five responding faculty continued to provide video-based feedback one year
after the intervention (Harper et al., 2015). Two instructors who did not continue were uncertain
about institutional policies about the method but remained optimistic about the approach and were
open to using it again (Harper et al., 2015). The instructors valued the advantage of video feedback
providing extended explanations (Harper et al., 2015).
SUMMARY AND PRACTICAL IMPLICATIONS
Four key issues surfaced in this systematic review of the literature on instructors' perspectives
on using video-based feedback in higher education: creation, quality, connection, and
sustainability. Many, but not all, studies suggested that video-based feedback was relatively easy
and time-efficient to create. However, several potential barriers to creating video-based feedback
were identified based on the recording environment, editing, performance, and approach to
presenting content. These barriers indicate that the design and creation process is detailed,
somewhat complicated, and needs to be supported with sufficient guidance and training to be
successful. In addition, the research suggests that video feedback is superior in terms of both
quantity and quality. This type of feedback provides more detail, depth, and interactivity.
Numerous studies also reported that video-based feedback created stronger personal and cognitive
connections with students. It is somewhat puzzling, then, given the clear relative advantage of
video-based feedback (efficiency, quality, and connection), that more studies did not support the
sustainability of this approach. More research is needed to examine why instructors might choose
to discontinue providing video-based feedback.
REFERENCES
Ali, A. D. (2016). Effectiveness of using screencast feedback on EFL students’ writing and perception. English Language Teaching, 9(8), 106–121. https://doi.org/10/gg57f4
Alvira, R. (2016). The impact of oral and written feedback on EFL writers with the use of screencasts. Profile: Issues in Teachers’ Professional Development, 18(2), 79–92. https://doi.org/10/gg57jb
Anson, I. G. (2015). Assessment feedback using screencapture technology in political science. Journal of Political Science Education, 11(4), 375–390. https://doi.org/10/gg57fw
Armağan, S., Bozoğlu, O., Güven, E., & Çelik, K. (2016). Usage of video feedback in the course of writing in EFL: Challenges and advantages. International Journal of Sciences: Basic and Applied Research, 30(2), 95–102. http://gssrr.org/index.php?journal=JournalOfBasicAndApplied&page=article&op=view&path%5B%5D=6459
Bissell, L. C. (2017). Screen-casting as a technology-enhanced feedback mode. Journal of Perspectives in Applied Academic Practice, 5(1), 4–12. https://doi.org/10/gg57fz
Borup, J., West, R. E., & Thomas, R. A. (2015). The impact of text versus video communication on instructor feedback in blended courses. Educational Technology Research and Development, 63(2), 161–184. https://doi.org/10/f65vp5
Borup, J., West, R. E., Thomas, R. A., & Graham, C. R. (2014). Examining the impact of video feedback on instructor social presence in blended courses. The International Review of Research in Open and Distributed Learning, 15(3), 232–256. https://doi.org/10/gg57h2
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219–233. https://doi.org/10/cktfdf
Chang, C., Cunningham, K. J., Satar, H. M., & Strobl, C. (2018). Electronic feedback on second language writing: A retrospective and prospective essay on multimodality. Writing & Pedagogy, 9(3), 405–428. https://doi.org/10/gg57h5
Corbin, J. M., & Strauss, A. L. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Sage Publications, Inc.
Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez, S., & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386–396. https://doi.org/10/bfz77t
Cunningham, K. J. (2019). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52, 222–241. https://doi.org/10/gg57fv
Denton, D. W. (2014). Using screen capture feedback to improve academic performance. TechTrends, 58(6), 51–56. https://doi.org/10/gg57ft
Edwards, K., Dujardin, A.-F., & Williams, N. (2012). Screencast feedback for essays on a distance learning MA in professional communication. Journal of Academic Writing, 2(1), 95–126. https://doi.org/10/gg57hp
Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58–74. https://doi.org/10/gg57f5
Ghosn-Chelala, M., & Al-Chibani, W. (2018). Screencasting: Supportive feedback for EFL remedial writing students. The International Journal of Information and Learning Technology, 35(3), 146–159. https://doi.org/10/gg57hb
Glover, C., & Brown, E. (2006). Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Bioscience Education, 7(1), 1–16. https://doi.org/10/gg57bp
Glover, I., Parkin, H. J., Hepplestone, S., Irwin, B., & Rodger, H. (2015). Making connections: Technological interventions to support students in using, and tutors in creating, assessment feedback. Research in Learning Technology, 23(1), 27078. https://doi.org/10/ghsgdz
Gonzalez, M., & Moore, N. S. (2018). Supporting graduate student writers with VoiceThread. Journal of Educational Technology Systems, 46(4), 485–504. https://doi.org/10/gg57hf
Gough, D., & Thomas, J. (2016). Systematic reviews of research in education: Aims, myths and multiple methods. Review of Education, 4(1), 84–102. https://doi.org/10/gg57hx
Greenhalgh, T., & Peacock, R. (2005). Effectiveness and efficiency of search methods in systematic reviews of complex evidence: Audit of primary sources. BMJ, 331, 1064–1065. https://doi.org/10/dcxh5h
Griffiths, M. E., & Graham, C. R. (2009). Using asynchronous video in online classes: Results from a pilot study. International Journal of Instructional Technology & Distance Learning, 6(3), 65–76. http://www.itdl.org/Journal/Mar_09/Mar_09.pdf#page=69
Griffiths, M. E., & Graham, C. R. (2010). Using asynchronous video to achieve instructor immediacy and closeness in online classes: Experiences from three cases. International Journal on E-Learning, 9(3), 325–340. https://learntechlib.org/primary/p/30315/
Hall, T., Tracy, D., & Lamey, A. (2016). Exploring video feedback in philosophy: Benefits for instructors and students. Teaching Philosophy, 39(2), 137–162. https://doi.org/10/f3p8fc
Harper, F., Green, H., & Fernandez-Toro, M. (2015). Using screencasts in the teaching of modern languages: Investigating the use of Jing® in feedback on written assignments. The Language Learning Journal, 46(3), 277–292. https://doi.org/10/gg57f2
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10/bf4d36
Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51–66. https://doi.org/10/ghsgd2
Hepplestone, S., Holden, G., Irwin, B., Parkin, H. J., & Thorpe, L. (2011). Using technology to encourage student engagement with feedback: A literature review. Research in Learning Technology, 19(2), 117–127. https://doi.org/10/fx6rbz
Hope, S. A. (2011). Making movies: The next big thing in feedback? Bioscience Education, 18(1), 1–14. https://doi.org/10/crrfhv
Hyde, E. (2013). Talking results: Trialing an audiovisual feedback method for e-submissions. Innovative Practice in Higher Education, 1(3). http://journals.staffs.ac.uk/index.php/ipihe/article/view/37
Jones, N., Georghiades, P., & Gunson, J. (2012). Student feedback via screen capture digital video: Stimulating student’s modified action. Higher Education, 64(5), 593–607. https://doi.org/10/gg57f8
Lamey, A. (2015). Video feedback in philosophy. Metaphilosophy, 46(4–5), 691–702. https://doi.org/10/gg57f6
Letón, E., Molanes‐López, E. M., Luque, M., & Conejo, R. (2018). Video podcast and illustrated text feedback in a web‐based formative assessment environment. Computer Applications in Engineering Education, 26(2), 187–202. https://doi.org/10/gg57hw
Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P. A., Clarke, M., Devereaux, P. J., Kleijnen, J., & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLOS Medicine, 6(7), 1–28. https://doi.org/10/cw592j
Mahoney, P., Macfarlane, S., & Ajjawi, R. (2018). A qualitative synthesis of video feedback in higher education. Teaching in Higher Education, 24(2), 1–23. https://doi.org/10/gg57h6
Mathieson, K. (2012). Exploring student perceptions of audiovisual feedback via screencasting in online courses. American Journal of Distance Education, 26(3), 143–156. https://doi.org/10/gg57f7
Mathisen, P. (2012). Video feedback in higher education: A contribution to improving the quality of written feedback. Nordic Journal of Digital Literacy, 7(2), 97–113. https://www.idunn.no/dk/2012/02. https://doi.org/10/gm4vw7
Mayhew, E. (2017). Playback feedback: The impact of screen-captured video feedback on student satisfaction, learning and attainment. European Political Science, 16(2), 179–192. https://doi.org/10/gg57hk
McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153–169. http://www.learntechlib.org/p/161352
Moore, N. S., & Filling, M. (2012). iFeedback: Using video technology for improving student writing. Journal of College Literacy & Learning, 38, 3–14. https://j-cll.org/volume-38-2012
O’Malley, P. J. (2011). Combining screencasting and a tablet PC to deliver personalised student feedback. New Directions in the Teaching of Physical Sciences, 7, 27–30. https://doi.org/10/gg57fx
Orlando, J. (2016). A comparison of text, voice, and screencasting feedback to online students. American Journal of Distance Education, 30(3), 156–166. https://doi.org/10/gg57hn
Özkul, S., & Ortaçtepe, D. (2017). The use of video feedback in teaching process‐approach EFL writing. TESOL Journal, 8(4), 862–877. https://doi.org/10/ghfcrq
Parkin, H. J., Hepplestone, S., Holden, G., Irwin, B., & Thorpe, L. (2012). A role for technology in enhancing students’ engagement with feedback. Assessment & Evaluation in Higher Education, 37(8), 963–973. https://doi.org/10/d8njhq
Parton, B. S., Crain-Dorough, M., & Hancock, R. (2010). Using flip camcorders to create video feedback: Is it realistic for professors and beneficial to students? International Journal of Instructional Technology & Distance Learning, 7(1), 15–23. http://www.itdl.org/Journal/Jan_10/article02.htm
Pitt, E., & Norton, L. (2017). ‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in Higher Education, 42(4), 499–516. https://doi.org/10/gdqbvq
Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289. https://doi.org/10/drrnc3
Richards, K., Bell, T., & Dwyer, A. (2017). Training sessional academic staff to provide quality feedback on university students’ assessment: Lessons from a faculty of law learning and teaching project. The Journal of Continuing Higher Education, 65(1), 25–34. https://doi.org/10/gg57fr
Robinson, S., Centifanti, L., Brewer, G., & Holyoak, L. (2015). The benefits of delivering formative feedback via video-casts. UCLan Journal of Pedagogic Research, 6(1). http://pops.uclan.ac.uk/index.php/ujpr/article/view/326/0
Ryan, T., Henderson, M., & Phillips, M. (2019). Feedback modes matter: Comparing student perceptions of digital and non‐digital feedback modes in higher education. British Journal of Educational Technology, 50(3), 1507–1523. https://doi.org/10/gg57hg
Schilling, W., & Estell, J. K. (2014). Enhancing student comprehension with video grading. Computers in Education Journal, 5(1), 28–39. http://asee-coed.org/index.php/coed/article/view/Schilling_Enhancing
Semke, H. D. (1984). Effects of the red pen. Foreign Language Annals, 17(3), 195–202. https://doi.org/10/fnqggc
Séror, J. (2012). Show me! Enhanced feedback through screencasting technology. TESL Canada Journal, 30(1), 104–116. https://doi.org/10/gg57hs
Shields, S. (2015). ‘My work is bleeding’: Exploring students’ emotional responses to first-year assignment feedback. Teaching in Higher Education, 20(6), 614–624. https://doi.org/10/gf9k57
Silva, M. L. (2012). Camtasia in the classroom: Student attitudes and preferences for video commentary or Microsoft Word comments during the revision process. Computers and Composition, 29(1), 1–22. https://doi.org/10/gcpgt7
Soden, B. (2016). Combining screencast and written feedback to improve the assignment writing of TESOL taught master’s students. The European Journal of Applied Linguistics and TEFL, 5(1), 213–236. http://www.theeuropeanjournal.eu/download/EJALTEFL_01_2016.pdf
Soden, B. (2017). The case of screencast feedback: Barriers to the use of learning technology. Innovative Practice in Higher Education, 3(1), 1–21. http://journals.staffs.ac.uk/index.php/ipihe/article/view/109
Sommers, J. (1989). The effects of tape-recorded commentary on student revision: A case study. Journal of Teaching Writing, 8(2), 49–76. https://journals.iupui.edu/index.php/teachingwriting/article/view/1012
Sommers, J. (2013). Response 2.0: Commentary on student writing for the new millennium. Journal of College Literacy & Learning, 39, 21–37. https://j-cll.org/volume-39-2013
Thomas, R. A., West, R. E., & Borup, J. (2017). An analysis of instructor social presence in online text and asynchronous video feedback comments. The Internet and Higher Education, 33, 61–73. https://doi.org/10/f96nbn
Thompson, R., & Lee, M. J. (2012). Talking with students through screencasting: Experimentations with video feedback to improve student learning. The Journal of Interactive Technology and Pedagogy, 1(1). https://jitp.commons.gc.cuny.edu/2012/02/17/
Turner, W., & West, J. (2013). Assessment for “digital first language” speakers: Online video assessment and feedback in higher education. International Journal of Teaching & Learning in Higher Education, 25(3), 288–296. http://www.isetl.org/ijtlhe/past2.cfm?v=25&i=3
Vincelette, E. J., & Bostic, T. (2013). Show and tell: Student and instructor perceptions of screencast assessment. Assessing Writing, 18(4), 257–277. https://doi.org/10/gcpgkz
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379–394. https://doi.org/10/cjknpn
West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 53(4), 400–410. https://doi.org/10/gg57h4
Appendix A Coding of Studies
Each entry below lists the data item, its description, and the values used for coding.

Author(s): The author(s) who wrote the article. Values: first author (and second, if there are only two); see References for additional authors.
Year: The year the article was published. Values: year.
Country: The country in which the study was conducted. Values: three-letter country code.
Population: The academic level of the population being studied. Values: 4 = Undergraduate; 5 = Graduate; 6 = Pre-service teachers.
Discipline: The academic discipline in which the study was conducted. Values: 1 = Business; 2 = Education; 3 = Humanities; 4 = Language learning; 5 = Multidisciplinary; 6 = Social sciences; 7 = STEM; 8 = Other.
Instructional mode: The instructional mode of the course in which the feedback was provided. Values: 1 = Classroom; 2 = Blended; 3 = Asynchronous online.
Assessment type: The purpose of the assessment (and of the feedback provided). Values: 1 = Formative; 2 = Summative.
Feedback recipient: The person(s) toward whom the feedback was directed, that is, how specific or generic the feedback was. Values: 1 = Individual; 2 = Group.
Feedback media 1, 2, 3: The feedback media that were explicitly studied. Feedback media 1 is the data item for the first condition; Feedback media 2 and 3 are the data items for any comparison conditions. Values: 0 = Face-to-face; 1 = Video; 2 = Screencast; 3 = Pencast; 4 = VoiceThread; 5 = Audio; 6 = Digital markup; 7 = Handwritten markup; 8 = Rubric.
Feedback length: The length in minutes of the video-based feedback provided. Values: number of minutes.
Capture method: The method used to capture the feedback. Values: 1 = Adobe Connect; 2 = CamStudio; 3 = Camtasia; 4 = Camtasia Relay; 5 = Camtasia Studio; 6 = Canvas; 7 = Flip; 8 = iMovie; 9 = iPhone; 10 = Jing; 11 = Logitech; 12 = Multiple; 13 = Photo Booth; 14 = QuickTime Player; 15 = Screencast-O-Matic; 16 = SnagIt; 17 = VoiceThread; 18 = Windows Media Encoder.
Research method: The research method(s) used in the study. Values: 1 = Qualitative; 2 = Observation; 3 = Questionnaire.
Reliab: Whether the reliability of quantitative statistics in the study is discussed. Values: 0 = No; 1 = Yes.
Valid: Whether the validity of quantitative statistics in the study is discussed. Values: 0 = No; 1 = Yes.
Qual: What level of checks were used on qualitative data. Values: 0 = None; 1 = One; 2 = Multiple.
Student sample: The number of students who participated in the study by completing a survey, focus group, interview, etc. Values: number of students.
Instructor sample: The number of instructors who participated in the study by completing a survey, focus group, interview, self-reflection, etc. Values: number of instructors.
Sample desc: The completeness of the sample description. Values: 0 = No description; 1 = Partial description; 2 = Complete description.
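To illustrate how the scheme reads in practice, the record below shows one way a single study could be coded. It is a hypothetical example: the values are invented and do not correspond to any row in Appendix B, and the field names are assumptions rather than the authors' own data format.

```python
# Hypothetical coded record following the Appendix A scheme (illustrative values only).
coded_study = {
    "authors": "Example, A., & Author, B.",
    "year": 2016,
    "country": "USA",               # three-letter country code
    "population": 4,                # 4 = Undergraduate
    "discipline": 2,                # 2 = Education
    "instructional_mode": 3,        # 3 = Asynchronous online
    "assessment_type": 1,           # 1 = Formative
    "feedback_recipient": 1,        # 1 = Individual
    "feedback_media": [2, 6],       # 2 = Screencast compared with 6 = Digital markup
    "feedback_length_minutes": 5,
    "capture_method": 10,           # 10 = Jing
    "research_method": [1, 3],      # 1 = Qualitative, 3 = Questionnaire
    "reliability_discussed": 0,     # 0 = No
    "validity_discussed": 0,        # 0 = No
    "qualitative_checks": 1,        # 1 = One level of checks
    "student_sample": 40,
    "instructor_sample": 2,
    "sample_description": 1,        # 1 = Partial description
}
```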
Appendix B Table of Coded Articles, Part 1
Author(s)
Country
Population
Discipline
Instructional
Mode
Assessment
Type
Feedback
Recipient
Feedback
Media 1
Feedback
Media 2
Feedback
Media 3
Feedback
Length
Capture
method
Anson
USA
4
6
1
1 (?)
1
2
6
4
10
Armağan et
al.
TUR
4
4
1
1
1
2
Bissell
GBR
4
3
1
2
1
2
10
15
Borup et al.
USA
6
2
2
1
1
1
6
6
Borup et al.
USA
6
2
2
1 + 2
1
1
6
6
Crook et al.
GBR
4 + 5
5
1
1
2
2
1
3
2 + 8
Cunningham
USA
4
4
1
1
1
2
6
4-11
16
Denton
USA
4
2
1 (?)
1
1
2 + 8
3.17
Edwards et
al.
GBR
5
1
3
2
1
2 + 8
6 + 8
5
10
Elola and
Oskoz
USA
4
4
1
1
1
2
6
15
15
Gonzalez
and Moore
USA
5
2
2
1
1
4
17
Griffiths and
Graham
USA
6
2
3
1
1
Griffiths and
Graham
USA
6
2
3
1
1
1
12
Hall et al.
USA
4
3
1
1
1
1
7
5
13
Harper et al.
GBR
4
4
3 (?)
2 (?)
1
2 + 6
6
5
10
Henderson
and Phillips
AUS
4 + 5
2
2
2
1
1
5
9 + 11
Hope
GBR
4
7
1 (?)
2
1
2
6
5-13
10
Hyde
GBR
4
8
1
2
1
2
5
10
Jones et al.
GBR
4 + 5
1
1
1 (?)
1
2
7
6
18
Lamey
USA
4
3
1 (?)
2 (?)
1
1
7
4
13
Mathieson
USA
5
8
3
2
1
2 + 6
6
5
10
Mathisen
NOR
4
5
1 (?)
1 + 2
1 + 2
2 + 6
5
10
Mayhew
GBR
4
6
1
2
1
1 + 2
4-10
5
McCarthy
AUS
4
3
1
2
1
2
5
6 + 8
4
5
Moore and
Filling
USA
4
3
1
1 + 2
1
1 OR 2
6 (?) + 8
5-15
8 + 14
O’Malley
GBR
5
7
1 (?)
1
1
3
5-10
5
Orlando
USA
5
3 (?)
1
2
5
6
15
Parton et al.
USA
5
2
2
1 + 2
1
1
1 + 7
7
5
7
Robinson et
al.
GBR
4
6
1
1
1
2 + 6
6
0
10-20
1
Schilling and
Estell
USA
4
7
2
2 (?)
1 + 2
2 + 8
3-22
Séror
CAN
4
4
1 (?)
1
1
2 + 8
5
10
Silva
USA
4
7
2
1
1
2
6
7-14.5
3
Soden
GBR
5
2
1
1
1
2 + 6
6
5-6
15
Soden
GBR
4 + 5
5
1 (?)
1
1
2
2-20
12
Sommers
USA
4
3
1 (?)
1
2
5
10
Thomas et
al.
USA
6
2
2
2
1
1
6
Turner and
West
AUS
6
2
1 (?)
1
2
6-12
4
Vincelette
and Bostic
USA
4
3
1
2 (?)
1
2
5
10
West and
Turner
AUS
6
2
1
1
2 + 8
10-20
4
Appendix C Table of Coded Articles, Part 2
Author(s)
Research
Method
Reliability
Valid
Qual
Student
Sample
Instructor
Sample
Sample
Desc
Anson
3
0
0
0
95
3 (?)
0
Armağan et al.
1
0
40
3
1
Bissell
3
0
0
0
15
1
0
Borup et al.
1 + 3
1
1
2
130
10
2
Borup et al.
1 + 2 + 3
0
1
1
229
9
1
Crook et al.
3
0
0
0
314
27
2
Cunningham
1 + 2 + 3
0
0
0
12
1
1
Denton
2 + 3
1
1
36
1
1
Edwards et al.
3
1
0
0
14
1
0
Elola and Oskoz
2 + 3
1
0
0
4
1
1
Gonzalez and
Moore
1
0
18
2
0
Griffiths and
Graham
3
0
0
0
38
1
1
Griffiths and
Graham
2
0
0
1
3
1
Hall et al.
3
0
0
1
40
3
1
Harper et al.
1 + 2 + 3
0
0
1
54
9
0
Henderson and
Phillips
1 + 3
0
126
2
1
Hope
2 + 3
0
0
0
145
1
1
Hyde
1
0
10
1
0
Jones et al.
1 + 3
1
1
0
75
20
1
Lamey
1 + 3
0
0
0
74
1
0
Mathieson
3
0
1
0
13
1
1
Mathisen
1 + 3
0
1
0
120
6
1
Mayhew
1 + 2 + 3
0
0
0
50
1
1
McCarthy
1 + 3
0
0
0
77
1
2
Moore and Filling
1 + 2 + 3
0
0
1
45
2
2
O’Malley
3
0
0
0
1
0
Orlando
3
0
0
0
30
6
0
Parton et al.
1 + 3
0
0
0
12
1
0
Robinson et al.
3
0
0
0
18
2
0
Schilling and Estell
3
0
0
0
70
2
0
Séror
1
0
0
0
1
0
Silva
2 + 3
0
0
0
19
1
2
Soden
1 + 2
0
9
1
1
Soden
1
0
6
2
Sommers
2 + 3
0
0
0
97
1
0
Thomas et al.
2
1
0
1
167
6
1
Turner and West
3
0
0
0
59
2
1
Vincelette and
Bostic
1 + 3
1
1
1
39
5
2
West and Turner
3
0
0
1
299
8
1
Feedback by tutors to students on their written assignments is most frequently given via written comments and occasionally through a one-to-one tutorial. Recent developments in screen-casting technology have allowed students to receive feedback in MP4 video format where they are guided through their assignment visually and aurally by the tutor. This paper disseminates the pilot of screen-casting as a technology-enhanced feedback mode in a performing arts HEI. The aim of this limited but focussed research was to explore alternative ways of providing feedback to students and to engage with a range of approaches to learning and assessment. This paper reflects on the potential of screen-casting technology as a feedback mode for written work to provide extensive aural and visual feedback and suggests other possible applications in other areas of learning and teaching.