Originally published in: Wachtler, J., & Ebner, M. (2015). Impacts of Interactions in Learning-Videos: A Subjective and Objective Analysis. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2015 (pp. 1205-1213). Chesapeake, VA: AACE.
Impacts of Interactions in Learning-Videos:
A Subjective and Objective Analysis
Josef Wachtler
Institute for Information Systems and Computer Media
Graz University of Technology
Austria
josef.wachtler@tugraz.at
Martin Ebner
Social Learning - Computer and Information Services
Graz University of Technology
Austria
martin.ebner@tugraz.at
Abstract: Selective attention is known to be one of the most crucial resources for human learning (Heinze et al. 1994). For this reason a web-based information system was developed to support the attention of viewers of learning videos. It enriches the videos with different forms of interaction; among others, there are interactions presenting multiple-choice questions at predefined positions in the video as well as randomly occurring interactions displaying general questions that are not related to the content. To derive a basic plan for distributing the interactions across a video, the use of the information system in a lecture is analyzed: on the one hand, feedback of the users is collected; on the other hand, the results of the multiple-choice questions are analyzed objectively by evaluating their times of occurrence in the video.
Introduction
It is widely known that the mechanism of selective attention is one of the most crucial resources for human learning (Heinze et al. 1994). Supporting and managing this attention enhances both behavioral and neuronal performance (Spitzer et al. 1988). Furthermore, attention is heavily influenced by interaction as well as by communication in all forms and directions (Carr-Chellman & Duchastel 2000). This means that communication should use different methods, such as face-to-face or e-mail, and should not only flow from the lecturer to the students and vice versa: communication of the students with the content itself is required as well.
The importance of managing attention is further underlined by the fact that videos are mostly just consumed. On the one hand, the technical aspects of video changed dramatically over the last decades: early videos were presented with a projector, whereas today it is common to look up a video on the internet and watch it on different (mobile) devices. On the other hand, the changes to the medium itself were minor, so the maxim "television is easy and print is tough" (Salomon 1984) is still an accepted fact.
To address the mentioned problems regarding attention and the medium itself, a web-based information system was first introduced by Ebner et al. (2013). It uses different forms of interaction to support attention during a learning video (see Section Interactions in Learning Videos). It has been shown that the approach basically works if the number of interactions as well as their distribution across the video is well balanced. In this research work we address the following research question: What is the ideal distribution of interactions across a learning video to reach acceptable results for the multiple-choice questions? We approach this by analyzing the relation between these results and the times of occurrence of the interactions. In other words, this study tries to "find a basic recommendation of where to place multiple-choice questions in learning videos".
At first some related work is presented. After that, the information system responsible for placing interactions in videos is explained for a better understanding of the context of this study (see Section Interactions in Learning Videos). This is followed by the Section Subjective User Feedback as well as the Section Objective Evaluation, which present the results of the study by pointing out the observed impacts of the times of occurrence of the interactions on the results of the multiple-choice questions. Finally, these impacts are discussed in the Section Discussion and converted into a basic recommendation of how to place interactions in videos, before a conclusion sums up the main points of this work and gives an outlook on planned studies for validating the observed effects.
Related Work
The popular video platform YouTube offers some features for adding a piece of interactivity to videos, for example placing questions in a video. However, it lacks detailed mechanisms for analyzing the results.
One possibility to control and manage students' attention in standard classroom situations is the use of an Audience Response System (ARS). Such a system is used in lecture theatres to present questions to the students, who answer them using a handset or a mobile phone (Haintz et al. 2014). Furthermore, it offers some analysis functionalities (Tobin 2005).
The ability of an ARS to enhance both attention and participation has been shown by many studies (Ebner 2009). Stowell & Nelson (2007), for instance, compared an ARS to other standard classroom communication methods (e.g. hand-raising). Their study claims that an ARS reaches the highest formal participation. A similar observation was made by Freeman & Dobbie (2005).
Interactions in Learning Videos
To clarify the context of this study, this section provides an overview of the main functionalities of the web-based information system named LIVE (Live Interaction in Virtual learning Environments). As already mentioned, it offers learning videos to the students and enriches them with different methods of interaction as well as communication. Figure 1 shows a screenshot of LIVE displaying a video which is currently paused and overlaid by an interaction presenting a multiple-choice question. Furthermore, it can be seen that some control elements are located at the right side of the video to invoke interactions, for instance asking the lecturer a question (Wachtler & Ebner 2014a).
Figure 1: A multiple-choice question is shown during a video
As a general overview, the following list summarizes the most important features of LIVE (Wachtler & Ebner 2014a; Ebner et al. 2013):
• registered and authenticated users only
• different possibilities of analysis (Wachtler & Ebner 2014b)
o a detailed logging of watched timespans, pointing out which parts of the video a user watched at which time
o a calculation of an attention level measuring the students' attention
• some methods of interaction
o randomly displayed questions which are not content related
o asking questions to the lecturer
o sending text-based questions to the students
o multiple-choice questions at predefined positions
o reporting technical problems
The current study focuses on the multiple-choice questions in conjunction with the random questions. Because multiple-choice questions are displayed at predefined positions in the video, they have to be planned before the video is deployed. For that, LIVE offers a dialog to select a position in the video and to add a question there; the question is then displayed to the students at this point of the video (Wachtler & Ebner 2014a).
In comparison to the multiple-choice questions, the randomly occurring questions are very simple and not related to the content. They do not have to be planned because they occur automatically and randomly seven times per hour. This means that there are slots of 8:34 min. in length in each of which one such question is shown, so there can be a period of at most 17:08 min. without a random question (Wachtler & Ebner 2014a).
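These numbers follow directly from the seven-questions-per-hour rate; a minimal sketch making the slot arithmetic explicit (Python is used here for illustration only, LIVE's implementation is not shown in this paper):

```python
from datetime import timedelta

QUESTIONS_PER_HOUR = 7
slot = timedelta(seconds=3600 // QUESTIONS_PER_HOUR)  # length of one slot

# One random question falls somewhere inside each slot, so two consecutive
# questions can be almost two slot lengths apart: the first at the very start
# of its slot, the next at the very end of the following slot.
print(slot)      # 0:08:34 -> the slot length
print(2 * slot)  # 0:17:08 -> the maximum period without a random question
```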
Subjective User Feedback
In order to gain subjective feedback, LIVE was used in several lectures at Graz University of Technology. The topics of the lectures ranged from a large freshman course in computer science to a course in the last semester of the bachelor's program in computer science, as well as a course in adult education on cleanroom technology. The students of these courses were asked to report their experiences with LIVE and the interactions presented above.
The returned feedback mainly addresses the following two issues:
• First, it was pointed out that interactions presenting content-related questions are favored over those displaying general questions. This statement was often supported by the claim that general questions are disturbing because they seem useless, whereas questions related to the topic of the video are highly appreciated.
• The second reported issue concerns the distribution as well as the number of interactions in the video: a maximum of about ten interactions per hour was considered acceptable, and the interactions should be spread evenly across the video.
Objective Evaluation
In contrast to the subjective feedback of the previous section, this section evaluates the distribution of the interactions across learning videos objectively. For that, the results of the multiple-choice questions are analyzed in conjunction with the times of occurrence of the interactions. This is done for the first six videos of the lecture "Logic and Computability"[1], which cover propositional logic and predicate logic as well as the decidability of these logics.
Figure 2 shows the number of students who watched these six videos. It can be seen that, beginning with the third video, the number of students who watched most of the video (more than 75%) is quite high. The reason for the lower acceptance of the first two videos could be that especially the first one covers only organizational and very basic topics.
______________________________________
[1] http://www.iaik.tugraz.at/content/teaching/bachelor_courses/logik_und_berechenbarkeit/ (last accessed November 2014)
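The "more than 75%" share is based on LIVE's logged watched timespans. A minimal sketch of such a coverage computation, assuming the log can be exported as (start, end) intervals in seconds per user (the actual export format is not specified in this paper); re-watched parts are merged so they count only once:

```python
def coverage(watched, video_length_s):
    """Fraction of the video covered by the logged watched timespans.
    `watched` is a list of (start_s, end_s) intervals; overlapping spans
    from re-watching are merged so every second counts at most once."""
    total, cur_start, cur_end = 0, None, None
    for start, end in sorted(watched):
        if cur_end is None or start > cur_end:   # gap -> close current interval
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlap -> extend interval
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += cur_end - cur_start
    return total / video_length_s

# A user who watched 0-40 min. and re-watched 30-50 min. of a 60 min. video:
print(coverage([(0, 2400), (1800, 3000)], 3600))  # 0.833... -> above 75%
```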
As mentioned above, the evaluation of the distribution of the interactions and their impact on the multiple-choice questions analyzes the results for all six videos. This analysis shows a very similar outcome for each video. For that reason the analysis is presented only for the third video, which serves as a representative of the remaining five.
The number of students who returned an answer to the multiple-choice questions of the third video is shown in Figure 3. It is visible that this number starts decreasing at the fifth question. This is explained by the fact that not every student watched the full video (see Figure 2), so some simply never reached the later questions.
Figure 4 shows the results of the multiple-choice questions of the third video; the questions placed in the other videos delivered very similar results. The green part of each bar represents the users who answered the question correctly more often than falsely. In contrast, the orange part indicates those who answered falsely more often than correctly. Finally, the yellow part shows the number of users with equally many correct and false answers. It is necessary to count how often a question is answered correctly or falsely by each student because a video can be watched more than once, so a question can also be answered multiple times.
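This per-student balancing can be expressed compactly. A sketch, assuming the answers are available as (user, question, is_correct) records (a hypothetical export format, not LIVE's actual schema):

```python
from collections import Counter, defaultdict

def classify_answers(answers):
    """Count, per question, the users who answered correctly more often than
    falsely ("green"), falsely more often ("orange"), or equally often
    ("yellow"), as visualized in Figure 4."""
    balance = Counter()  # (user, question) -> correct answers minus false ones
    for user, question, is_correct in answers:
        balance[(user, question)] += 1 if is_correct else -1

    result = defaultdict(Counter)  # question -> Counter over the three colors
    for (user, question), b in balance.items():
        color = "green" if b > 0 else "orange" if b < 0 else "yellow"
        result[question][color] += 1
    return result

# A student who watched twice and answered a question once correctly and once
# falsely has a balance of zero and counts towards the yellow part.
```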
Figure 2: Number of users of the first six videos
Figure 3: Number of answers to the multiple-choice questions of the third video
Figure 4: Correctness of the answers to the multiple-choice questions of the third video
The following three issues could be seen for the third video as well as, in a very similar form, for the remaining five videos, if we consider the distribution of the questions as the influencing factor of the success rate. Furthermore, it is assumed that the questions are of equal difficulty.
• Lazy Start: The number of correct answers to the first question is not very high.
• Correct after Question Pause: Correct answers are numerous for the third question despite the fact that the timespan since the last question is quite long.
• Tight-Placed Errors: If questions are placed very tightly, the number of correct answers decreases. This effect is shown by questions three and four as well as by questions five to seven.
To find a basic explanation for these issues, Table 1 shows the multiple-choice questions and their times of occurrence in the video. Furthermore, it states the timespan since the last multiple-choice question as well as the maximum timespan since the last random interaction. This is necessary to evaluate the impact of the interactions on the results of the multiple-choice questions, assuming that the difficulty of the multiple-choice questions is more or less the same for each question.
No.   Time      Timespan since last MC question   Max. timespan since last random interaction
1     0:16:11   -                                 0:16:11
2     0:17:33   0:01:22                           0:08:59
3     0:39:26   0:21:53                           0:13:44
4     0:39:27   0:00:01                           0:13:45
5     1:01:04   0:21:37                           0:09:40
6     1:03:24   0:02:20                           0:12:00
7     1:05:26   0:02:02                           0:14:02
8     1:12:45   0:07:19                           0:12:47
9     1:14:50   0:02:05                           0:14:52
10    1:14:57   0:00:01                           0:14:53
Table 1: Occurrence of the multiple-choice questions of the third video in comparison
to the random interactions
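The last column can be reproduced from the slot model introduced above: one random question is guaranteed per 8:34 min. slot, so in the worst case the last one fired at the very beginning of the slot preceding the slot that contains the multiple-choice question. A sketch of this reading, which reproduces all values in Table 1 (the exact definition used for the table is not stated in the paper):

```python
SLOT = 8 * 60 + 34  # slot length in seconds (seven random questions per hour)

def max_gap_since_random(t_s):
    """Worst-case seconds since the last random interaction before time t_s:
    at worst, the last guaranteed random question occurred at the very start
    of the slot preceding the slot that contains t_s."""
    slot_index = t_s // SLOT
    return t_s - max(slot_index - 1, 0) * SLOT

# Question 2 occurs at 0:17:33 = 1053 s; Table 1 lists 0:08:59 for it.
assert max_gap_since_random(1053) == 8 * 60 + 59
```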
Regarding the issue named "Lazy Start", it has to be considered that this is the first multiple-choice question, so until its occurrence the students did not have to answer any content-related question. Furthermore, it can be seen that the last random interaction could also have happened long ago (at most 16:11 min.). So the timespan since the last interactive task might simply have been too long for the students.
This assumption is supported, in its opposite form, by the "Correct after Question Pause" issue. Although the last multiple-choice question occurred long ago (21:53 min.), the number of correct answers is quite high. As with the previous issue, this could be explained by the time of occurrence of the last random question: this timespan is smaller than above (at most 13:44 min.), which could be a reason for the good outcome of this question despite its difficulty.
The last issue ("Tight-Placed Errors") addresses the observation that the number of correct answers decreases if the questions are placed within a short period. This effect can be recognized for questions three and four as well as five to seven; in addition, questions eight to ten show the same effect to a smaller degree. It can be seen that a spacing of 2:20 min. or less between questions leads to a decreasing number of correct answers, whereas 7:19 min. or more leads to acceptable results. The effect is not shown by the combination of questions one and two, which could be explained by the first question being influenced by the "Lazy Start" issue, so this pair cannot be considered a valid sample.
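Given these two empirical bounds, a planned question schedule can be screened for overly tight placements. A minimal sketch (note that the 7:19 min. bound stems from this single lecture and is not a validated constant):

```python
MIN_GAP_S = 7 * 60 + 19  # smallest spacing that still showed acceptable results

def tight_pairs(question_times_s):
    """Return all pairs of consecutive questions spaced closer than MIN_GAP_S."""
    ordered = sorted(question_times_s)
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b - a < MIN_GAP_S]

# The ten questions of the third video (Table 1), converted to seconds:
times = [971, 1053, 2366, 2367, 3664, 3804, 3926, 4365, 4490, 4497]
print(len(tight_pairs(times)))  # 6 of the 9 gaps fall below the bound
```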
Discussion
This section discusses the observations presented in the Section Objective Evaluation and compares them with the subjective user feedback. Furthermore, it gives some basic recommendations of how to place multiple-choice questions in a learning video based on the gathered observations.
A comparison of the "Lazy Start" issue with previous research studies points out that the reduced performance on the first question is unexpected. These studies indicate that the students' attention decreases over the duration of the video (Wachtler & Ebner 2014a; Wachtler & Ebner 2014b; Ebner et al. 2013). The performance on the first question was therefore assumed to be among the best; the observed low performance contradicts this.
The statement of the users that questions related to the content of the video are favored over general questions stands in contrast to the issue named "Correct after Question Pause": it was observed that randomly occurring general questions probably lead to a better performance on the multiple-choice questions. This means that general questions might be a handy tool to bridge longer periods without interactivity.
In contrast to the first item of user feedback, the second one is matched by a similar observation from the objective analysis: all issues indicate that an even distribution leads to an increased performance on the multiple-choice questions, whereas placing the questions with too little spacing is followed by a low performance.
Based on the observations, Figure 5 presents a distribution of the interactions constructed according to the observed issues. The slots in which a general question occurs randomly are marked with green bars. Furthermore, it is visible that one further interaction is placed in each random slot. Because the welcome interaction is located in the first slot, the first multiple-choice question is placed in the second slot. The recommended place for a multiple-choice question is the middle of a random slot, because a balanced distribution is suggested by the observations.
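A sketch of this placement rule (the "middle of each slot, starting with the second" reading of Figure 5; the one-hour video length is only an example):

```python
SLOT = 8 * 60 + 34  # seconds; slot length for seven random questions per hour

def mc_positions(video_length_s):
    """One multiple-choice question in the middle of every complete random
    slot, skipping the first slot, which holds the welcome interaction."""
    slot_starts = range(SLOT, video_length_s - SLOT + 1, SLOT)  # slots 2, 3, ...
    return [start + SLOT // 2 for start in slot_starts]

# Positions of the multiple-choice questions for a one-hour video (h:mm:ss):
for t in mc_positions(3600):
    print(f"{t // 3600}:{t % 3600 // 60:02d}:{t % 60:02d}")
```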
As a summary, the following list presents basic recommendations of how to place interactions in learning videos. Naturally, these recommendations should be adapted to the content of the video. Furthermore, the observed issues on which they are based still have to be fully validated (see Section Outlook).
Figure 5: The ideal occurrence of multiple-choice questions
• The maximum number of interactions per hour is approximately ten, in order to reach a balanced distribution with enough space between the interactions.
• Content related questions are important for the satisfaction of the students.
• General questions seem to be useful to support the attention between content related questions.
• The space between content related questions should not be too small.
Outlook
As mentioned above, the recommendations are based on observations gathered from six videos of a lecture at Graz University of Technology. Therefore, the accuracy of these observations still has to be fully validated. For that, it is planned to enrich some videos with interactions so that the observed issues can be checked mathematically. All multiple-choice questions suggested below are supposed to be related to the content of the video and of the same level of difficulty.
To examine the "Lazy Start" issue, it is planned to place a multiple-choice question at the following positions in different learning videos in order to find a suitable region for the first question:
• 5:00 min.
• 7:00 min.
• 9:00 min.
• 11:00 min.
• 13:00 min.
• 15:00 min.
• 17:00 min.
Regarding the issue named "Correct after Question Pause", it is planned to place a multiple-choice question in a learning video after a period of at least 20:00 min. without such a question. This video will be offered on the one hand without any further interactivity and on the other hand supported by random, automatic questions that are not related to the content of the video.
The last issue to examine is "Tight-Placed Errors". For that, the following schedule of multiple-choice questions in different videos will be used, where t is a timestamp located in the suitable region for the first question according to the "Lazy Start" experiment (see the sketch after this list):
• Video #1
o t
o t + 2:00 min.
o t + 4:00 min.
• Video #2
o t
o t + 4:00 min.
o t + 8:00 min.
• Video #3
o t
o t + 6:00 min.
o t + 12:00 min.
• Video #4
o t
o t + 8:00 min.
o t + 16:00 min.
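The four schedules share one pattern: video k uses a gap of 2·k minutes and places its questions at t, t + gap and t + 2·gap. A sketch generating them, with t left as a parameter because it is the outcome of the "Lazy Start" experiment:

```python
from datetime import timedelta

def tight_placement_schedules(t_s):
    """Question timestamps for the four "Tight-Placed Errors" test videos:
    video k uses a gap of 2*k minutes and questions at t, t+gap, t+2*gap."""
    t = timedelta(seconds=t_s)
    return {
        f"Video #{k}": [t + i * timedelta(minutes=2 * k) for i in range(3)]
        for k in (1, 2, 3, 4)
    }

# With t = 9:00 min., Video #2 gets questions at 0:09:00, 0:13:00 and 0:17:00.
print(tight_placement_schedules(9 * 60)["Video #2"])
```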
Finally, a video following the recommendations of how to place interactions (see Section Discussion) should be constructed. With the different videos enriched with interactions as shown above, it will be possible to validate the accuracy of the observations as well as of the recommendations.
In addition, it is recommended to evaluate the long-term learning success of using interactions in videos. For that, the results of an exam performed some time after watching the video supported with interactions would have to be analyzed and compared with the results of an exam taken by students who watched the video without any interactivity.
Conclusion
This paper presents a study that analyzes the impact of different forms of interactions in learning videos. For that, a web-based information system was developed and used to integrate such interactions. The use of this information system in a lecture is analyzed in a subjective as well as in an objective way. The outcome of this analysis makes it possible to state some recommendations of how to place multiple-choice questions in videos, based on the observed trends. The research question (see Introduction) is thereby answered by the following hypothesis: "Smaller timespans between interactions will lead to a decreased performance at multiple-choice questions and an even distribution of the interactions will increase this performance."
To validate these recommendations, which are summarized by the hypothesis, further research is planned. The observed trends will have to be fully validated to ensure the accuracy of the recommendations. Furthermore, it should be analyzed how the interactions influence long-term learning success.
References
Carr-Chellman, A., & Duchastel, P. (2000). The ideal online course. British Journal of Educational Technology,
31(3), 229-241.
Ebner, M., Wachtler, J., & Holzinger, A. (2013). Introducing an information system for successful support of
selective attention in online courses. In Universal Access in Human-Computer Interaction. Applications and
Services for Quality of Life (pp. 153-162). Springer Berlin Heidelberg.
Ebner, M. (2009). Introducing live microblogging: how single presentations can be enhanced by the mass. Journal of
research in innovative teaching, 2(1), 91-100.
Freeman, J., & Dobbie, A. (2005). Use of an audience response system to augment interactive learning. Fam Med,
37(1), 12-4.
Haintz, C., Pichler, K., & Ebner, M. (2014). Developing a Web-Based Question-Driven Audience Response System
Supporting BYOD. J. UCS, 20(1), 39-56.
Heinze, H. J., Mangun, G. R., Burchert, W., Hinrichs, H., Scholz, M., Münte, T. F., ... & Hillyard, S. A. (1994).
Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature.
Salomon, G. (1984). Television is easy and print is tough: The differential investment of mental effort in learning as
a function of perceptions and attributions. Journal of Educational Psychology, 76, 647-658.
Spitzer, H., Desimone, R., & Moran, J. (1988). Increased attention enhances both behavioral and neuronal
performance. Science, 240(4850), 338-340.
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems on student participation,
learning, and emotion. Teaching of psychology, 34(4), 253-258.
Tobin, B. (2005). Audience response systems. http://med.stanford.edu/irt/edtech/contacts/documents/2005-11_AAMC_tobin_audience_response_systems.pdf
Wachtler, J., & Ebner, M. (2014a, June). Support of Video-Based lectures with Interactions-Implementation of a
first prototype. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (Vol. 2014,
No. 1, pp. 582-591).
Wachtler, J., & Ebner, M. (2014b). Attention Profiling Algorithm for Video-Based Lectures. In Learning and
Collaboration Technologies. Designing and Developing Novel Learning Experiences (pp. 358-367). Springer
International Publishing.