Automatic Judgement of Online Video Watching:
I Know Whether or Not You Watched
Eunseon Yi 1, Heuiseok Lim 1,* and Jaechoon Jo 2,*
1 Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea; sasilian@korea.ac.kr
2 Division of Computer Engineering, Hanshin University, Osan 18101, Korea
* Correspondence: limhseok@korea.ac.kr (H.L.); jaechoon@hs.ac.kr (J.J.)
Received: 23 September 2020; Accepted: 14 October 2020; Published: 18 October 2020


Abstract:
Videos have long been viewed through the free choice of customers, but in some cases
currently, watching them is absolutely required, for example, in institutions, companies, and education,
even if the viewers prefer otherwise. In such cases, the video provider wants to determine whether the
viewer has honestly been watching, but the current video viewing judging system has many loopholes;
thus, it is hard to distinguish between honest viewers and false viewers. Time interval different
answer popup quiz (TIDAPQ) was developed to judge honest watching. In this study, TIDAPQ
randomly inserts specially developed popup quizzes in the video. Viewers must solve time interval
pass (RESULT 1) and individually different correct answers (RESULT 2) while they watch. Then, using
these two factors, TIDAPQ immediately performs a comprehensive judgement on whether the viewer
honestly watched the video. To measure the performance of TIDAPQ, 100 experimental subjects were
recruited to participate in the model verification experiment. The judgement performance on normal
watching was 93.31%, and the judgement performance on unusual watching was 85.71%. We hope
this study will be useful in many areas where watching judgements are needed.
Keywords:
video; watching; judgement; viewer; popup quiz; video learning; video advertising;
flipped learning; online class; blended learning
1. Introduction
Modern society has seen continued technological advances in telecommunications and equipment,
such as artificial intelligence (AI), ubiquitous computing, the Internet of Things (IoT), smart cities,
machine learning, big data, satellites, street advertisements, and smart devices, so the surrounding
environment can provide personalized videos. These videos may or may not be desired by the
individual. Videos began a long time ago as movies or TV programs but now include all videos produced
using computer technology as a generic term for moving pictures [1]. Videos are actively watched for
one’s own interest and satisfaction, such as dramas, movies, news, information, privacy of others,
and YouTube. However, even though you may not want to view online learning or advertising videos,
occasionally you may have to watch them for your own goals or for your company's purposes [2,3];
furthermore, there are even cases where watching is essential for safety, such as school violence,
corporate violence, fire, industrial safety, and disasters. However, unwanted viewing places pressure
on people and causes them to play videos without watching.
Especially in education areas such as e-learning, smart learning, flipped learning, and blended
learning, videos are being used very actively [3,4]. Due to COVID-19 in 2020, most schools around the
world have been ordered to close and classes are being conducted through online remote learning [5].
However, the form of online learning, which requires self-directed learning, makes it difficult to
identify false viewers [6–8]. Currently used online remote learning is marked as “Viewing Complete”
in the system, but it is difficult to determine whether it was actually watched. In actual online training
experiments, a lack of consistency in learning effectiveness is largely related to video learning [9,10].
However, teachers cannot know exactly whether video learning has faithfully been done, and there is
little research related to video watching judgement.
Common online video watching judgement techniques, which have been used for a long time
and are now commonly used, are examined below. First, if the video is played from beginning to end,
the system recognizes it as watching completion. Second, if in the middle of the video, the viewer
solves the quizzes related to the content he or she watched, the system leads to the next video, and if
that plays to the end, the system recognizes it as watching completion. Third, if another window on
the computer is over the video while the video is played, it stops playing. However, even if you just
play it until the end, the system recognizes it as watching completion. Fourth, when the play stops in
the middle of the video, if the viewer clicks, it moves to the next screen, and even if the viewer just
repeats these tasks to the end, the system recognizes it as watching completion. These methods can
easily incorporate simple obstacles, but even if the video stops, it then plays continuously as long
as the obstacle is cleared, so it is eventually misjudged as watching completion. These instances are
difficult to see as honest watching completion because even if students do other activities without
watching videos, the system misjudges watching completion. Therefore, it is necessary to accurately
determine whether the online video was watched normally.
This paper proposes the time interval different answer popup quiz (TIDAPQ) model to judge video
watching. TIDAPQ is a model that presents two interval popup quizzes in a video of approximately 10 min.
This model calculates the time difference of answer submissions (RESULT 1) and the individual/different
correct answers (RESULT 2). Then, if both of them are TRUE, the TIDAPQ will judge completion as
normal watching; otherwise, it will judge completion as abnormal watching. After developing the
TIDAPQ, 100 students at engineering universities were recruited as participants for the experiments,
and the model was verified. In this paper's experiment, the video was learning content,
and two popup quizzes were used in a video approximately 10 min long. However, TIDAPQ
was not developed to judge only learning video watching. Depending on the purpose of watching a
video, the length of the video and the number of popup quizzes may vary; thus, this paper describes
the most basic of the TIDAPQ’s various forms.
We hope that the results of this study will be used in various fields, including online learning,
reward advertising, and announcements from institutions or companies that require judging the
watching of videos.
The composition of this paper is as follows. Section 2 (Background) explains the existing video
watching judgement methods and describes their weaknesses. Section 3 describes the TIDAPQ model;
of the three watching judgement approaches, time interval pass (RESULT 1) is explained in Section 3.3,
individual/different correct answers (RESULT 2) in Section 3.4, and comprehensive judgement, which
makes a final judgement using these two, is explained in Section 3.5. Section 4 shows the verification
experiments of the TIDAPQ model, and Section 5 presents the conclusions and suggestions.
2. Background
The first area to try video watching judgement was in education due to the expansion of the online
education market with the development of telecommunications and technology. In particular, video
watching judgement systems emerged in 2015. Video learning methods were not absent previously in
education; however, while previous video learning was watched based on the learners’ needs in an
auxiliary learning mode, current video learning consists of diversified forms of learning using video,
such as e-learning and smart learning. This diversification has occurred because one can take online
video classes such as edX, Coursera, and Udacity and obtain a certificate, and one can obtain a degree
with only online classes such as the Academic Credit Bank System [1]. In particular, education has also
appeared in which classroom classes can be conducted only when you do the prior video learning,
such as flipped learning [11,12]. Currently, flipped learning using online video is a great means
of educating students in schools where talented people are gathered, such as Harvard, MIT, Seoul
National University (SNU), and Korea Advanced Institute of Science and Technology (KAIST) [13,14].
However, flipped learning is difficult to expand into groups that lack self-directed learning skills, such
as elementary, middle, and high schools, because it is difficult to determine whether they have normally
done prior learning via online video watching [9,10]. In addition, due to the influence of COVID-19,
online video learning has already entered the realm of elementary, middle, and high school public
education around the world [5]; however, educational materials are only being distributed online and
it is difficult to know whether young students who lack self-directed learning have honestly watched
the online learning videos [6,15]. The form of learning that was expected in the future has been moved
forward to the present. Accordingly, ancillary technological developments will have to be achieved.
In addition to the education sector, there are records of studies in the reward
advertisement field for the purpose of judging the viewing of advertising videos [16], but they
have not been used. Since the use of smartphones has become more active, advertising apps have
appeared where you can obtain points when you look at advertising videos for approximately 2~3 min.
During these advertisements, it was hard to identify false viewers who engaged in different activities
as soon as the video started playing. Because the advertiser had to pay points for customers who
did not watch the advertisements, currently, the method of paying points for watching advertising
videos has almost disappeared. Instead of video advertising, advertising apps now offer points for
photography, writing, web pages, trying, touching, and signing up.
The following Sections 2.1–2.3 describe the viewing judgement method attempted in the fields of
education and advertising that generated the need for video viewing judgement.
2.1. The Appearance of the Video Watching Judgement System
Education is the first area to feel the need for video viewing judgement, and Zaption, which
provides learner analysis, appeared in 2015 [17,18], followed by Educannon and Workday [19,20],
but these services were terminated after only a short time. They carefully analyzed learners to help
online video learning judgements, but the function was too complicated and the teacher had to make
the final decision on watching completion, so it was ineffective. The URLs for Zaption and Educannon
remain, but the pages have been deleted; the URLs for Workday, which acquired Zaption, have been
changed from a learner analysis system to a system that helps the business.
Zaption (Figure 1), a representative company in learning video viewing judgement, shows viewers
by date, average viewing time, question completion, stars, average skip forward, and average skip
backward with graphs. Zaption can also check each student’s submitted responses, last submission
date, last viewed date, total viewing time, and total views and check the answers to the video quizzes
completed by the student [18]. Very detailed analysis was possible, but it was difficult to judge
watching completion by compiling this information.
Figure 1. (a) Overview graph of Zaption; (b) Viewer's tour analysis of Zaption; (c) The video quizzes
and answers for each student in Zaption.
2.2. Current Video Watching Judgement System
There are currently no systems that make a video watching judgement after the video watching
learning systems above were discontinued between 2015 and 2020, but similar systems include Playposit and
Office Mix [7,8].
Playposit chose “Inducing View through Interactive Video” as its signature feature, but just when
the quizzes, related to the content of the video, are solved in the middle of a video, the video can
simply play into the next screen [7]. This technique is difficult to call a special interaction and it is
difficult to determine whether the video was watched.
Office Mix is a function supported in MS Office 2013 and above and can create lecture videos by
placing the teacher’s voice and face on PowerPoint slides. The provided analysis menus show the quiz
answer rate, slide view frequency, and slide view time for each slide, but not enough information to
analyze each user; it only shows a rough analysis [8]. The learner watching analysis through Office
Mix expired in 2018 [8]; existing MS Office buyers can continue to use it, but new buyers cannot use it
for free. Microsoft is currently operating it for a fee by converting Office Mix's extended capabilities to
Microsoft Stream. Microsoft Stream said, “We have online intelligent video, so it induces learners to
watch" [21]. However, this approach is only different in that the video includes lecture content and it is
released to a designated group, but there is no special difference from YouTube. While watching the
video, a user can ask questions through chatting and have opportunities to interact with the teacher,
but the user cannot see the watching analysis data, such as quiz answer rate, slide view frequency,
or slide watch time provided by Office Mix.
2.3. Khan Academy
Khan Academy, famous for free online lecture services, has been growing steadily since 2008.
Khan Academy is not a service run for learner analysis, but it shows “How much did they invest in
studying per day?”, “What video did they see?”, “When did they stop the video and what did they
look at?", "What exercise did they use?", and "Where did they focus?" [22]. It also shows the exercises
and videos that many students focused on [23]. It is not a detailed analysis, such as Zaption and
Educannon, but it is enough to grasp the student’s learning status. However, Khan Academy also has
difficulties for teachers to judge whether video watching has been completed by analyzing the data
provided by Khan Academy. There are Edmodo [24], Moodle [25], Blackboard [26], Schoology [27],
Brightspace [28], Litmos [29], and TalentLMS [30] as learning management systems (LMS), which give
or manage points for watching videos and prior learning, but like Khan Academy, they also have
difficulties judging video watching.
2.4. Advertising Video Viewing Surveillance System
In addition to education, a field that perceives the necessity of watching judgements is advertising.
The content viewing monitoring system of mobile reward advertising was developed to determine
whether customers watched a provided commercial video [31]. For video advertisements that pay
money or point rewards for watching, it is important to determine whether sincere viewing is occurring.
The content viewing monitoring system of Figure 2 can detect facial areas from images acquired by
cameras in Android smartphone environments; it then can monitor the location of eyes and the opening
of eyes, so it can determine whether clients are looking at the screen. The eye detection method of the
system uses block contrast between the right central block and the surrounding block to detect the
eyebrow and then, looks for the eye location using geometric properties between the eyebrows and
eyes to determine whether the eyes are open or closed [31]. In the integrated image of the eyebrow
area, the eyebrows are extracted using the characteristic that the area corresponding to the eyebrows is
relatively dark compared to the surrounding blocks. At the same time, from the integral image of each
eye seeking area, the eye candidate areas are extracted using the characteristics that the pupil blocks
are relatively dark and symmetrical compared to the rest of the surrounding blocks. Then, the central
pixel of the block with the maximum block contrast is taken as the pupil candidate point.
Figure 2. Administrator mode of the content viewing monitoring system.
The content viewing monitoring system was developed to determine the viewing of reward
advertisements, but viewing judgement, which uses eye color contrast, determines watching completion
regardless of whether a face photo or a teddy bear is placed in front of the camera, so it is difficult to
apply to actual reward advertisements. This technology needs to be supplemented slightly more.
However, this technology has the great advantage of being simple to use to judge viewing.
The viewing learning judgement systems, described in Section 2.1, Section 2.2, and Section 2.3,
were ambiguous in the criteria for viewing judgement and difficult to use to judge because they
provided incidental data collected after viewing rather than judging while viewing. On the other hand,
the content viewing monitoring system can perform the viewing judgement immediately after viewing
using only videos, without incidental data that occur after viewing. It has great advantages of being
simple to use and immediate judgement. It is expected that if this technology develops, it will be a
means of effective viewing judgement.
The abovementioned methods of video watching judgement have a common weakness—false
viewers may occur when playing videos and performing other activities. When you find a video that
has been stopped while doing other activities, you can just answer the quizzes. Even if it is a difficult
quiz, it is possible to ask your colleague for answers through a chat, allowing you to write the answers
any time you want. The video, which has been displayed on the monitor for a long time, becomes a
video that the student has focused on and points will pile up even if there is a face photo in front of
the camera. Learner analysis has been well done, but teachers have difficulty determining whether
they watched the video through extensive data analysis. To achieve smooth learning progression, it is
necessary to accurately determine whether videos are being watched. This paper uses only video
and immediately calculates the watching judgement based on the viewer’s events occurring while
watching the video, clarifies the criteria for watching judgement, and presents a simple method of
watching judgement.
3. Time Interval Different Answer Popup Quiz (TIDAPQ) Model
3.1. Definition of Watching Completion
Watching completion means staring at the screen without doing anything else within a set period.
We do not call it watching completion if the viewer has been watching for a long time or if the viewer
skips the screen and watches it quickly because of knowing it already. A viewer being on STOP for
too long means that they have done something else without watching the video. Normal watching
completion means that the viewers stared at the screen honestly from the beginning to the end of the
video. In this paper, to explain accurately, an answer and the correct answer were used separately.
An answer means any record submitted by the viewers, whether it is a wrong answer or a right answer,
and the correct answer means the right answer. We also used the timepoint and the time separately.
The timepoint means one moment in time and the time means general time.
3.2. TIDAPQ Model
Figure 3 shows systems with the time interval different answer popup quiz (TIDAPQ) model
applied. TIDAPQ is a model that inserts popup quizzes in the video and mixes two methods—Time
Interval Pass (RESULT 1) and Interval Different Correct Answer (RESULT 2)—to perform Watching
Completion Comprehensive Judgement. The system with this paper’s TIDAPQ model was used for
Section 4's model verification; it took on a learning character because the participants of
the experiment were students in the computer department of an engineering university. However,
the TIDAPQ model is not only developed for judging learning video watching; the system may vary
depending on the purpose of watching the video. In addition, this paper uses two popup quizzes in
a 10 min long video to describe the most basic form of TIDAPQ, but the length of the video and the
number of popup quizzes may vary depending on the purpose of watching the video. The system’s
process, applied to the TIDAPQ model, is as follows.
Figure 3. Systems with Time Interval Different Answer Popup Quiz (TIDAPQ) Model.
Professors sign up for a membership and create classes. Students sign up for a membership
and request admission to classes created by the professor. The professor only accepts requests from
the students in the corresponding class. Professors can manage videos, learners, announcements,
questions, and answers. When a professor uploads a video to the TIDAPQ system, the system takes
two random popup quizzes from the Quiz Pool Database and inserts them into the video. Students can
check their learning progress, ask questions, and be given their own unique numbers. Students should
solve two popup quizzes while watching the video for approximately 10 min. The length of the video
may vary depending on the purpose, but the study was conducted with a video of approximately 10
min. The number of popup quizzes may also increase if the video is longer, but this paper describes
the most basic form among the various forms of TIDAPQ. Student events are used to judge watching.
Time Interval Pass (RESULT 1) calculates the allowed time zone using the viewer’s event time
while watching the video, and if the viewer completes the work within the calculated time range, it is
judged as TRUE or FALSE. The event times used for calculation are as follows: starting timepoint of
watching, popping up timepoint of 1st quiz, disappeared timepoint of 1st quiz, popping up timepoint
of 2nd quiz, disappeared timepoint of 2nd quiz, and ending timepoint of watching. Since the individual
popping up timepoints are different, the individual allowed time zones are also different. Time Interval
Pass (RESULT 1) will be described in more detail in Section 3.3.
The quizzes are taken randomly from the database and are inserted into the video for a certain
period. Since the quiz inserted into one video is the same problem for a certain period, a group of
viewers who accessed the quiz during the same period see the same problem. However, to prevent
any exchange of information with others, the individual correct answers are different. The unique
numbers given to viewers were used as primary keys in the database, and since the problems stored
in the database were created using individual primary keys, the correct answer also depends on the
individual primary key. The viewers who answered the popup quizzes correctly have a value of TRUE
for individual/different correct answers (RESULT 2). Individual/different correct answers (RESULT 2)
will be described in more detail in Section 3.4.
Comprehensive Judgement uses the result values of RESULT 1 and RESULT 2 to become
TRUE when both are TRUE. When the result of Comprehensive Judgement is TRUE, the TIDAPQ
system determines that the viewer has achieved Normal Watching Completion, and the viewer can
automatically check the result immediately after watching the video. The Normal Watching Completion
Comprehensive Judgement will be described in more detail in Section 3.5.
After being judged as normal watching completion, the video can be watched repeatedly without
interference from the popup quizzes. The judgement results of the TIDAPQ system can be reflected in
the grade or be earned as points and can be reflected in a performance assessment. Therefore, viewers
who need the normal watching completion result will stare at the screen without any other activity
while the TIDAPQ video is playing.
3.3. Time Interval Pass
3.3.1. Time Interval Pass Process
Figure 4 shows the process of time interval pass.
Figure 4. Process of Time Interval Pass.
1. The moment the video begins, the Watching Start Timepoint is recorded.
2. While watching the video, the 1st quiz appears at any timepoint and the video does not stop.
The moment the 1st quiz appears, the Popping Up Timepoint of the 1st quiz is recorded. Popup
quizzes must be answered within 15 s. The content of the quiz is very easy, so students can answer
it right away. The moment the answer to the 1st popup quiz is written, the Answer Timepoint of
the 1st quiz is recorded. After 15 s of the Popping Up Timepoint of the 1st quiz, the 1st popup
quiz disappears and the Disappeared Timepoint of the 1st quiz is recorded.
3. Viewers should keep watching the video. Otherwise, they may miss the popup quiz.
4. While watching the video, the 2nd quiz appears at any timepoint and the video does not stop.
The moment the 2nd quiz appears, the Popping Up Timepoint of the 2nd quiz is recorded. Popup
quizzes must be answered within 15 s. The content of the quiz is very easy, so students can
answer it right away. The moment the answer to the 2nd popup quiz is written, the Answer
Timepoint of the 2nd quiz is recorded. After 15 s of the Popping Up Timepoint of the 2nd quiz,
the 2nd popup quiz disappears and the Disappeared Timepoint of the 2nd quiz is recorded.
5. The moment the video ends, the Watching End Timepoint is recorded.
6. TIDAPQ calculates the time interval pass (RESULT 1) using all timepoints.
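To make the recorded events concrete, the following minimal sketch (in Python; illustrative field names, not the authors' implementation) shows one way the timepoints of steps 1 to 6 could be stored for the later judgement:

from dataclasses import dataclass
from typing import Optional

@dataclass
class WatchingSession:
    # All timepoints are assumed to be seconds on a common clock.
    watching_start: float        # step 1: video playback begins
    q1_popup: float              # step 2: 1st popup quiz appears
    q1_answer: Optional[float]   # step 2: 1st answer submitted (None if missed)
    q1_disappear: float          # step 2: 1st quiz hidden 15 s after it appeared
    q2_popup: float              # step 4: 2nd popup quiz appears
    q2_answer: Optional[float]   # step 4: 2nd answer submitted (None if missed)
    q2_disappear: float          # step 4: 2nd quiz hidden 15 s after it appeared
    watching_end: float          # step 5: video playback ends

    @property
    def play_time(self) -> float:
        # Total play time compared against the video length in Condition 1 of Section 3.3.4.
        return self.watching_end - self.watching_start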
3.3.2. Quiz Appearance Time Range
Figure 5 shows the time range in which the quizzes appear. The 1st popup quiz appears in front
of the video and the 2nd popup quiz appears behind the video. The values used to calculate the
appearance time are the start times of each popup quiz. Based on "Half" in Figure 5, the front is
the start time possible range of the 1st popup quiz (STR_1stQ) and the back is the start time possible
range of the 2nd popup quiz (STR_2ndQ). The quiz pops up at any time within a set time range,
and each individual has a different popup time. After normal watching completion is judged in the
comprehensive judgement, the viewers can watch a part of the video repeatedly without interference
from popup quizzes.
0 < STR_1stQ < Half − x    (1)
Half < STR_2ndQ < VideoEnd − x    (2)
Figure 5. Quiz appearance time range.
Equation (1) is the calculation of the start time possible range of the 1st popup quiz (STR_1stQ) and
Equation (2) shows the calculation of the start time possible range of the 2nd popup quiz (STR_2ndQ).
To prevent the 2nd quiz from appearing while the 1st quiz is shown, 60 s of free time x is placed
between the two popup quizzes. Since the 1st quiz is shown for 15 s and time is required to prepare
the next task, free time x was set at 60 s. Free time is adjustable.
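As a minimal illustration of Equations (1) and (2), the following Python sketch (not the authors' code; it assumes that Half denotes the video's midpoint and that x is the 60 s free time) samples the two popup start timepoints within the allowed ranges:

import random

def sample_popup_start_times(video_length: float, free_time_x: float = 60.0) -> tuple:
    """Pick STR_1stQ and STR_2ndQ uniformly at random within Equations (1) and (2)."""
    half = video_length / 2  # assumption: "Half" is the midpoint of the video
    # Equation (1): 0 < STR_1stQ < Half - x
    str_1st = random.uniform(0, half - free_time_x)
    # Equation (2): Half < STR_2ndQ < VideoEnd - x
    str_2nd = random.uniform(half, video_length - free_time_x)
    return str_1st, str_2nd

# Example: for a 600 s (10 min) video, the 1st quiz starts somewhere in (0, 240)
# and the 2nd quiz somewhere in (300, 540), so each viewer gets a different draw.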
3.3.3. Quiz Answer Time Range
Figure 6 and Equations (3) and (4) show the normal quiz answer submission time range. The answer
time range of the 1st popup quiz (ATR_1stQ) of Equation (3) cannot exceed Half, and the answer time
range of the 2nd popup quiz (ATR_2ndQ) of Equation (4) should be between Half and VideoEnd. Popup
quizzes disappear after 15 s, and viewers should answer them immediately as soon as they appear.
The time duration the quiz is displayed for is adjustable, but too long a time may interfere with
watching and too short a time may be insufficient to write answers.
0 < ATR_1stQ < Half    (3)
Half < ATR_2ndQ < VideoEnd    (4)
Figure 6. Quiz answer submission time range.
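A minimal sketch of the answer-window constraints in Equations (3) and (4) (illustrative only; it assumes all times are seconds from the start of playback and that Half is the video midpoint):

def answer_in_first_half(at_1st: float, half: float) -> bool:
    # Equation (3): 0 < ATR_1stQ < Half
    return 0 < at_1st < half

def answer_in_second_half(at_2nd: float, half: float, video_end: float) -> bool:
    # Equation (4): Half < ATR_2ndQ < VideoEnd
    return half < at_2nd < video_end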
3.3.4. Judgement of Time Interval Pass (RESULT 1)
This section describes the time interval pass judgement (RESULT 1), one of the two methods used
to calculate the watching judgement of TIDAPQ. Existing online lectures are recognized as watching
completion, even if viewers watch longer than the length of video, but TIDAPQ requires video length
and watching time to match, and the answer submission time difference of the two quizzes should be
within the calculated range. After the entire watching judgement is over, it becomes possible to look
at an incomprehensible part repeatedly. RESULT 1, the result of time interval pass, has to meet the
following four conditions:
Condition 1. Was the video length and play time the same? (An error time of 5 seconds is added).
Condition 2. Was the 1st answer (AT_1stQ) written between the Start Timepoint of the 1st Popup Quiz (ST_1stQ) and the End Timepoint of the 1st Popup Quiz (ET_1stQ)?
Condition 3. Was the 2nd answer (AT_2ndQ) written between the Start Timepoint of the 2nd Popup Quiz (ST_2ndQ) and the End Timepoint of the 2nd Popup Quiz (ET_2ndQ)?
Condition 4. Was the time interval between two answers (ATI) between the calculated maximum (ATI_max) and minimum (ATI_min) values?
The above four conditions are expressed in the following equations. Please see Figure 7 and
Condition 4, further described below.
Condition 1: VideoLength ≤ PlayTime ≤ VideoLength + 5 seconds    (5)
Condition 2: ST_1stQ ≤ AT_1stQ ≤ ET_1stQ    (6)
Condition 3: ST_2ndQ ≤ AT_2ndQ ≤ ET_2ndQ    (7)
Condition 4: ATI_min ≤ ATI ≤ ATI_max    (8)
Figure 7 shows the time flow of TIDAPQ videos and the time interval pass process of viewer A.
Viewer A's case represents normal answer submission in the time interval pass judgement. The dotted line
connects the time used to calculate TIDAPQ’s Time Interval Pass and viewer A’s time. Viewer A cannot
know when the popup quiz will appear and the popup quiz disappears after 15 s, so if students turn
on the video and do other activities at the same time, they may miss the popup quiz. Even if the
quiz is exposed on the screen, the video does not stop automatically, so the viewers who want normal
watching completion cannot perform other activities.
The VideoLength ≤ PlayTime ≤ VideoLength + 5 s of Condition 1 prevents playing longer than
the length of video. If the play time is longer than the video length of Condition 1, even if Condition 2,
Condition 3, and Condition 4 have been met, then it is likely that after solving the 2nd popup quiz, he
or she did not stay in his or her own seat.
Figure 7. TIDAPQ Video Time Flow and Viewer A’s Time Interval Pass Process.
Condition 2 and Condition 3 are designed to recognize the quiz popping up at any timepoint
and answer it immediately within 15 s, and if the answers have not been submitted within the time of
Condition 2 and Condition 3, TIDAPQ shall deem them not to have been staring at the screen.
Condition 4 relates to how the video is played. Most of the existing video watching judgement
systems prevent play bars from being manipulated. They also block the play speed from being adjusted.
This approach is not a bad approach to judge a first watching. However, in the case of online video
lectures, you may want to focus on parts you do not know and watch only one part over and over
again, but if you are not allowed to use the play bar and have to watch it again from the beginning, or if you
are not allowed to control the play speed, this may cause you to avoid watching it again for review.
The same goes for commercial and public announcement videos. After receiving the results of normal
watching completion, even if you want to focus on a part of the advertisement video that describes the
special features of the equipment or even if you want to focus on a part of an important announcement
video, if the system limits the skip button, speed, and play bar, the viewers will give up repeat watching
and intensive watching. Therefore, not only the judgement of the video’s watching completion but
also free operation of the video should be allowed. Considering the inconvenience for these users,
TIDAPQ allowed free play, and after watching completion was judged, the viewers were allowed
to focus freely on the parts they wanted. In addition, TIDAPQ will be used for groups that need
the result of normal watching completion, but if you are a viewer who does not need the result of
normal watching completion, such as parents and simply people interested in information, you can
freely manipulate even the first video without the stress of watching it from the beginning. However,
the problem that can occur at this time is that viewers can freely move the play bar, find only the popup
quizzes, and submit answers, or viewers can change the play speed to quick and achieve abnormal
watching. Condition 4 has been added to solve this problem. With the free operation of play buttons
such as skip, speed, and play bars, Condition 2 and Condition 3 can be resolved, but it is difficult to
resolve Condition 4, as shown below.
Condition 4's ATI_min ≤ ATI ≤ ATI_max means that for the time interval between two answers (ATI)
to be within the calculated time range, it must satisfy the following Equations (9)–(11). In Figure 7,
the first 15 s popup quiz appears at ST_1stQ, and the second 15 s popup quiz appears at ST_2ndQ.
ATI is the time interval between two answers, ATI_min is the minimum time interval between two
answers, and ATI_max is the maximum time interval between two answers.
ATI = AT_2ndQ − AT_1stQ    (9)
ATI_min = ST_2ndQ − ET_1stQ    (10)
ATI_max = ET_2ndQ − ST_1stQ    (11)
Equation (9) is the time interval between two answers (ATI), which is the time difference between
the answer timepoint of the 1st popup quiz (AT_1stQ) and the answer timepoint of the 2nd popup quiz
(AT_2ndQ). Equation (10) is the minimum time interval between two answers (ATI_min), which is the
time difference between the end timepoint of the 1st popup quiz (ET_1stQ) and the start timepoint of the
2nd popup quiz (ST_2ndQ). Equation (11) is the maximum time interval between two answers (ATI_max),
which is the time difference between the start timepoint of the 1st popup quiz (ST_1stQ) and the end
timepoint of the 2nd popup quiz (ET_2ndQ). Therefore, to satisfy Condition 4 of Equation (8), the time
interval between two answers (ATI)
must be between the maximum and minimum values. Using this
approach, TIDAPQ can catch false viewers who satisfied both Condition 2 and Condition 3 through free play
manipulation but watched abnormally. In addition, viewers can concentrate and watch the video
repeatedly, beginning with the second watching through free play manipulation.
By combining the four conditions above, RESULT 1, the result of TIDAPQ’s time interval pass
judgement, can be expressed as follows:
RESULT_1 = Condition_1 ∧ Condition_2 ∧ Condition_3 ∧ Condition_4    (12)
RESULT_1: Result of Time Interval Pass
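The following compact sketch shows one way the four conditions could be combined into RESULT 1 according to Equations (5)–(12); it reuses the hypothetical WatchingSession record sketched in Section 3.3.1 and is an illustrative reading of the model, not the authors' code:

ERROR_TIME = 5.0  # the 5 s tolerance added in Condition 1

def time_interval_pass(session, video_length, st_1st, et_1st, st_2nd, et_2nd) -> bool:
    """RESULT_1 = Condition_1 AND Condition_2 AND Condition_3 AND Condition_4 (Equation (12))."""
    if session.q1_answer is None or session.q2_answer is None:
        return False  # a missed popup quiz cannot satisfy Conditions 2-4

    # Condition 1 (Eq. (5)): VideoLength <= PlayTime <= VideoLength + 5 s
    condition_1 = video_length <= session.play_time <= video_length + ERROR_TIME
    # Condition 2 (Eq. (6)): ST_1stQ <= AT_1stQ <= ET_1stQ
    condition_2 = st_1st <= session.q1_answer <= et_1st
    # Condition 3 (Eq. (7)): ST_2ndQ <= AT_2ndQ <= ET_2ndQ
    condition_3 = st_2nd <= session.q2_answer <= et_2nd
    # Condition 4 (Eq. (8)) with ATI (Eq. (9)), ATI_min (Eq. (10)), and ATI_max (Eq. (11))
    ati = session.q2_answer - session.q1_answer
    ati_min = st_2nd - et_1st
    ati_max = et_2nd - st_1st
    condition_4 = ati_min <= ati <= ati_max

    return condition_1 and condition_2 and condition_3 and condition_4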
Thus, TIDAPQ’s time interval pass prevents other activities and makes it possible to stare at the
screen while the video is playing. If any one of the four conditions is not satisfied, RESULT 1, the result
of time interval pass, is FALSE, so viewers who need the normal watching completion result realize
that “It is wise to keep watching while the video is playing”. In addition, the content of the popup
quizzes should be easy to avoid excuses that "The problem is difficult and time is delayed". The details
of the problem are described in the sections below.
3.4. Individual/Different Correct Answers
3.4.1. Different Correct Answers to the Same Quiz
The quizzes presented for the watching judgement in the existing system are related to the video,
and all viewers have the same answer. Because there is only one answer, viewers can share the answer
with their colleagues or friends and can present the answer without watching even difficult questions.
In addition, problems related to the video may delay time in the process of solving the problem,
and even though viewers have watched normally, if they write wrong answers that are judged as
abnormal watching, such viewers may be disadvantaged. To prevent this situation, individual/different
correct answers were developed.
The content of TIDAPQ’s popup quiz is presented with very simple questions that are not related
to the video content. Questions related to the video content were not used due to the difficulty in creating
several correct answers for the same question. The same problem ensures equity within the watching
group, and the individual/different correct answers prevent the sharing of correct answers. In this
study, considering that the time available to submit answers for time interval pass (RESULT 1) was 15 s,
we created easy questions that can be answered by all viewers in less than three seconds. Additionally,
even if the problem is easy, there may be viewers who submit only the correct answers shared without
watching the video, so we have created more than 10 correct answers.
Table 1 shows the content of a popup quiz, and Table 2 shows the individual/different correct
answers for three viewers with the unique numbers 26053072, 37621116, and 72541207. Correct answers
for Quiz 1 can be made up to 19, correct answers for Quiz 2 can be made up to 10, and correct answers
for Quiz 3 can be made for more than 100. Viewers were given their own unique numbers to write
the different correct answers to the same question. In this paper, the unique number was based on
their birth date and class number. The unique number is used with the primary key of the database.
Since the problem was created using the primary key, the correct answer should also be solved using
the primary key.
Table 1. Popup quiz content.

No.      Content
Quiz 1.  Add the front second digit and the back second digit of the unique number. (______)
Quiz 2.  Write the flower name of the back third digit of the unique number. (______)
         1. Rose 2. Lilies 3. Hydrangea 4. Dandelion 5. MorningGlory
         6. Azalea 7. Forsythia 8. Tulip 9. Daffodil 0. Gypsophila
Quiz 3.  Subtract the back second digit from the front double digits of the unique number. (______)
Table 2. Individual, different correct answers to popup quiz.

                   Correct Answer
Unique Number    Quiz 1    Quiz 2        Quiz 3
26053072         13        Gypsophila    19
37621116         8         Rose          36
72541207         2         Lilies        72
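To illustrate how the individual/different correct answers of Table 2 follow from a viewer's unique number and the quizzes of Table 1, a minimal sketch is given below; the digit-indexing rules are our reading of Table 1 and the helper name is hypothetical, not the authors' code:

FLOWERS = {"1": "Rose", "2": "Lilies", "3": "Hydrangea", "4": "Dandelion",
           "5": "MorningGlory", "6": "Azalea", "7": "Forsythia",
           "8": "Tulip", "9": "Daffodil", "0": "Gypsophila"}

def correct_answers(unique_number: str) -> dict:
    """Derive the three Table 1 quiz answers for one viewer's unique number."""
    d = unique_number
    return {
        "quiz1": int(d[1]) + int(d[-2]),   # Quiz 1: front second digit + back second digit
        "quiz2": FLOWERS[d[-3]],           # Quiz 2: flower name of the back third digit
        "quiz3": int(d[:2]) - int(d[-2]),  # Quiz 3: front double digits minus back second digit
    }

# correct_answers("26053072") returns {'quiz1': 13, 'quiz2': 'Gypsophila', 'quiz3': 19},
# matching the first row of Table 2.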
3.4.2. Judgement of Individual/Different Correct Answer (RESULT 2)
This section describes individual/different correct answer judgement (RESULT 2) among the two
methods that are used for the watching judgement of TIDAPQ. Equation (13) shows how to judge the
individual/different correct answer. The video contains two popup quizzes. RESULT 2, the judgement
result of the individual/different correct answer, becomes TRUE when both the correct answer to the
1st popup quiz (CA_1stQ) and the correct answer to the 2nd popup quiz (CA_2ndQ) are TRUE.
RESULT_2 = CA_1stQ ∧ CA_2ndQ    (13)
RESULT_2: Result of Individual/Different Correct Answer
CA_1stQ: Correct Answer to 1st Popup Quiz
CA_2ndQ: Correct Answer to 2nd Popup Quiz
3.5. Comprehensive Judgement of TIDAPQ
Equation (14) shows the comprehensive judgement to use two methods, which are calculated for
the watching judgement of TIDAPQ. Comprehensive judgement becomes TRUE when both RESULT 1
of time interval pass judgement and RESULT 2 of individual/different correct answer judgement are
TRUE. In comprehensive judgement, if TRUE results, it is recognized as normal watching, and if
FALSE results, it is recognized as abnormal watching. As soon as the video is finished, viewers can
automatically check the results.
COMPREHENSIVE JUDGMENT = RESULT_1 ∧ RESULT_2    (14)
COMPREHENSIVE JUDGMENT: Comprehensive Judgement of TIDAPQ
RESULT_1: Judgement of Time Interval Pass
RESULT_2: Judgement of Individual/Different Correct Answer
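As a minimal sketch of Equations (13) and (14) (illustrative only; it reuses the hypothetical correct_answers helper sketched after Table 2 and assumes string-typed submissions):

def individual_different_correct_answer(unique_number: str, answer_1st: str, answer_2nd: str,
                                        quiz_keys=("quiz1", "quiz2")) -> bool:
    """RESULT_2 = CA_1stQ AND CA_2ndQ (Equation (13))."""
    expected = correct_answers(unique_number)
    ca_1st = answer_1st.strip() == str(expected[quiz_keys[0]])
    ca_2nd = answer_2nd.strip() == str(expected[quiz_keys[1]])
    return ca_1st and ca_2nd

def comprehensive_judgement(result_1: bool, result_2: bool) -> bool:
    """Equation (14): normal watching completion only if both results are TRUE."""
    return result_1 and result_2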
4. TIDAPQ Model Verification
4.1. Participants
To measure the accuracy of the TIDAPQ system, 100 engineering university students participated
in the TIDAPQ model verification. We recruited participants from among freshmen who had just entered
engineering universities. Two groups were recruited over a certain period: participants who would watch
normally and participants who would watch abnormally. Of the applicants, students who lacked prior
knowledge about the video content were selected for the study and participated in the experiment: 67
among the normal watching applicants and 33 among the abnormal watching applicants. The participants
gathered in the same classroom; 67 people watched the video honestly, and 33 people watched the video
while doing other activities according to the director's instructions. The group watching honestly all
produced TIDAPQ results of TRUE, whereas the group watching the video while doing other activities
all produced TIDAPQ results of FALSE. Table 3 shows the 100 model verification participants and their
TIDAPQ results.
Table 3. Participants of TIDAPQ model verification.

Experimental Participants (100)                                  | TIDAPQ Results
Participants who watched the video honestly (67)                 | ALL TRUE (67)
Participants who watched videos while doing other activities (33)| ALL FALSE (33)
4.2. Research Procedure
4.2.1. Experimental Design
Figure 8 shows the model verification experimental procedure of the TIDAPQ system. From among
first-year freshmen who had just entered the engineering university, we recruited students to participate
in the model verification experiment. Since participants had to solve 10 problems related to the content
after watching the video, the questionnaire used in the selection asked whether they already knew the
terms and answers used in those 10 questions. Only students who indicated no prior knowledge in this
questionnaire participated in the experiment. The participants gathered in the same classroom; one group
continuously watched the TIDAPQ video, and another group watched the TIDAPQ video while performing
other activities at the director's request. After watching the video, they had to solve the 10 questions
immediately. Each question was worth one point, and the questions addressed core content of the video.
Figure 8. Model verification experimental design.
4.2.2. Video Learning Design
Table 4 shows the video content used to verify the model. The video is approximately 10 min long
and consists mainly of conceptual explanations. After participants watched the 10 min video with the
following content, they immediately solved 10 problems related to the video.
Table 4. Video content design used for model validation.

Video Content for Model Verification
Artificial Intelligence (AI)
  · Artificial Narrow Intelligence (ANI)
  · Artificial General Intelligence (AGI)
Machine Learning Techniques
  · Unsupervised Learning
  · Supervised Learning
Deep Learning
  · Neural Network
TensorFlow, Brain.js
4.3. Research Instrument
Table 5 shows the content of the questionnaire used to select the research participants. Among
first-year freshmen who had just entered engineering universities, only students without prior knowledge
could participate in the model verification experiment, so we conducted a survey to check the students'
prior knowledge. The survey is related to the video and asks whether they understand professional terms
and concepts such as artificial intelligence (AI), machine learning, and deep learning.
Table 5. Research instrument for selecting research participants.

Questionnaire Content for Research Participants Selection
· Do you know the difference between Artificial Intelligence (AI) and Machine Learning Techniques and Deep Learning?
· Do you know the concept and type of Artificial Narrow Intelligence (ANI)?
· Do you know the concept of Artificial General Intelligence (AGI)?
· Do you know the concept and type of Machine Learning Techniques?
· Do you know Group and Interpret Data Based on Input Data?
· Do you know Develop Predictive Model Based on Both Input and Output Data?
· Do you know Unsupervised Learning?
· Do you know Supervised Learning?
· Do you know the relationship between Deep Learning and Neural Network?
· Do you know TensorFlow and Brain.js?
Table 6 shows the content of the test paper that the research participants took immediately after
watching the video. One hundred participants, who did not know technical terms and concepts such
as artificial intelligence, machine learning, and deep learning, watched the video with TIDAPQ for
model validation. Then, the following test was immediately administered.
Table 6. Research instrument for model verification.

Test Content for Model Verification
1. What is the difference between Artificial Intelligence (AI) and Machine Learning Techniques and Deep Learning?
2. What is Artificial Narrow Intelligence (ANI)?
3. What is Artificial General Intelligence (AGI)?
4. What are the concepts and types of Machine Learning Techniques?
5. What is Group and Interpret Data Based on Input Data?
6. What is Develop Predictive Model Based on Both Input and Output Data?
7. What is Unsupervised Learning?
8. What is Supervised Learning?
9. What is the relationship between Deep Learning and Neural Network?
10. What are TensorFlow and Brain.js used for?
4.4. Analysis and Results
To check the performance of TIDAPQ, model verification was performed; Table 7 shows the precision
and recall ratio of TIDAPQ. The participants, all of whom had answered that they had no prior knowledge
of the video content, gathered in the same classroom; 67 of them watched the TIDAPQ video honestly,
and 33 of them watched the TIDAPQ video while performing other activities, according to the director's
instructions. The sixty-seven people who watched the video honestly received Normal Watching Completion
(TRUE) from the TIDAPQ system, and the thirty-three people who did not watch properly, while engaging
in other activities, received Abnormal Watching (FALSE) from the TIDAPQ system.
Table 7. Precision and recall ratio.

Experimental Participants (100)                                   | TIDAPQ Results | Inter Ratio (Pass / Fail) | Precision | Recall Ratio | F1
Participants who watched the video honestly (67)                  | ALL TRUE (67)  | 63 / 4                    | 92.64     | 0.94         | 93.31
Participants who watched videos while doing other activities (33) | ALL FALSE (33) | 5 / 28                    | 87.5      | 0.84         | 85.71

F1 Score: Harmonized average value of Precision and Recall.
After watching the video, the participants had to solve the 10 questions immediately. Each question
was worth one point, and the questions addressed core content of the video. On the written test, the
criterion score for video watching was set at 8 points (Inter Ratio). Sixty-three of the 67 participants
whom TIDAPQ judged as normal watching scored 8 points or more on the written test, and twenty-eight
of the 33 participants whom TIDAPQ judged as abnormal watching scored less than 8 points. For the
participants who actually watched the video, the TIDAPQ watching completion judgement achieved
92.64% precision and a 94% recall ratio; for the participants who did not watch the video, it achieved
87.5% precision and an 84% recall ratio.
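As stated in the footnote of Table 7, the F1 values are the harmonized (harmonic) average of precision and recall; the short sketch below simply reproduces the two reported F1 scores from those values:

```python
# Sketch: reproducing the F1 values reported in Table 7 as the harmonic mean of the
# precision and recall ratio (both expressed in percent), per the table footnote.

def f1(precision_pct: float, recall_pct: float) -> float:
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

print(round(f1(92.64, 94.0), 2))  # 93.31 -> judgement performance for normal watching
print(round(f1(87.5, 84.0), 2))   # 85.71 -> judgement performance for abnormal watching
```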
Finally, the TIDAPQ watching completion judgement showed a TRUE judgement performance of
93.31% and a FALSE judgement performance of 85.71%. Therefore, TIDAPQ shows valid performance in
determining whether a video has been watched honestly.
5. Conclusions
This study presented the time interval different answer popup quiz (TIDAPQ) model to determine
whether viewers have watched an online video honestly. One hundred engineering university students
were recruited as research subjects and participated in the model verification.
Modern society has drawn attention to the technological development of telecommunications and
equipment, and the fourth industrial revolution, street advertising, and the surrounding environment
now deliver individual videos through smart devices; the forms of video have also diversified. Videos
have long provided us with much information through TV and video players; however, viewer ratings
were only investigated through surveys, with no interest in individual watching judgements. Later,
videos uploaded online, including news, movies, dramas, YouTube, and even personal videos, became
virtually infinite, but this situation also did not require individual watching judgement. Since 2015,
however, several companies and studies have attempted to judge video watching [18,20,31]. These
attempts appeared for the purpose of judging honest watching in situations where watching is made
mandatory even when viewers prefer otherwise. Education was the first area to attempt video watching
judgement, for the following reasons: with the development of technology, online education has also
developed, and programs have appeared that can bestow a degree through online classes alone, such as
the Academic Credit Banking System [1]. Education programs have also appeared in which classroom
classes are possible only after video learning has taken place, such as flipped learning [11,12]. In groups
with excellent self-directed learning abilities, such as Harvard and MIT, flipped learning using online
video watching works well [13,14], but in groups whose self-directed learning ability has not been
verified, especially in younger grades, it is difficult to proceed smoothly because of false watching [6,15].
The online video watching judgements currently in use have difficulty determining honest watching [7,8,21].
In addition, due to the influence of COVID-19 in 2020, online video learning has reached the realm
of elementary, middle, and high school public education around the world [5]; however, educational
materials are only distributed online, and it is difficult to know whether students have watched the online
learning videos [6]. Teachers want to know accurately whether video learning has been done faithfully
in order to design quality online classes, and in the current global situation, there is an urgent need to
judge online video watching. In the area of reward advertising, in which points are paid for watching an
advertisement, viewer judgement is also necessary [31]. However, no particularly successful method has
been found, and it was difficult to identify false viewers, so point payment based on advertising videos
has almost disappeared; instead, points are currently paid for viewing photos or web pages, trying out
services, signing up, and so on. If it becomes possible to judge video watching, reward advertising using
video clips is also expected to expand. In short, the judgement of video watching is a necessary technology,
but it is difficult to find research related to it.
In the online video watching judgement methods that are currently widely used, the video simply
stops and waits at each quiz obstacle, so viewers need only solve the quiz whenever they return to the
video. Additionally, even if viewers have been doing other activities, as long as the video has been played
from start to finish, the system misjudges the watching as complete. Because these methods misjudge
abnormal watching as normal watching, it is difficult to identify false viewers. Therefore, a technology
that clearly determines whether the video has been watched honestly is needed.
TIDAPQ, developed in this study, is a model that inserts two popup quizzes into the video, makes
watching judgements with the time interval pass (RESULT 1) and the individual/different correct answers
(RESULT 2), and then makes a comprehensive judgement on whether viewers were watching normally or
abnormally using these two results. First, TIDAPQ calculates the allowed time range using various
timepoints in the video, including the timepoint at which the quiz pops up and the timepoint at which it
disappears; if the viewer's event time falls within the calculated range, the time interval pass (RESULT 1)
is judged as TRUE. Second, the quizzes are randomly taken from the database and shown for a certain
period on the video screen. It is difficult to share the correct answer with colleagues because each quiz is
created using the unique number given to each viewer, so the correct answer differs depending on the
unique number. If the viewer writes the correct answer to all popup quizzes, the individual/different
correct answers (RESULT 2) is judged as TRUE. Finally, the comprehensive judgement uses RESULT 1
and RESULT 2 to determine whether normal watching is complete, and it automatically informs viewers
immediately after they watch the video (a simplified sketch of the time interval check is given below).
After being judged as watching complete, viewers can watch the parts they want intensively and
repeatedly without interference from the popup quizzes.
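As a simplified, hypothetical illustration of the time interval pass check summarized above (the exact allowed-range formulas appear earlier in the paper and are treated here as precomputed inputs), RESULT 1 can be viewed as requiring every answer event to fall within the allowed time range of its popup quiz:

```python
# Simplified sketch (assumed interface, not the authors' implementation): RESULT 1 is
# TRUE only if each answer event time falls inside the allowed time range calculated
# for the corresponding popup quiz.

def time_interval_pass(event_times, allowed_ranges):
    return all(start <= t <= end for t, (start, end) in zip(event_times, allowed_ranges))

# Example with two quizzes popping up at 180 s and 420 s, each answerable for 15 s.
allowed = [(180.0, 195.0), (420.0, 435.0)]
print(time_interval_pass([184.2, 427.9], allowed))  # True  -> RESULT 1 = TRUE
print(time_interval_pass([184.2, 455.0], allowed))  # False -> late answer on the 2nd quiz
```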
To measure the accuracy of TIDAPQ, research subjects were recruited from among freshmen at
domestic engineering universities, and 100 students without prior knowledge of the video were selected
to participate in the verification of the TIDAPQ model. The video used in the experiment contains
approximately 10 min of learning content, but TIDAPQ is not a watching judgement model exclusively
for the education sector; the length and content of the video can differ depending on the purpose of
watching. In addition, the number of popup quizzes used to validate the model was two, which is the
most basic form of TIDAPQ; the number of popup quizzes can differ depending on the length of the
video. As a result of the model verification, the performance of the normal watching judgement was
93.31%, and the performance of the abnormal watching judgement was 85.71%. These results show that
TIDAPQ performs well in video watching judgement.
The existing methods of video watching judgement share a common weakness: false viewers can
arise when videos are played while the viewer performs other activities [6–8,15–23]. A viewer who returns
to a video that stopped while they were doing other activities can simply answer the quizzes [17–20].
Even when a quiz is difficult, viewers can ask colleagues for the answers through a chat and submit them
whenever they want. A video that has merely been displayed on the monitor for a long time is counted as
one the student has focused on [17–20,22,23], and points keep accumulating even if only a face photo is
placed in front of the camera [24]. Learner analytics are well developed, but teachers still have difficulty
determining through extensive data analysis whether students actually watched the video [17–20,22,23].
In contrast, this study developed the TIDAPQ model to judge the honest watching completion of online
videos. What differentiates it from previous research is that it sets the allowed time range using the
viewer's event times and judges watching completion using individual/different correct answers, even
though all viewers receive the same question. TIDAPQ uses only the video itself, immediately calculates
the watching judgement from the viewer's events occurring while watching, clarifies the criteria for the
judgement, and presents a simple method of watching judgement.
TIDAPQ is a model developed to monitor watching by groups that must watch videos they may not
want to watch, but researchers should be aware that applying TIDAPQ to videos that are too long can
cause great stress to viewers because of the frequent popup quizzes. This paper has limitations in that the
number of participants in the sample was small and that no experiments have yet been performed at
actual video watching monitoring sites. We plan to verify the effectiveness of TIDAPQ by applying it to
sites where more participants and compulsory watching are required. Through this study, we expect
that TIDAPQ will be used properly in areas where watching completion judgements are needed, and we
hope there will be more research related to video watching judgement.
Author Contributions: E.Y.: Conceptualization, Data curation, Formal analysis, Methodology, Project
administration, Resources, Software, Supervision, Visualization, Writing-original draft; H.L. and J.J.: Validation,
Writing-review & editing. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC
(Information Technology Research Center) support program (IITP-2020-2018-0-01405) supervised by the IITP
(Institute for Information and Communications Technology Planning and Evaluation).
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Doosan Encyclopedia. Available online: http://www.doopedia.co.kr/ (accessed on 19 September 2020).
2. Eliezer, D.A.; Marcus, V.S.P. The change of education with the technology advancement. Int. J. Adv. Eng. Res. Sci. 2020, 7, 43–49.
3. Zhang, X.; Zeng, S. Research on application of blended learning activities based on seamless flip learning in higher vocational colleges. Theory Pract. Innov. Entrepreneurship 2020, 3, 156–158.
4. Jaechoon, J.; Wonhui, Y.; Kyuhan, K.; Heuiseok, L. Development of a game-based learning judgment system for online education environments based on video lecture: Minimum learning judgment system. J. Educ. Comput. Res. 2018, 56, 802–825. [CrossRef]
5. James, W. Coronavirus: 'Scenarios' Planned for Schools' September Return. BBC Politics. 28 June 2020. Available online: https://www.bbc.com/news/uk-wales-politics-53210382 (accessed on 19 September 2020).
6. Juntae, K. Learning Gap: School's New Coronavirus Challenge. The Korea Herald. 27 July 2020. Available online: http://www.koreaherald.com/view.php?ud=20200727000726 (accessed on 19 September 2020).
7. Playposit. Available online: https://go.playposit.com/ (accessed on 19 September 2020).
8. OfficeMix. Available online: https://mix.office.com (accessed on 19 September 2020).
9. Suh, M.O. The meta analysis of the effectiveness of flipped classroom. J. Educ. Technol. 2016, 32, 707–741. [CrossRef]
10. Minkyung, L. Case study on effects and signification of flipped classroom. J. Korean Educ. 2014, 41, 87–116. [CrossRef]
11. Spieler, B.; Grandl, M.; Ebner, M.; Slany, W. Bridging the gap: A computer science Pre-MOOC for first semester students. Electron. J. E-Learn. 2020, 18, 248–260. [CrossRef]
12. Van Alten, D.C.; Phielix, C.; Janssen, J.; Kester, L. Self-regulated learning support in flipped learning videos enhances learning outcomes. Comput. Educ. 2020, 158. [CrossRef]
13. Gungor, Y. Preparation before class or homework after class? Flipped teaching practice in higher education. Int. J. Progress. Educ. 2020, 16, 297–307. [CrossRef]
14. Fatih, S.Y.; Serkan, S. Flipped classroom implementation in science teaching. Int. Online J. Educ. Teach. 2020, 7, 606–620.
15. Feng, J.J. Research on the main problems and countermeasures of Flipped Classroom in college teaching practice. In Proceedings of the International Conference on Computer Engineering and Application (ICCEA), Ningbo City, China, 20 March 2020; IEEE Computer Society Digital Library: Washington, DC, USA, 2020. [CrossRef]
16. Requires, J.; Barrio, V.L.; Agirre, I.; Acha, E.; Bizkarra, K. Designing a flipped classroom in an industrial engineering master subject. In Proceedings of the 11th International Conference on Education and New Learning Technologies, Palma, Spain, 1–3 July 2019; Gómez Chova, L., López Martínez, A., Candel Torres, I., Eds.; Dialnet, Universidad de La Rioja: Logroño, Spain, 2019. [CrossRef]
17. Stigler, J.; Geller, E.; Givvin, K. Zaption: A platform to support teaching, and learning about teaching, with video. J. E-Learn. Knowl. Soc. 2015, 11, 13–25. [CrossRef]
18. Zaption. Available online: http://zapt.io/ttnkgsq2 (accessed on 28 October 2015).
19. Educannon. Available online: https://www.educanon.com (accessed on 28 October 2015).
20. Workday. Available online: http://www.workday.com (accessed on 28 October 2015).
21. OfficeStream. Available online: https://www.microsoft.com/ko-kr/microsoft-365/microsoft-stream (accessed on 19 September 2020).
22. Hava, E.V.; Paz, B.A. Khan Academy effectiveness: The case of math secondary students' perceptions. Comput. Educ. 2020, 157. [CrossRef]
23. Yassine, S.; Kadry, S.; Sicilia, M.A. Statistical profiles of users' interactions with videos in large repositories: Mining of Khan Academy repository. Korean Soc. Internet Inf. 2020, 14, 2101–2121. [CrossRef]
24. Edmodo. Available online: https://new.edmodo.com/?go2url=/home (accessed on 13 October 2020).
25. Moodle. Available online: https://moodle.org/ (accessed on 13 October 2020).
26. Blackboard. Available online: https://www.blackboard.com/ (accessed on 13 October 2020).
27. Schoology. Available online: https://www.schoology.com/ (accessed on 13 October 2020).
28. Brightspace. Available online: https://www.d2l.com/ (accessed on 13 October 2020).
29. Litmos. Available online: https://www.litmos.com/ (accessed on 13 October 2020).
30. TalentLMS. Available online: https://www.talentlms.com/ (accessed on 13 October 2020).
31. Woongchun, O.; Taeho, K.; Noyoon, K. Eye detection method using geometrical features between eyebrows and eyes in smart phone. In Proceedings of the Korean Society of Broadcast Engineers Autumn Conference, Seoul, Korea, 7 November 2014; Volume 11, pp. 41–44.
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).