Patterns of quiz attempts in a MOOC. The full-points-pattern and other
patterns on the way to a successful MOOC in a lecture setting
Bettina Mair
Graz University of Technology
Austria
office@bettina-mair.at
Sandra Schön
Graz University of Technology
Austria
sandra.schoen@tugraz.at
Martin Ebner
Graz University of Technology
Austria
martin.ebner@tugraz.at
Sarah Edelsbrunner
Graz University of Technology
Austria
sarah.edelsbrunner@tugraz.at
Philipp Leitner
Graz University of Technology
Austria
philip.leitner@tugraz.at
Angela Schlager
Private Pädagogische Hochschule Augustinum
Austria
angela.schlager@pph-augustinum.at
Martin Teufel
Pädagogische Hochschule Steiermark
Austria
martin.teufel@phst.at
Stefan Thurner
Graz University of Technology
Austria
stefan.thurner@tugraz.at
Abstract: The analysis of learner data in MOOCs provides numerous opportunities to look for patterns that may indicate participants' learning strategies. In this article, we investigated how participants in a MOOC (N=1,200), in which they must successfully complete a quiz in each unit, deal with the fact that they can repeat this quiz up to five times. On the one hand, patterns can be identified regarding the success of the quiz attempts: For example, 32.7% of the course participants always repeat the quizzes up to a full score, while about 16.0% of the participants repeat, but only until they pass all quizzes. Regarding the number of attempts, independent of the success, there is only a uniformity in "single attempt"; 12.6% of the participants take exactly one attempt at each of the quizzes in the MOOC. An analysis of a subgroup of 80 learners who were students in a course where the MOOC was obligatory shows that the proportion of learners attributed to patterns with more attempts is generally larger. It can also be shown that learners who use several attempts, even after achieving a full score, tend to achieve better exam results. The article concludes by discussing how these patterns can be interpreted and how they might influence future MOOC developments.

Preview version of this paper. Content and pagination may change prior to final publication.
EdMedia + Innovate Learning 2022 - New York City, NY, United States, June 20-23, 2022

Draft - originally published in: Mair, B., Schön, S., Ebner, M., Edelsbrunner, S., Leitner, P., Schlager, A., Teufel, M. & Thurner, S. (2022). Patterns of quiz attempts in a MOOC. The full-points-pattern and other patterns on the way to a successful MOOC in a lecture setting. In T. Bastiaens (Ed.), Proceedings of EdMedia + Innovate Learning (pp. 1169-1179). New York City, NY, United States: Association for the Advancement of Computing in Education (AACE). Retrieved July 13, 2022 from https://www.learntechlib.org/primary/p/221430/
1. Introduction
MOOC designers have some ideas on how learners should learn within a MOOC and create “learning
paths” with the help of platform features (such as a discussion forum), their instructional media (video and texts) and
assignments (for the discussion forum or quizzes). Of course, learners do not always follow such paths, i.e. they do
not necessarily proceed video by video or unit by unit. Nevertheless, learners tend to be more successful if they follow the suggested
paths in MOOCs (Davis et al., 2016). The following implementation of quizzes in a MOOC for learning assessment
and certification is not unusual (Chauhan & Goel, 2016): In the case of the national Austrian MOOC platform
iMooX.at, quizzes are typically the last part of each unit. The learners can use them to check whether they learned
the basics of the unit. A positive quiz result is also the base for the final participant certification. As quizzes have a
double role within the MOOC, participants can repeat them up to 4 times (so 5 trials in total), and the highest grade
is taken for the final certificate (if wished or needed); at least 75 percent of the points are needed to pass a quiz.
When studying learners' activities in several of our MOOCs, we stumbled upon the fact that there might be different
behavior concerning the quizzes: Some participants for example repeat them several times in a short time frame,
others make only single attempts. Khalil & Ebner (2015) have already described anecdotal patterns of behavior in
a MOOC at iMooX.at (N=269): “Lastly, our observations show that almost the majority of participants are
performing better with each quiz attempt. Usually, they stop when their score meets the required passing grade.
Nevertheless, others liked to accept the challenge and took the chance to receive the full mark” (p. 1221). In a later
analysis of another MOOC where we interpreted quiz results as feedback for learners and made an exploratory
analysis of this data (Schön et al. 2021, N=1,484), we have recognized similar activities.
For this contribution, we decided to investigate whether there are such different behavioral patterns or
strategies around quiz activities, how they can be described, and how often they occur. We therefore aim for better
insights into how learners use and deal with the learning setting of a certain MOOC. So, we see the contribution as
an analysis of the extent to which predefined learning structures are used differently (strategically) by MOOC users.
Additionally, we anticipate such potential patterns as a resource for future applications of learning analytics, which
might give learners reasonable feedback for their learning behavior (see Greller & Drachsler, 2012).
2. Learning strategies in MOOCs concerning quiz results
Chauhan & Goel (2016) distinguish two types of quizzes in MOOCs: (a) independent quizzes as an
“autonomous unit” for practice and/or assessment purposes and (b) embedded or in-video quizzes which “facilitates
a learner to test their knowledge without worrying about the effect of evaluation on final grading”. In the case of the
iMooX.at platform, the successful completion of all quizzes (at least 75 percent of the points in each quiz) is the
prerequisite for obtaining a certificate of attendance. To ensure that the quizzes do not only have the character of an
exam but are also perceived as learning support and self-monitoring, it is therefore possible to repeat the quizzes.
Learners appreciate quizzes in MOOCs because they feel that they are well supported in their learning, at least this
has been shown to be true for the quiz feature in Coursera (Anggraini et al. 2018).
Quiz results or feedback in MOOCs can be seen as part of the topic of learning analytics: Data analyses,
interpretations, and applications to support individual learning processes are assigned to the area of "Learning
Analytics" (Greller & Drachsler, 2012). As quiz results are easy to count and analyze, they are part of the basic
elements of a learning management system or MOOC platform. It might be of interest to consider them as helpful
criteria to support and guide learners in a MOOC. Although the quiz results and feedback seem to be an obvious
criterion for learners’ support, there are not many studies on the topic (see also Schön et al., 2021): We found some
studies analyzing the relation of quiz activities or results as predictor of the final MOOC success or future activities.
For example, Ren, Rangwala and Johri (2016) apply a multi-regression model to predict performance on the
final MOOC assessment, which is a homework task. Data of learners’ activities (server logs from EdX courses), for
example the number of quizzes or number of logins, was used as a basis for analysis. According to this study, the
number of quizzes passed is the strongest predictor for the final assessment, followed by the percentage of videos
watched (p. 489). Admiraal, Huisman & Pilli (2015) were able to show that quizzes and peer assessment in a
MOOC predict the final grade better than self-assessments.
There are also not many contributions on the question of (learning) strategies or patterns of learner
behaviour in MOOCs. Milligan, Littlejohn & Margaryan (2013) describe learner types in connectivistic MOOCs, as
they are also known from social media applications: active participation, lurking, and passive participation. Similarly, Hill (2013) distinguishes four types of participants in MOOCs depending on their behavior:
(a) lurkers enroll but only observe some items, if any, (b) drop-ins are partly active but do not attempt to complete, (c)
passive participants in the course act as consumers and do not actively take part in discussion and (d) active
participants are active throughout the whole course. However, there is no empirical evidence presented by Hill to
support his typology. Berger et al. (2014) show major differences in behavior of participants in their study about the
MIT-based edX course Circuits and Electronics: 76 percent of participants account for only 8 percent of the time
spent on the course while 7 percent accounted for 60 percent. Anderson et al. (2014) also classified MOOC
participants depending on their behavior. In short, they base their taxonomy on styles of engagement in relation to
viewing a lecture and handing in assignments. In conclusion, they give five categories for styles of engagement.
Viewers mainly only watch lectures without handing in assignments. Solvers are the opposite type, where
participants only hand in assignments and view none or just a few lectures. All-rounders, as the name suggests, are
somewhere in between these types. These types are completed by collectors and bystanders. The first ones tend to
just collect all the materials they can get from the MOOC, while not handing in assignments or watching many units.
Bystanders account for the portion of participants whose total activity is very low. This classification further proves
that there are a multitude of interaction types in MOOCs and that a simple expression like completion or viewing
rate cannot be enough to judge the success of a course. In addition to this taxonomy, Anderson et al. (2014) also note that motivations or intentions are another essential factor for a learner taxonomy. Their analysis of the final grades of a machine learning course from Coursera shows that, on the one hand, about 40% of all participants
received a grade of 0 and therefore did not complete the course. However, many of these students were assigned to
the viewer type and therefore did put non-trivial amounts of work into the course. On the other hand, about 10%
achieved a perfect score in the same course. These near-perfect students are assigned either to the solver or the all-rounder class. Overall, Anderson et al. (2014) concluded that MOOCs cannot be understood as “online analogues of
traditional offline university classes”, but they “come with their own set of diverse student behaviors”. There are
also other contributions that deal with different learning behavior and learning strategies in MOOCs: Littlejohn et al.
(2016) take a closer look at the differences learners have in a MOOC in terms of self-regulated learning (SRL) and
how they describe and perceive themselves. Rieber (2017) again looks at different motivations to participate in a
MOOC, which learning opportunities participants find helpful and how this affects behavior in the MOOC. Cohen et
al. (2019) analyze different learner behaviors in the forum of a MOOC, including the intensity of exchanges and the
topics. They also identify five types, ranging from the "super actives" who write numerous posts and tag them, for
example, to those who do not participate at all. Rizvi et al. (2020) describe further research and also present an
analysis of learner behavior in MOOCs concerning the progress marker, distinguishing “markers”, “partial-markers”
and “non-markers”.
3. Methodology: Research question, approach, and background
We would like to answer the following two research questions in this paper: (a) Are there any patterns in
the behavior of MOOC participants regarding quizzes and their repetition in a MOOC? If so, what are they? (b) Do
course results differ for students with different quiz patterns in the MOOC? Details of the approach and methods, as
well as the background of the specific MOOC, are described below.
There are no specific contributions to the possible behaviors yet, nor could we derive from existing theories
of learning behavior concerning quizzes. Therefore, we took an exploratory approach. Firstly, we chose a specific
MOOC to better understand and interpret certain patterns and behaviors from the MOOC context. To give more
details, we chose a MOOC which is part of a course offered by six Austrian universities within the framework of
teacher training, i.e., successful MOOC participation is a prerequisite for the course. Secondly, we used the data
profiles from our MOOC platform, listed quiz behavior by learners and discussed (potential) patterns. In an iterative
process, we analyzed the data and checked if and how we could sort the data and identify other learners with similar
or same quiz activities. To this end, we repeatedly adapted the model criteria and descriptions and tried to find
overlap-free classifications. In addition to this specific exploratory data analysis, we described general user activities
using descriptive statistics, and then evaluated this information again for individuals with the different patterns to get
a better picture of learners and their (other) behaviors in the MOOC. Besides the handling and sorting of the data,
the visualization of the quiz behavior pattern is not trivial. There are some visualizations that we found helpful in
this regard (Coffrin et al., 2014, Hill, 2013, Judd & Kennedy, 2004; Davis et al., 2016). Thirdly, with a research
cooperation we were able to use data from 117 students of two partner universities, which are the University College
of Teacher Education Styria and the Private University College of Teacher Education. For them, the MOOC was
an obligatory part of a course, and we received their final exam points. We were able to clearly link 80 of them to the data in the MOOC. This data is the basis for a likewise exploratory analysis of how quiz patterns might be connected to learning results.
The platform and the quiz implementations differ from others: iMooX.at is the Austrian MOOC platform,
hosted by Graz University of Technology (TU Graz). Established in 2013, based on the open-source system Moodle,
it serves as a nationwide and European-wide platform for online courses following the xMOOC design concept. The
course language is mostly German, but there are also several courses in English and other languages. Currently,
iMooX.at counts about 50,000 registered users and has hosted more than 200 MOOCs so far. A unique characteristic
of iMooX.at is that all courses are licensed as open educational resources (in short OER, Schaffert & Geser, 2008)
and therefore available under CC licenses, so it is possible to (re-)use as well as to modify them (Ebner et al., 2016).
Several MOOCs are part of lectures at universities or provided by partner universities, so that a variety of design
concepts such as blended MOOCs or pre-MOOCs (Ebner, Schön & Braun, 2020) or a design alternative coined as
Inverse Blended MOOC (Ebner & Schön, 2019) have already been explored. At iMooX.at the quizzes are a key for
learning assessment and certification. These are the conditions regarding the quiz in the analyzed MOOC, which are
typical quiz settings on the platform: each unit has a quiz; quizzes can be done up to 5 times; best result is counted;
and at least 75% quota for each quiz is needed for certificate.
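As a minimal sketch, the quiz and certificate rules just described can be expressed in a few lines of Python. The function names and the data shape (a dict mapping quiz number to a list of attempt scores) are our own illustration, not the platform's actual data model:

```python
# Sketch of the quiz rules described above (illustrative only):
# up to 5 attempts per quiz, the best attempt counts, and a quiz is
# passed with at least 75% of the 10 available points (i.e. 7.5).

MAX_ATTEMPTS = 5
PASS_THRESHOLD = 7.5  # 75% of 10 points

def best_score(attempts):
    """The best attempt counts toward certification."""
    return max(attempts) if attempts else 0.0

def eligible_for_certificate(quiz_attempts, n_quizzes=6):
    """A certificate requires passing every quiz in the course."""
    if len(quiz_attempts) < n_quizzes:
        return False  # at least one quiz was never taken
    return all(
        len(a) <= MAX_ATTEMPTS and best_score(a) >= PASS_THRESHOLD
        for a in quiz_attempts.values()
    )

# Example: a learner who passed all six quizzes, one of them narrowly
learner = {q: [10.0] for q in range(1, 6)}
learner[6] = [6.0, 7.5]
print(eligible_for_certificate(learner))  # True
```

Note that a single quiz below the threshold, or one quiz never attempted, is enough to make a learner ineligible, which is exactly the grouping used in the analysis below.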
Within this contribution we analyze the data of the MOOC “Lehren und Lernen mit digitalen Medien II”, a
MOOC delivered in academic term winter 2020/2021 addressing the topic “Teaching and learning with digital
media” (available at https://imoox.at/course/LULIIWS20). The MOOC was available for free and open to everyone
who wanted to participate and counted 1,200 registered participations at the end of June 2021. However, the
development focused on the main target group, future teachers. The MOOC was co-designed by lecturers from six
Austrian universities and integrated into their regular lecture system in the second year of studies. The MOOC
served as the “lecture” part in a combined “lecture with practical” setting at all partner universities. In other words,
the universities asked their students to join the MOOC and additionally take part in a face-to-face practical course.
The final grading is based on a final exam with partly the same multiple-choice questions as in the MOOC and a
piece of project work. The design of the first part of the MOOC (“Lehren und Lernen mit digitalen Medien I”) is
very similar (see Ebner et al., 2020). For previous MOOC implementations, analyses of various kinds have already
been carried out and already published (Schön et al., 2021, Ebner et al., 2020, p. 73, Lipp et al. 2021).
The lecture, which uses the MOOC, was conducted for the second time in winter semester 2020/2021, but
because of the Covid-19 measures, it was conducted mainly as an online course, meaning that the exercises, except
for the first session, also took place online. The MOOC itself had the following units and structure: Unit 1: Introduction; Unit 2: Single-board computers, interdisciplinary teaching with digital technologies, iterative thinking and algorithmizing; Unit 3: Technology-supported problem solving, designing multimedia and interactive teaching and learning materials; Unit 4: Web-based information systems for teaching and learning, e-assessment; Unit 5: Effects of digital media in the classroom; and Unit 6: Educational technology - quo vadis, open and participatory teaching and learning concepts. In total, six quizzes as well as 20 videos were provided. The project work deals with
the idea to prepare a lesson with a microcontroller (in our case the Microbit) in a non-computer-science subject in a
group of up to four students. The students have to prepare a lesson template as well as a solution to the task (e.g. the
program for the Microbit) and present it in the final lecture.
4. Results
4.1 General Participation
The MOOC started in October 2020 and was available online for self-study until early July 2021. Within
this timeframe, 1,205 registrations for this MOOC and 475 successful participations were counted. For our analysis,
the data from the end of June 2021 were taken.
4.2 General quiz behavior
At the end of June 2021, 605 of the 1,200 registered course participants had taken at least one quiz (50.4%)
and 472 people (39.3%) had taken all 6 quizzes. 595, that is 49.6% of the 1,200 registered course participants, did
not complete a single quiz. Fig. 1 gives an overview of the number of people who finished the quizzes successfully.
Figure 1: Attempts of quizzes: Distribution over the six quizzes (in relation to 1,205 registered participants).
Source: iMooX.at creator’s dashboard. Please note: The data used in our following analysis slightly deviates
from this image: Quiz 1: 564; Quiz 2: 507; Quiz 3: 494; Quiz 4: 491; Quiz 5: 484; Quiz 6: 473
As described, there are up to 5 attempts possible for each quiz, which is a maximum of 30 possible attempts
throughout the entire course. In sum, we count 3,069 quiz attempts, so on average, everyone who has taken quizzes
made 5.07 attempts throughout the whole course. We have now analyzed across all quizzes how often attempts were
concluded with 10 points (green in Fig. 2), successfully but without full points (blue), or unsuccessfully. Regardless of
the result, the quizzes can be repeated up to five times. This was rarely done when 10 points were achieved, but if
full marks were not achieved, another attempt was very often started. We sorted and visualized the attempts and
results for all quizzes. Figure 2 shows that the proportion of people with full marks at their first attempt is still quite
similar in the second attempt, but by the third attempt it is already very high: Around 53% of those who made a third
attempt complete this attempt with full marks. There are only very few who make five attempts but still do not pass
the quiz. In absolute numbers, only one person failed quiz 1 despite 5 attempts and only two people failed quiz 2
despite 5 attempts.
Figure 2. Overview of (up to five) quiz attempts and their results across all six quizzes in the MOOC (n=605).
The percentages are the mean values from all six quizzes.
4.3 Identified patterns and quantitative analysis of their frequency concerning failure and success
In this section, we describe the found patterns of quiz behavior. As a first step in identifying possible
pattern trajectories of completing each quiz attempt in the MOOC, we differentiated all course participants into
three groups as in our description of the general quiz behavior above: Group A - Full Score: 10 points in each quiz,
Group B - Passed: at least 7.5 points in each quiz, and Group C - Failed: less than 7.5 points in at least one quiz or
no participation in at least one quiz. Based on our observations of other MOOC quiz results in the past, we then
looked for a presumed pattern of quiz behavior within these groups. Are there certain groups of people that always
work consistently toward a specific result? We also investigated the question of whether there are patterns in which
a certain number of attempts is always made in each quiz.
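Schematically, the three-way grouping described above can be sketched as follows (a sketch with hypothetical names, assuming each quiz is scored out of 10 points, the best attempt counts, and every listed quiz has at least one attempt):

```python
PASS_THRESHOLD = 7.5  # 75% of 10 points
FULL_SCORE = 10.0

def classify_group(quiz_attempts, n_quizzes=6):
    """Assign a participant to group A, B, or C as described above.

    `quiz_attempts` maps quiz number -> list of attempt scores
    (an illustrative data shape, not the platform's).
    """
    if len(quiz_attempts) < n_quizzes:
        return "C"  # missed at least one quiz entirely
    best = [max(a) for a in quiz_attempts.values()]
    if any(b < PASS_THRESHOLD for b in best):
        return "C"  # failed at least one quiz
    if all(b == FULL_SCORE for b in best):
        return "A"  # full score in every quiz
    return "B"      # passed every quiz, but not all with full score
```

The groups are deliberately mutually exclusive, which makes the pattern frequencies reported below add up cleanly.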
Group A - Full Score: 10 points in each quiz
Of all the people who made attempts in each quiz (n=472), 219 people (46.4%) scored 10 points on all
quizzes after one or more attempts. Theoretically, each participant always has 5 attempts available for each quiz,
even if he/she has already achieved the full score earlier. Since the best attempt is the result and for practice reasons,
it could well be assumed that course participants will make further attempts even after achieving the full score once.
Regarding this fact we analyzed how often participants do not make any further attempt after they have reached the
full points once and how often they keep on going even though they have already reached 10 points. Within our
data, the vast majority of the 219 people with 10 points in each quiz, 198 people (90.4%) stopped taking further quiz
attempts once they reached 10 points. Figure 3 illustrates this pattern I (“10 points are enough”) schematically.
Figure 3: Attempts until 10 points are reached (“10 points are enough”)
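The “stop once full points are reached” behavior can be checked per quiz roughly as follows (the function name and data shape are our own illustration):

```python
def stops_after_full_score(attempts, full=10.0):
    """True if the learner reached the full score in this quiz and
    made no further attempt afterwards (pattern I behavior, sketch).

    `attempts` is the chronological list of scores for one quiz.
    """
    if full not in attempts:
        return False
    return attempts.index(full) == len(attempts) - 1
```

A learner matches pattern I when this check holds for every quiz in the course; a later attempt after a full score instead indicates pattern II, described next.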
But there are some people who also make further quiz attempts after 10-point quizzes, in which they then
also score worse in some cases: These are 21 people in total, and they show a certain consistency in their behavior:
15 of these 21 people have done so in all six quizzes: they have thus scored 10 points in all quizzes but (later)
carried out further quiz attempts. Another 2 of these 21 people have registered at least one additional attempt after
reaching the full score in 5 of the 6 quizzes. The one time they broke the pattern, they almost certainly did so
because they did not reach 10 points until the last attempt, so they had no further attempts left. Figure 4 is a slight modification of the pattern illustrated in Figure 3, as here the participants at least once repeated a quiz in which they had already scored 10 points. We call this pattern II “training with a quiz is more important than 10 points”.
Figure 4: Attempt pattern “training with a quiz is more important than 10 points”
Group B - Passed: at least 7.5 points in each quiz
252 (53.4%) of the 472 people who participated in all 6 quizzes passed each quiz but did not score the full
points in each quiz. According to the data of group A, the score histories of these participants were examined to see
whether attempts were only made until the quiz was passed (as a minimum of 7.5 points) or if they carried on after
passing the quiz once. Again, different patterns emerge: 97 participants make attempts until they have passed the
respective quiz for the first time and then stop. We call this pattern III “minimal success”; it is illustrated in Fig. 5.
Figure 5: Attempt pattern “minimal success”
Then, there are several people who did not stop immediately after a successful attempt in each quiz. So,
they made further attempts without always getting full marks in the last quiz (as opposed to the people in group A).
This pattern IV called “passing is ok – but better result would be nice” is shown in Fig. 6. 42 participants made
another attempt after an already successful attempt in just one of the six quizzes. So, they broke pattern III just once.
This often happened when the first pass was a very close pass (between 7.5 and 8 points). 113 of the participants
made one or more further attempts in two or more quizzes even after passing for the first time. If we look at the
score progression, we see that even this group might work towards 10 points, but this strategy was not consistently
pursued in all 6 quizzes (as opposed to group A). It seems that 8 of these 113 people always worked towards 10
points but had no attempts left to complete that goal. 7 of them scored 10 points in 5 of the 6 quizzes, and in the one quiz where they did not score 10 points, all 5 attempts were used. One person scored 10 points in 4 quizzes and used all 5
attempts in the other two quizzes. So perhaps they just failed in scoring 10 points although they strived to achieve a
full score.
Figure 6: Attempt pattern “passing is okay - but a better result would be nice”
Group C - Failed: less than 7.5 points in at least one quiz or no participation in at least one quiz
Finally, we looked at the group of 134 people who did not successfully complete the overall course. There
may be two different reasons for this. On the one hand, it may be that a person simply did not take all six quizzes
and therefore failed (since all six quizzes must be completed with at least 7.5 points for successful completion of the
course). On the other hand, a person cannot successfully complete the course if he or she took all quizzes but scored
less than 7.5 points in at least one quiz. In fact, in our MOOC, only one person who took all six quizzes failed one
quiz and thus failed overall. This person failed quiz 2 and used all five possible attempts. So, we can assume that
this person has been working towards a successful completion of the whole quiz but failed. Furthermore, this person
could probably be assigned to pattern III, since in each of the remaining five quizzes the attempts immediately
stopped when the respective quiz was passed (greater than or equal to 7.5 points). The other 133 people in Group C
did not take part in every quiz and therefore failed the whole MOOC. 83 of these people (63.2%) took part in only one quiz, and 55 people (41.4%) made just one attempt in the entire course. We can assume that those
people were not really interested in a successful completion of the MOOC and just gave it a try. There is no
discernible pattern to their actions.
4.4 Identified patterns and quantitative analysis of their frequency concerning a fixed number of attempts
While searching for the patterns described above, we came across another possibility of a specific quiz
approach. We conjectured that the participants may always use a certain number of attempts in all quizzes regardless
of their respective performance in the quiz. The analysis revealed that a total of 76 of the 472 participants who did
all quizzes (16.1%) made only one attempt in each of the 6 quizzes and passed every quiz - so “one single attempt is enough”.
Figure 7 illustrates this simple pattern V.
Figure 7: The single attempt pattern
If we look at the quiz performance of these 76 people in Table 4, we see that not even one of these
individual attempts was negative (< 7.5 points). All attempts registered a score of 7.5 or higher, so none of these 76
people failed any quiz. The situation is different concerning a potential strict approach to only make more than one,
but a certain number of attempts per quiz. The data analysis revealed that in the present MOOC not a single person
took 5 attempts in all 6 quizzes. Thus, not a single course participant made use of the full number of available
attempts. Furthermore, among the 472 participants who took all 6 quizzes, there was only one person who always
took 2 or 3 attempts and only 2 people who took 4 attempts in each quiz. So, we could not see a pattern regarding 2 to 5 attempts in each quiz.
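The fixed-attempt-count check described in this section can be sketched like this (again with hypothetical names and our own data shape):

```python
def fixed_attempt_count(quiz_attempts, n_quizzes=6):
    """Return k if the learner made exactly k attempts in every quiz
    of the course, otherwise None (a sketch of the check above).

    `quiz_attempts` maps quiz number -> list of attempt scores.
    """
    if len(quiz_attempts) < n_quizzes:
        return None  # did not take all quizzes
    counts = {len(a) for a in quiz_attempts.values()}
    return counts.pop() if len(counts) == 1 else None

# Pattern V ("one single attempt is enough") corresponds to k == 1.
```

In the data reported above, only k = 1 occurs often enough to count as a pattern; no participant used the full k = 5 in every quiz.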
4.5 Relation of patterns and overlaps in numbers
Thus, by looking at and sorting the data, we have identified different groups of patterns. There are clear
demarcations between the patterns related to the result (best quiz attempt) (all with 10 points, all successful but less
than 10 points or at least one unsuccessful attempt, i.e., Pattern I - IV). We also found a pattern of people who
always made only one attempt in each quiz (Pattern V). The latter pattern overlaps with two groups that relate to the
success of the quiz. We have illustrated this relationship in Figure 8.
Figure 8: Different groups of patterns were identified and found
The participants with pattern V, i.e., one single attempt per quiz, can be assigned to 2 of the 3 groups listed above: (a) Group A (10 points in each quiz): 44.7% of the 76 participants who used only one attempt in each quiz achieved the full score - i.e., 10 points - in all 6 quizzes with this one attempt. (b) Group B (at least 7.5 points in each quiz): 55.3% of the 76 participants who used only one attempt in each quiz passed every quiz (7.5 or more points) but did not achieve the full score in all 6 quizzes. More precisely, pattern V (One single attempt is enough) overlaps with the two described patterns I (10 points are enough) and III (Passing is enough): 17.7% of the participants in pattern I and 43.3% of those in pattern III can also be assigned to the one-attempt pattern. The one-attempt pattern would also be possible in group C (i.e., only one attempt, which failed), but there is no such case in the MOOC.
4.6 The five patterns briefly
The following Table 1 presents the main descriptive characteristics of the five identified patterns in terms of the average number of quiz attempts, the average highest score, and the proportion of all participants in the MOOC.
Table 1: The identified five patterns and their average quiz attempts, highest score, and percentage of participants (n=605, everyone with at least one quiz)

Pattern | Description                                            | Average number of quiz attempts | Average highest score | Percentage of participants
I       | 10 points are enough                                   | 1.9                             | 10                    | 32.7%
II      | Training with a quiz is more important than 10 points  | 3.5                             | 10                    | 3.5%
III     | Minimal success - passing is enough                    | 1.4                             | 9.38                  | 16.0%
IV      | Passing is ok - but a better result would be nicer     | 2.3                             | 9.63                  | 25.6%
V       | One single attempt is enough                           | 1                               | 9.80                  | 12.6%

Preview version of this paper. Content and pagination may change prior to final publication.
4.7 Quiz patterns and course success: Insights from a subgroup of students
Some of the MOOC participants provided us with data on their learning success in the course in which the
MOOC was the lecture part (see section 3.5). These were 117 students from two University Colleges of Teacher
Education who did the whole lecture. All students passed it. We were able to clearly link 80 of them to the data in
the MOOC. Table 2 presents the distribution of quiz patterns of these students within the MOOC and their course
result amongst these 80 students.
Table 2: Quiz patterns of all MOOC participants and of the 80 students who successfully completed a course with the MOOC as obligatory lecture part (best exam result: 100 points).

Group          | all (n=605) | subgroup (n=80) | Pattern | all (n=605) | subgroup (n=80) | Exam results of subgroup (mean) | 95% CI lower bound | 95% CI upper bound
A - Full score | 36.2%       | 55.0%           | I       | 32.7%       | 43.8%           | 80.12                           | 76.89              | 83.34
               |             |                 | II      | 3.5%        | 11.3%           | 88.32                           | 80.67              | 95.97
B - Passed     | 41.6%       | 45.0%           | III     | 16.0%       | 11.3%           | 77.32                           | 69.77              | 84.86
               |             |                 | IV      | 25.6%       | 33.8%           | 81.70                           | 77.73              | 85.68
C - Failed     | 22.2%       | 0.0%            | none    | 22.2%       | 0.0%            | -                               | -                  | -
Sum            | 100.0%      | 100.0%          | Sum     | 100.0%      | 100.0%          |                                 |                    |
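The subgroup means and 95% confidence intervals of this kind can be reproduced from raw exam scores. A minimal sketch using a normal approximation (the paper's intervals may instead be t-based, which would be slightly wider for subgroups this small); the example scores are made up for illustration:

```python
from statistics import mean, stdev, NormalDist

def ci95(scores):
    """Mean and normal-approximation 95% confidence interval of the mean."""
    m, s, n = mean(scores), stdev(scores), len(scores)
    half = NormalDist().inv_cdf(0.975) * s / n ** 0.5   # z quantile, approx. 1.96
    return m, m - half, m + half

# Made-up exam scores (0-100) for one pattern subgroup, not the study's data
m, lo, hi = ci95([78, 82, 85, 74, 80, 81, 79, 83])
print(f"mean={m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The wide interval for pattern II in Table 2 (80.67 to 95.97) reflects exactly this effect of a small subgroup size on the standard error.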
As can be seen, the distribution of the identified quiz patterns differs in this subgroup. In particular, these students show bigger shares of the full-score patterns than MOOC participants in general. They also make additional attempts after a full score (pattern II) or after passing (pattern IV) more often than MOOC learners overall. Interestingly, the learners in pattern I do not show the high average exam result one might expect. This leads to the suspicion that some of them are learners who cheated on the quizzes. Additionally, we see that pattern II is connected to a higher final grade in the lecture. This could be interpreted to mean that learning with the quizzes helps with the final course exam; since questions from the MOOC are also used in the exam, this is not very surprising. At the same time, it could also be that students who continue to practice with the quizzes even after full scores are particularly ambitious learners.
5. Summary and discussion
Finally, we would like to summarize the answers to the research questions and then discuss them. Based on the data available to us on the quiz attempts and results in the selected MOOC, five patterns have emerged. These are specific sequences of attempts and results that are detectable in the same way for several participants. We were interested in patterns that led to successful completion of the MOOC. We were able to identify four patterns in relation to the success of the attempts; in terms of the frequency of attempts, we could only see one pattern, that of single attempts (see Table 4).
A subgroup of 80 learners in the MOOC were students in a course where participation in the MOOC was obligatory and where we could combine their quiz pattern with their exam result. There we can see that the proportion of learners attributed to patterns with more attempts is generally bigger. It can also be shown that learners who use several attempts, even after a full-score result, tend to get better exam results.
In general, we think that our approach and results are original and may give interesting insights into how (different) people use MOOC infrastructure and learning opportunities. Also, not all MOOC platforms use the same approach: quizzes that can be repeated and that also count towards the final assessment are not usual. However, this raises some questions for further use and investigation. The first question is whether these patterns of behavior might also be patterns of strategies. Throughout this article we have only talked about patterns of behavior, and the question arises whether there are also certain intentions and strategies behind them - whether the participants are aware of these patterns and implicitly or explicitly pursue strategies such as "as little effort as possible" or "always 10 points". A second question might be whether these patterns are distinctive and original (enough). The first four patterns all describe variants of behavioral patterns in relation to success, i.e., each participant can be assigned to exactly one of them. However, the one-attempt pattern in particular shows that there are also corresponding or competing patterns. In this context, we cannot say which is the stronger pattern in each
case. We do not know whether the people in question were primarily trying to achieve a certain score or whether they were just strictly using one attempt in each quiz. A third question touches on the generalizability of the results, for example concerning the frequency of these patterns amongst quiz participants or MOOC participants in general. This has to be questioned critically for several reasons. Most participants were students for whom participation was obligatory, so there was a particularly high motivation to pass the quizzes. Also because of this, the present MOOC was completed very successfully by most participants (there are hardly any people who failed a quiz and many who scored 10 points in all quizzes). Furthermore, such strategies might be culturally influenced: Papathoma et al. (2015) emphasize that - as qualitative interviews show - differences in the perception of different assessments in MOOCs, especially in the perception of automatic feedback or peer assessment, may also be related to different cultural backgrounds and previous experiences.
6. Outlook
We are encouraged by the results of our research to investigate to what extent we can find the identified
patterns in the other MOOCs on the iMooX.at platform, and in what distribution. At the same time, we are aware
that interviews are also needed to determine whether and in what way participants are aware of corresponding
behaviors or use them strategically. Such results would of course be very exciting in comparison with other
MOOCs.
Acknowledgement and Funding Information
Contributions and development were partly delivered within the project “Learning Analytics: Effects of
data analysis on learning success” (01/2020-12/2021) with Graz University of Technology and University of Graz as
partners and the Province of Styria as funding body (12. Zukunftsfonds Steiermark). We would especially like to
thank the colleagues and students at the University College of Teacher Education Styria (Pädagogische Hochschule
Steiermark) and the Private University College of Teacher Education (Private Pädagogische Hochschule
Augustinum), namely Martin Teufel and Angela Schlager, for providing us with data sets for the evaluations of the
lecture successes.
References
Admiraal, W, Huisman, B and Pilli, O. 2015. Assessment in Massive Open Online Courses. Electronic Journal of e-
Learning, 13(4): 207-216.
Anderson, A, Huttenlocher, D, Kleinberg, J, and Leskovec, J. 2014. Engaging with Massive Online Courses.
International World Wide Web Conference 2014, April 7-11, Seoul, Korea.
Anggraini, A, Tanuwijaya, NT, Oktavia, T, Meyliana, M, Prabowo, H and Supangkat, SH. 2018. Analyzing MOOC
Features for Enhancing Students Learning Satisfaction. Journal of Telecommunication, Electronic and Computer
Engineering, Vol 10, No 1-4, https://jtec.utem.edu.my/jtec/article/view/3578
Chauhan, J and Goel, A. 2016. An analysis of quiz in MOOC. Ninth International Conference on Contemporary
Computing (IC3).
Coffrin, C, Corrin, L, de Barba, P and Kennedy, G. 2014. Visualizing patterns of student engagement and
performance in MOOCs. In Proceedings of the Fourth International Conference on Learning Analytics And
Knowledge (LAK '14). Association for Computing Machinery, New York, NY, USA, 83–92. DOI:
https://doi.org/10.1145/2567574.2567586
Cohen, A, Shimony, U, Nachmias, R and Soffer, T 2019. Active learners’ characterization in MOOC forums and
their generated knowledge. British Journal of Educational Technology, Vol. 50(1), pp. 177–198.
Davis, D, Chen, G, Hauff, C and Houben, G. 2016. Gauging MOOC Learners' Adherence to the Designed Learning
Path. In: International Educational Data Mining Society, Paper presented at the International Conference on
Educational Data Mining (EDM) (9th, Raleigh, NC, Jun 29-Jul 2, 2016), URL:
https://files.eric.ed.gov/fulltext/ED592664.pdf
Ebner, M, Adams, S, Bollin, A, Kopp, M and Teufel, M. 2020. Digital gestütztes Lehren mittels innovativem
MOOC-Konzept. journal für lehrerInnenbildung, 20 (1), 68-77. https://doi.org/10.35468/jlb-01-2020_05
Ebner, M, Lorenz, A, Lackner, E, Kopp, M, Kumar, S, Schön, S and Wittke, A. 2016. How OER enhance MOOCs –
A Perspective from German-speaking Europe. In: Open Education: from OERs to MOOCs. Jemni, M., Kinshuk,
Khribi, M. K. (Eds.). Springer. Lecture Notes in Educational Technology. pp. 205-220.
Ebner M, Schön S and Braun C. 2020. More Than a MOOC. Seven Learning and Teaching Scenarios to Use
MOOCs in Higher Education and Beyond. In: S. Yu, M. Ally, A. Tsinakos (eds.), Emerging Technologies and
Pedagogies in the Curriculum. Bridging Human and Machine: Future Education with Intelligence, Singapore:
Springer, pp. 75-87.
Ebner, M and Schön, S. 2019. Inverse Blended Learning – a didactical concept for MOOCs and its positive effects
on dropout-rates. In: M. Ally, M. Amin Embi & H. Norman (eds.), The Impact of MOOCs on Distance Education in
Malaysia and Beyond. Routledge.
Ebner, M and Schön, S. 2020. Future Teacher Training of Several Universities with MOOCs as OER. In: R.E. Ferdig, E. Baumgartner, R. Hartshorne, E. Kaplan-Rakowski and C. Mouza (eds.). Teaching, Technology, and Teacher Education during the COVID-19 Pandemic: Stories from the Field. Association for the Advancement of Computing in Education (AACE), pp. 493-497.
Greller, W and Drachsler, H. 2012. Translating Learning into Numbers: A Generic Framework for Learning
Analytics. Educational Technology & Society, 15, pp. 42–57.
Hill, P. 2013. Emerging student patterns in MOOCs: A (revised) graphical view, e-Literate.
https://eliterate.us/emerging_student_patterns_in_moocs_graphical_view/
Judd, TS and Kennedy, GE. 2004. More sense from audit trails: Exploratory sequential data analysis. In Beyond the
comfort zone: Proceedings of the 21st ASCILITE Conference, pages 476–484, 2004.
Khalil, M and Ebner, M. 2015. A STEM MOOC for School Children - What Does Learning Analytics Tell us? In:
Proceedings of 2015 International Conference on Interactive Collaborative Learning (ICL), Florence, Italy, pp.
1217-1221.
Lipp, S, Dreisiebner, G, Leitner, P, Ebner, M, Kopp, M and Stock, M 2021. Learning Analytics – Didaktischer
Benefit zur Verbesserung von Lehr-Lernprozessen? Implikationen aus dem Einsatz von Learning Analytics im
Hochschulkontext. In: bwp@ Berufs- und Wirtschaftspädagogik – online, Issue 40.
https://www.bwpat.de/ausgabe40/lipp_etal_bwpat40.pdf
Littlejohn, A, Hood, N, Milligan, C and Mustain, P. 2016. Learning in MOOCs: Motivations and self-regulated
learning in MOOCs. The Internet and Higher Education, Vol. 29, pp. 40–48.
Milligan, C, Littlejohn, A and Margaryan, A. 2013. Patterns of engagement in connectivist MOOCs. In: Journal of
Online Learning and Teaching, Vol. 9, Issue 2. (2013)
Rieber, LP. 2017. Participation patterns in a massive open online course (MOOC) about statistics. British Journal
of Educational Technology, 48, 6, 1295-1304, doi:10.1111/bjet.12504
Rizvi, S, Rienties, B, Rogaten, J, Kizilcec, RF. 2020. Investigating variation in learning processes in a FutureLearn
MOOC. J Comput High Educ 32, 162–181. https://doi.org/10.1007/s12528-019-09231-0
Schaffert, S and Geser, G. 2008. Open Educational Resources and Practices. In: eLearning Papers, 7, February 2008.
Schön, S, Leitner, P, Ebner, M, Edelsbrunner, S and Hohla, K. 2021. Quiz Feedback in Massive Open Online Courses from the Perspective of Learning Analytics: Role of First Quiz Attempts. In: Proceedings of the ICL2021 – 24th International Conference on Interactive Collaborative Learning, 22-24 September 2021, TU Dresden and HTW Dresden, Germany, pp. 1946-1957.