A Playful Game Changer: Fostering Student Retention
in Online Education with Social Gamification
Markus Krause
Leibniz University
Hannover, Germany
Marc Mogalle
Leibniz University
Hannover, Germany
Henning Pohl
Leibniz University
Hannover, Germany
Joseph Jay Williams
Harvard University
Cambridge, MA, USA
{markus, marc, henning}@hci.uni-hannover.de, joseph_jay_williams@harvard.edu
ABSTRACT
Many MOOCs report high dropout rates for their students.
Among the factors reportedly contributing to this picture are
lack of motivation, feelings of isolation, and lack of
interactivity in MOOCs. This paper investigates the potential of
gamification with social game elements for increasing retention
and learning success. Students in our experiment showed a
significant increase of 25% in retention period (videos watched)
and 23% higher average scores when the course interface was
gamified. Social game elements amplify this effect significantly:
students in this condition showed an increase of 50% in retention
period and 40% higher average test scores.
Author Keywords
Massive Open Online Courses; MOOC; Gamification; So-
cial Engagement
ACM Classification Keywords
J.1 [Administrative Data Processing]: Education
K.3.1 [Computer Uses in Education]: Distance Learning
INTRODUCTION
Low retention rates are a widely discussed issue in learning at
scale, and students of online courses report various
inconveniences that contribute to their dropout. Gamification is
a promising method to strengthen student engagement and ease some
of the disadvantages connected to online education [27]. High
dropout rates are often attributed to feelings of isolation and
lack of interactivity [15], reasons directly related to missing
social engagement. Thus, we use social game elements to
strengthen social engagement in students of online courses.
We conducted a controlled experiment with 213 students majoring
in psychology or computer science. To separate the effects of
gamification from effects induced by social elements, we compare
three conditions: (1) a baseline condition without explicit
gamification, (2) a version with game elements, but no social
elements, and (3) a gamified version with social elements. We
hypothesize that gamification has a stronger effect when it
exhibits social elements.
We found that students in our gamification condition had
23% higher average scores and an increase of 25% in reten-
tion. With social elements, the increase was almost 40% for
final scores and 50% for retention period.
RELATED WORK
Retention remains a major challenge in MOOCs. Jordan reported
that although completion rates occasionally exceed 40%, the
average rate is below 13% and sometimes even below 1% [13]. When
analyzing retention, it is important to note that student
motivation is more diverse in online courses than in traditional
courses. Motivation also varies significantly between courses
[17]. Wilkowski et al. [32] asked students about their motivation
to take part in the MOOC Mapping with Google. Only 10% of
students reported that they wanted to earn a certificate.
Kizilcec et al. [17] investigate this diversity in student
motivation more closely. Their findings also suggest that getting
a certificate is not the primary intention of students in MOOCs.
Kizilcec et al. [17] further illustrate that the retention rates
of students actually taking a course for a certificate are still
low. According to Wilkowski et al. [32], only 25% of those
students aiming for a certificate actually achieved their goal.
These results show that even students with the goal to complete
the course struggle to achieve it.
Figure 1. We investigate the impact of social gaming elements on
student retention in online learning. Shown here is the interface
of our system on a tablet; in particular, the lesson panel of the
social game condition.
There has been work on using games or game-like elements for
education [1,27,30] and training [31], as well as explorative
studies on the effects of game elements in online courses [10].
Domínguez et al. [8] investigated the effects of gamification on
adult learners. Their study indicates that gamification may have
an effect on students. In contrast to these explorative studies,
we investigate a more specific question:
Khalil et al. [15] investigate why students drop out of online
courses and propose strategies to increase retention. They report
that feelings of isolation and a lack of interactivity are both
factors in the high dropout rates of MOOCs.
This indicates that social engagement is an important aspect
of students’ success. This is also consistent with other studies
[7,17,25], which hypothesize that social engagement is a rel-
evant factor for student retention, success, and satisfaction.
Research on social factors of learning is a well-established
field [3,28,34] and there are attempts to strengthen social in-
teractions in MOOCs [29] and various endeavors to under-
stand social elements in online and distance education
[4,18,19]. Simões et al. [25] use social game elements to support
younger students (K-6) in an offline environment. Thus, there is
evidence that social elements may amplify the effect of
gamification. Therefore, we extend our initial research question
and ask:
RQ2: Is gamification with social elements more effective?
STUDY DESIGN
Our study follows a between-subjects design with three different
conditions. The first condition has no game elements (plain
condition), the second condition has game elements (game
condition), and the third condition has social game elements
(social condition). Apart from the main factor (condition), we
also collect information on two more fixed factors: gender and
major. As we required verified information on these factors, we
integrated our experiment with offline courses at a university.
Student retention is valuable in itself, and even more so when it
is accompanied by learning success. Therefore, we also analyze
how much students can recall from a lecture. We hypothesize that:
H1: Gamification increases retention.
H1a: Gamification increases learning success.
H2: Social game elements amplify retention gain.
H2a: Social game elements amplify learning success.
The course used for the experiment introduces Python as a
programming language for statistical analysis and has an expected
duration of four weeks with an average workload of 4 hours per
week. We organized the course into four lectures, each containing
15 video lessons on average. No video lesson was longer than 4
minutes, and videos showed either tablet drawings or code
examples. Each video also features a short quiz; quizzes are
either multiple-choice questionnaires or free-text input.
The online course was not a requirement in any course at the
university, but lecturers of graduate and undergraduate courses
in computer science (human-computer interaction) and psychology
promoted it in their lectures as a valuable addition.
PARTICIPANTS
We collected data from 71 students in the plain condition, 67
students in the game condition, and 68 students in the social
condition. Students were studying either computer science (113)
or psychology (93) at the undergraduate (87), master's (116), or
Ph.D. (3) level. Only 12 participants reported prior knowledge of
Python. Many students, however, had basic knowledge of statistics
(85) and some had experience with R (56), Java (95), or C/C++
(56).
PROCEDURE
Students had to sign up for the course via an online system.
As we had verified information about gender and major of
students, we were able to balance condition assignment of
students based on gender and major. Table 1 shows the final
distribution of students in each condition:
Plain Condition
The plain condition does not have explicit game elements.
After logging in with their university credentials students can
choose one of the four lectures from a dashboard. The dash-
board also provides an overview of the progress in each
course and lesson. Figure 2 shows a screenshot of the dash-
board. Selecting a lecture in the dashboard opens the respec-
tive lesson panel.
The lesson panel has four central elements: a video in the
center, an element to navigate within the video, and a quiz panel
on the left. Quizzes take one of two forms: a multiple-choice
quiz or a free-text question. Students only need to type one or
two words to answer free-text questions.
Condition   Female   Male   Psych.   CS
Plain       30       41     32       39
Game        28       39     30       37
Social      28       40     31       37
Table 1. Demographics of students in each condition.
Figure 2. The dashboard of the plain condition. Students
get an overview of their progress in courses and lessons.
The system is able to deal with common spelling errors. The
lesson panel also shows a discussion board below the video.
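As an illustration, such spelling-tolerant matching can be
implemented with the standard library's difflib; the 0.8
similarity threshold below is illustrative, not the exact value
our system uses:

    import difflib

    def answer_matches(given: str, expected: str,
                       threshold: float = 0.8) -> bool:
        # Normalize case and surrounding whitespace before comparing.
        a = given.strip().lower()
        b = expected.strip().lower()
        # Ratio of matching characters; tolerates small spelling errors.
        return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

    # A doubled letter still counts as a correct answer:
    assert answer_matches("fmeann", "fmean")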
Game Condition
We designed the game condition with all features of the social
condition, omitting only those that incorporate social elements.
We use plain gamification instead of more sophisticated game
concepts, as this allows an easier interpretation of effects. We
deliberately chose to leave out complex game mechanics: complex
mechanics have a learning curve of their own and would thereby
introduce noise.
We restructured the dashboard. A panel on the left side
shows players an overview of their achievements. This panel
also contains a customizable avatar in the social condition.
We show the details in Figure 5. In the game condition, the
panel does not show this avatar but is otherwise identical.
The full lesson panel is depicted in Figure 5. As the game
condition is already very close to the social condition, the
figure shows only the final version of the panel. The lesson
panels of the game and the social condition differ only in the
design of their quiz elements. We show these differences in more
detail in Figure 7.
For the game condition, we use basic gamification mechan-
ics [33]. Achievements or badges are widespread elements
in gamification. They are a representation of an accomplish-
ment. In the game condition, students can acquire achieve-
ments for answering a number of quizzes correctly, taking a
number of lessons, or being ranked among the top ten stu-
dents of a lesson or the entire course. We award five achieve-
ments with three levels for each achievement.
Players also earn score points for each correct answer to a
quiz question. We use these score points to place players in
a leaderboard for each course and each lesson in a course.
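As an aside, such a score-based leaderboard amounts to little
more than sorting accumulated points per lesson; the data layout
in this sketch is illustrative:

    from collections import defaultdict

    points = defaultdict(int)  # (lesson_id, student_id) -> score points

    def award(lesson_id: str, student_id: str, score: int) -> None:
        points[(lesson_id, student_id)] += score

    def leaderboard(lesson_id: str, top: int = 10):
        # Rank the students of one lesson by their score points.
        entries = [(s, p) for (l, s), p in points.items() if l == lesson_id]
        return sorted(entries, key=lambda e: e[1], reverse=True)[:top]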
We are aware that leaderboards already constitute a social
element and that this decision might reduce the difference in
effects between the game and the social condition.
We deliberately chose to integrate leaderboards in the game
condition as they are an essential element in almost every
gamification approach and do not connect players directly
with one another. Aesthetics also play a relevant role in
gamification: with visually appealing graphics and
representations, one can create a sense of pleasurable
satisfaction [9,14,23,24]. For the experiment, we chose a
neutral, comic-like design. Players can design their own avatar.
Later during the course, students can trade in their achievements
and score points to acquire different visual add-ons for their
avatar, such as hats and other items. Figure 4 shows a screenshot
of the dashboard of the social condition with an avatar and a
list of achievements.
The dashboard also provides an overview of the progress in
each course and lesson. Beside the integration of a customi-
zable avatar, we overhauled the general appearance of the
interface to fit the theme laid out with the avatar. Figure 6
shows the customization interface for the avatar and some of
the avatars created by students during the course.
Finally, we added a countdown to each quiz in a lesson. This
restricts students to a certain amount of time when solving a
quiz, deliberately inducing tension through the time limit. To
allow students to watch the video without time pressure, quizzes
have to be explicitly activated by clicking. Figure 7 shows two
screenshots of the quiz.
Figure 5. Main elements of the lesson panel of the social and
game condition. The only differences between the social and game
condition are in the quiz (Figure 7 shows these differences in
more detail).
Figure 4. Dashboard of the social condition. Players can
customize their avatar and get an overview of their pro-
gress in courses and lessons.
Figure 3. Main components of the lesson panel of the baseline
condition (plain). We omitted the discussion board under the
video, as we did not change it.
Social Game Condition
The social condition is identical to the game condition except
for an additional set of social game elements. Prior to starting
a new lecture, participants were able to choose an opponent.
Students could choose a person they know from the list of
participants, or play against a randomly chosen opponent.
Although typical for games that highlight social aspects, we
do not allow participants to invite friends via social networks
or e-mail at this point. When a player starts a lesson with a
random opponent, a screen illustrates the search for another
player (see Figure 8).
The system does not require two players to be online at the
same time. A student can play against pre-recorded actions
of another student. To ensure that the system always has
enough recordings we pre-recorded some sessions.
The system pairs a student with a recorded session if it can-
not find another player within 30 seconds of the student
choosing a random opponent. Students can also choose a spe-
cific player from a list of students currently online or with
recorded play sessions. If the second student is online, the
system waits until both students start the lesson. If the second
student is not online but has a recorded session for this lesson
the first student plays against this recording.
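In simplified form, this pairing logic can be sketched as follows
(the helper functions are placeholders, not our actual
implementation):

    import time

    MATCH_TIMEOUT = 30  # seconds to wait for a live opponent

    def pair_student(student, lesson, find_live_opponent, pick_recording):
        # Poll for a live opponent until the timeout expires.
        deadline = time.monotonic() + MATCH_TIMEOUT
        while time.monotonic() < deadline:
            opponent = find_live_opponent(student, lesson)
            if opponent is not None:
                return ("live", opponent)  # both start the lesson together
            time.sleep(1)
        # Fall back to the pre-recorded session of another student.
        return ("recorded", pick_recording(lesson))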
To highlight the social interaction, we add a status bar to the
quiz panel. This bar illustrates the progress of one's opponent
on the quiz, as seen in Figure 7. Students get a notification
when their opponent solves a quiz, provided they have an active
quiz themselves. Finally, we added a summary screen that compares
the results of both students (see Figure 9).
MEASUREMENTS
To estimate student retention, we measure the number of video
lessons a student watches. We refer to this measure as the
retention period. For this experiment, we consider a video as
watched if it played completely and the continue button was
clicked afterwards. As basic knowledge of statistics and
programming languages was present in our user group, we assumed
that some participants would skip lectures due to prior
knowledge. For the experiment, we define skipping a lecture as
starting the video but moving on to the next one before the video
has finished. We asked participants to indicate if they skipped a
lecture because of prior knowledge. If a student skips a video
because of prior knowledge, we still consider the video lesson as
completed.
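In pseudocode, the measure can be summarized as follows; the
event field names are illustrative, not our actual log format:

    def video_completed(event: dict) -> bool:
        # Watched in full and confirmed via the continue button, ...
        watched = event["played_to_end"] and event["clicked_continue"]
        # ... or skipped with prior knowledge explicitly indicated.
        skipped_known = event["skipped"] and event["prior_knowledge"]
        return watched or skipped_known

    def retention_period(events: list) -> int:
        # Number of video lessons a student completed.
        return sum(1 for e in events if video_completed(e))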
To measure learning success, we conducted an exam with some of
our students. We invited students that took part in our
experiment to an offline test. We scheduled this test three
months after the start of the course. 101 students took part in
Figure 7. We changed the appearance and the behavior of the quiz
from the plain condition. The quiz in the game condition (center)
features a countdown (green bar) and students activate the quiz
manually (left). In the social condition, students see the status
of their opponent (right).
Figure 8. To highlight the social aspect, we visualize the search
process for a competitor (top). Students can skip this process
(button in the center); the system then pairs students with a
recorded session of another student. Otherwise, a notification
window shows that the system found another student (lower part of
the figure).
Figure 9. The summary screen of the social condition illustrates
the player's performance compared to her opponent. On the left is
a leaderboard showing ranking and badges.
Figure 6. Students can customize their avatar. Later during the
course, students can collect visual gadgets, e.g., hats or other
props. Some possible avatars are shown on the right.
this test. On average, students took the test one week after
finishing the course (self-assessed). We asked students to write
a Python script that calculates the 95% confidence interval of
the distribution of means using only standard functions.
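One possible solution, using only the standard library and a
percentile bootstrap, looks as follows (an illustration of the
task, not our official sample solution):

    import random
    import statistics

    def ci_of_means(data, resamples=10_000, seed=0):
        rng = random.Random(seed)
        # Bootstrap the distribution of means by resampling with replacement.
        means = sorted(
            statistics.fmean(rng.choices(data, k=len(data)))
            for _ in range(resamples)
        )
        # 95% CI: the 2.5th and 97.5th percentiles of the resampled means.
        return means[int(0.025 * resamples)], means[int(0.975 * resamples)]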
We reviewed and graded all submissions in a blind review process.
At least three different reviewers graded each submission on a
scale from 5 (excellent) to 1 (a lot of room for improvement). We
averaged the grades of all reviewers to measure learning success
for each participant.
Besides test scores, we also use a second measure to estimate
learning success. Students solve quizzes throughout the
course; the average ratio of correctly solved quizzes is our
last measurement: quiz accuracy.
RESULTS
In accordance with Cramer and Bock [5], we performed a
MANOVA on the means to help protect against inflating our
Type-I error rate in the follow-up ANOVAs and post-hoc
comparisons. Prior to conducting the MANOVA, we calcu-
lated a series of Pearson correlations between all dependent
variables in order to test the MANOVA assumption that the
dependent variables correlate. We found correlations ranging
between 0.2 and 0.4; all correlations are significant at an
α-level of 0.01 or lower. These values are within acceptable
ranges according to Meyers et al. [26].
Additionally, the Box's M value of 16.25 was associated with a p
value of 0.012, which we interpreted as non-significant in
accordance with Huberty and Petoskey's guideline (i.e., p < .005)
[12]. Thus, the covariance matrices between the groups were
assumed equal for the purpose of the MANOVA. We conducted a
three-way multivariate analysis of variance (MANOVA) to test the
hypothesis that there would be one or more mean differences
between our conditions (plain, game, and social) on our
measurements (retention period, quiz accuracy, and test score). A
significant MANOVA effect was obtained, Pillai's Trace = 0.22,
F(6, 176) = 3.38, p = 0.003, with an estimated multivariate
effect size of 0.112.
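For illustration, such a MANOVA can be computed with statsmodels;
the column names below are placeholders for our three
measurements and the condition factor:

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("study_data.csv")  # hypothetical per-student export
    mv = MANOVA.from_formula(
        "retention + quiz_accuracy + test_score ~ condition", data=df)
    print(mv.mv_test())  # includes Pillai's trace and its F approximation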
Retention Period
We calculate the average retention period for each condition
and the 95% confidence interval of the distribution of means
as shown in Table 2. To estimate confidence intervals we
draw 10,000 bootstrap samples from each condition using
sampling with replacement.
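A sketch of this bootstrap procedure (variable names are
illustrative):

    import numpy as np

    def bootstrap_ci(values, resamples=10_000, seed=0):
        rng = np.random.default_rng(seed)
        # Draw resamples with replacement and compute each resample's mean.
        idx = rng.integers(0, len(values), size=(resamples, len(values)))
        means = np.asarray(values)[idx].mean(axis=1)
        # 95% CI of the distribution of means.
        return np.percentile(means, [2.5, 97.5])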
All three conditions differ in their means but show a slight
overlap of their confidence intervals. As hypothesized, the game
and social conditions have a higher average retention period than
the version without game elements. The game condition shows a 25%
increase in retention period compared to the plain condition. The
social condition more than doubles this effect, showing an
increase in average retention period of more than 55% compared to
plain.
Before testing our results for significance, we ensured that our
data is suitable for parametric tests. We used an omnibus test
for normality [6] for each condition and did not find significant
deviations from a normal distribution. As we have different
numbers of participants in our conditions, we also verified that
our conditions have equal variance for the dependent variable
prior to executing an analysis of variance (ANOVA).
As the distributions do not differ significantly from normal
distributions, we use Bartlett's test for homoscedasticity (equal
variance) [2]. We found, as assumed, that the variance does not
differ significantly between our conditions, χ²(2) = 2.649, p =
0.265. As our data holds no evidence that it violates the
assumptions of the ANOVA, we analyze main and interaction effects
with a three-way independent analysis of variance. Table 3 shows
the results of this test.
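Both checks are available in SciPy; in the following sketch, the
three arrays are merely stand-ins generated from the means and
standard deviations reported in this section, not our actual
data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    plain = rng.normal(11.9, 6.6, 71)   # stand-in for observed data
    game = rng.normal(14.9, 8.2, 67)
    social = rng.normal(18.5, 8.7, 68)

    for name, values in (("plain", plain), ("game", game), ("social", social)):
        _, p = stats.normaltest(values)  # D'Agostino & Pearson omnibus test
        print(name, "normality p =", round(p, 3))

    chi2, p = stats.bartlett(plain, game, social)  # equal-variance test
    print("Bartlett chi2 =", round(chi2, 3), "p =", round(p, 3))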
We found that condition is the only significant factor in our
experiment. We use post-hoc one-tailed t-tests to test for
significance between individual levels of this factor and use the
Holm–Bonferroni method [11] to control for family-wise error
rates. Figure 10 shows a violin plot of the retention
periods found in our experiment. From the results we can
support hypothesis H1 as the game condition has a signifi-
cantly higher average retention period (M = 14.9, SD = 8.2)
than the plain condition (M = 11.9, SD = 6.6), with t(137) =
2.22, p = 0.02, d = 0.38. We can also support H2 as the social
condition has a significantly higher average retention period
(M = 18.5, SD = 8.7) than the game condition (M = 14.9, SD
= 8.2), with t(134) = 2.48, p = 0.01, d = 0.42. Accordingly, the
social condition also has a significantly higher average
retention period (M = 18.5, SD = 8.7) than the plain condition (M
= 11.9, SD = 6.6), with t(138) = 4.82, p < 0.001, d = 0.79.
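The Holm–Bonferroni step-down procedure itself is short; applied
to the three post-hoc p-values above (taking p < 0.001 as 0.001),
all three comparisons remain significant:

    def holm_bonferroni(pvals, alpha=0.05):
        # Test p-values in ascending order against alpha / (m - rank).
        order = sorted(range(len(pvals)), key=lambda i: pvals[i])
        rejected = [False] * len(pvals)
        for rank, i in enumerate(order):
            if pvals[i] > alpha / (len(pvals) - rank):
                break  # stop at the first non-rejected hypothesis
            rejected[i] = True
        return rejected

    # plain vs. game, game vs. social, plain vs. social:
    print(holm_bonferroni([0.02, 0.01, 0.001]))  # -> [True, True, True]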
Quiz Accuracy
In order to assess the influence of our conditions on student
success, we calculate the average quiz accuracy for each
condition and the 95% confidence interval of the distribution of
means, as shown in Table 4.
Source        df    SS        MS       F       p
(C)ondition   2     1559.1    779.5    11.53   <0.001
(G)ender      1     194.3     194.3    3.07    0.081
(M)ajor       1     52.6      52.6     0.83    0.362
CG            2     187.5     187.5    1.48    0.229
CM            2     14.1      14.1     0.11    0.894
CGM           2     133.9     133.9    1.05    0.348
Residual      194   12270.0   63.24
Table 3. Interaction and main effects on retention period.
Abbreviations: df (degrees of freedom), SS (sum of squares), MS
(mean squares).
Cond.    n    M      95% CI
plain    71   11.9   [10.2, 13.6]
game     67   14.9   [12.9, 16.9]
social   68   18.5   [16.5, 20.7]
Table 2. Number of participants (n), mean retention period (M),
and the 95% confidence interval of the distribution of retention
period means. The CI and the boxplot of means are calculated from
10,000 bootstrap samples.
To estimate confidence intervals we draw 10,000 bootstrap
samples from each condition using sampling with replace-
ment. All three conditions differ in their means and overlap in
their confidence intervals. The game and social conditions have a
higher average quiz accuracy than the plain condition. The game
condition shows a 12.5% increase in quiz accuracy compared to the
plain condition. The social condition again more than doubles
this effect, showing an increase in quiz accuracy of more than
31%. As for the previous measure, we conducted an omnibus test
for normality and Bartlett's test for homoscedasticity. We did
not find significant violations of the ANOVA assumptions. We
therefore analyze main and interaction effects again with a
three-way independent ANOVA. Table 5 shows the results of this
test.
Condition is the only significant factor for quiz accuracy.
Figure 11 shows a violin plot of the quiz accuracy found in
our experiment. From the results we cannot entirely support
hypothesis H1a, as the difference between the plain (M = 0.46, SD
= 0.25) and the game condition (M = 0.52, SD = 0.25) is not
significant, t(137) = 1.34, p = 0.09, d = 0.23. We can, however,
support our hypothesis H2a, as the social condition has a
significantly higher average quiz accuracy (M = 0.61, SD = 0.21)
than the game condition, with t(134) = 2.19, p = 0.031, d = 0.37,
and the plain condition, with t(138) = 3.63, p < 0.001, d = 0.59.
Final Test Performance
Our final measurement for student success is test performance in
our offline exam. We again calculate the mean and the 95%
confidence interval, as shown in Table 6. Again, all three
conditions differ in their means and overlap in their confidence
intervals. Students in the game and social conditions have higher
test scores than students in the plain condition. Students in the
game condition show a 22.5% higher test performance than those in
the plain condition. Students in the social condition show 40%
higher scores on average.
We conducted an omnibus test for normality and Bartlett's test
for homoscedasticity and analyzed main and interaction effects
with a three-way ANOVA. Table 7 shows the results of this test.
Condition is, as expected, the only significant factor for test
performance. Figure 12 shows a violin plot of the test scores
found in our experiment.
From the results we can support hypothesis H1a, as the difference
between the plain (M = 2.5, SD = 1.12) and the game condition (M
= 3.06, SD = 1.10) is significant, t(64) = 2.01, p = 0.049, d =
0.49. We cannot directly support our hypothesis H2a, as the
social condition does not show significantly higher average test
scores (M = 3.50, SD = 1.12) than the game condition, t(68) =
2.19, p = 0.055, d = 0.39; it does, however, show significantly
higher scores than the plain condition, t(67) = 3.63, p < 0.01, d
= 0.81.
Cond.    n    M      95% CI
plain    71   0.46   [0.40, 0.52]
game     67   0.52   [0.46, 0.58]
social   68   0.62   [0.55, 0.66]
Table 4. Number of participants (n), average quiz accuracy (M),
and the 95% confidence interval of the distribution of means. The
CI and the boxplot are calculated from 10,000 bootstrap samples.
Figure 10. The social condition shows a significantly higher
retention period than the game and plain conditions. The game
condition also shows significantly higher retention than the
plain condition. The grey lines in the violin plot indicate min
and max values; the grey lines on top indicate significant
differences: ∗∗∗: p < 0.001, ∗: p < 0.05.
Figure 11. Student success measured with quiz accuracy. The grey
lines in the plot indicate min and max values; the grey lines on
top indicate significant differences: ∗∗: p < 0.01, ∗: p < 0.05,
': p < 0.1.
Source        df    SS       MS      F      p
(C)ondition   2     76.75    76.75   6.54   0.002
(G)ender      1     5.36     5.36    0.91   0.340
(M)ajor       1     15.92    15.92   2.71   0.101
CG            2     4.87     2.43    0.41   0.661
CM            2     27.74    13.87   2.36   0.097
CGM           2     28.06    14.03   2.39   0.094
Residual      194   113.81   5.86
Table 5. Interaction and main effects on quiz accuracy.
Abbreviations: df (degrees of freedom), SS (sum of squares), MS
(mean squares).
CONCLUSION
This work presented a systematic analysis of the impact of
social gamification on student retention and learning success
in online courses. We posed two research questions: RQ1: does
gamification support students of our online course? And RQ2: do
social elements amplify possible positive effects? We
hypothesized that social gamification increases retention as well
as strengthens learning success.
In order to assess the impact of social gamification, we analyzed
three dependent variables (DVs) (retention period, quiz accuracy,
and test scores) across three independent variables (IVs)
(condition, gender, and major). All DVs showed a significant
increase for the factor condition between the plain and social
level. For retention period and quiz accuracy, we found two
differences that were not significant at a level of 0.05 but at a
level of 0.1. However, given the p-values from both ANOVA and
MANOVA, a Type-I error seems unlikely in both cases. In response
to our hypothesis H1, we analyzed retention period and observed a
significant increase of 25% between our plain and our game
condition and a significant increase of 55% from plain to social.
This lends strong support to our initial hypothesis that
gamification can increase retention and that social gamification
amplifies this effect.
For quiz accuracy, we found an increase of 12.5% between our
plain and game conditions and an increase of 31% between plain
and social. In the final test, students in the game condition had
a 22.5% higher test score compared to students in the plain
condition. Students in the social condition showed an even
stronger increase of almost 40% compared to students in the plain
condition. This again lends support for our hypothesis H2 that we
can amplify beneficial effects with social gamification.
FUTURE WORK
In our experiment, social elements showed a significant impact.
To control our population variables and to reduce noise, we
restricted our experiment in terms of the user pool and the game
mechanics used. Based on our findings and previous experiments,
we expect positive effects to be much stronger when more
sophisticated design concepts are applied. Badges, achievements,
and leaderboards are visually pleasing and provide a certain
engagement. However, we do not expect these basic mechanics to
sustain student motivation across a complete online curriculum.
In the future, we plan to investigate different social game
mechanics and their impact on student success. In this paper we
explored the effects of a competitive setting where students
challenge each other. In the future, we also want to investigate
the differences between playful elements that foster
collaboration instead of competition, and both methods combined.
We also aim at providing sophisticated feedback to support
students. However, complex game mechanics may require more
advanced interaction techniques. For instance, the game could
allow students who earned a certain status to add questions to
the system, thereby expanding the learning materials and allowing
advanced students to compete on another level. Tools [20] and
methods [16,21] that support such games already exist [22].
ACKNOWLEDGEMENTS
Many thanks to Rene Kizilcec and Gabriel Barata for sharing
their research results and papers early on. Their work helped
us a lot in shaping our paper. Special thanks go to Claus
Brenner and Daniel Eggert from the Institute of Cartography
and Geoinformatics for an insight into their MOOC platform.
REFERENCES
1. Barata, G., Gama, S., Jorge, J., and Gonçalves, D. Relating
Gaming Habits with Student Performance in a Gamified Learning
Experience. Proc. CHI PLAY '14.
Figure 12. We measured learning success with a final offline
test. In this test, students wrote a short Python script to
estimate confidence intervals of means. Three independent
reviewers graded each submission on a scale from 5 (excellent) to
1 (underperformed). The grey lines indicate min and max values;
the grey lines on top indicate significant differences: ∗∗: p <
0.01, ∗: p < 0.05, ': p < 0.1.
Cond.    n    M      95% CI
plain    32   2.50   [2.12, 2.87]
game     33   3.06   [2.69, 3.45]
social   36   3.50   [3.13, 3.86]
Table 6. Number of participants (n), average test score (M), and
95% confidence interval of the distribution of means (CI is
estimated from 10,000 bootstrap samples).
Source        df    SS       MS      F      p
(C)ondition   2     76.75    76.75   6.54   0.002
(G)ender      1     5.36     5.36    0.91   0.340
(M)ajor       1     15.92    15.92   2.71   0.101
CG            2     4.87     2.43    0.41   0.661
CM            2     27.74    13.87   2.36   0.097
CGM           2     28.06    14.03   2.39   0.094
Residual      194   113.81   5.86
Table 7. Interaction and main effects of quiz accuracy.
Abbreviations: df (degrees of freedom), SS (sum of squares), MS
(mean squares).
2. Bartlett, M. Properties of sufficiency and statistical tests.
Proceedings of the Royal Statistical Society Series A,
160 (1937), 268–282.
3. Blumenfeld, P.C. Classroom learning and motivation:
Clarifying and expanding goal theory. Journal of Educa-
tional Psychology 84, 3 (1992), 272–281.
4. Brinton, C., Chiang, M., Jain, S., Lam, H., Liu, Z., and
Wong, F. Learning about social learning in MOOCs:
From statistical analysis to generative model. IEEE
Trans. on Learning Technologies PP, 99 (2014), 1–14.
5. Cramer, E.M. and Bock, R.D. Multivariate Analysis.
Review of Educational Research 36, (1966), 604–617.
6. D’Agostino, R. and Pearson, E.S. Testing for departures
from normality. Biometrika 60, (1973), 613–622.
7. Dietz-Uhler, B., Fisher, A., and Han, A. Designing
online courses to promote student retention. Journal of
Educational Technology Systems 36, 1 (2007), 105–
112.
8. Domínguez, A., Saenz-de-Navarrete, J., De-Marcos, L.,
Fernández-Sanz, L., Pagés, C., and Martínez-Herráiz, J.-
J. Gamifying learning experiences: Practical implica-
tions and outcomes. Computers & Education 63, (2013),
380–392.
9. de Freitas, A.A. and de Freitas, M.M. Classroom Live: a
software-assisted gamification tool. Computer Science Education
23, 2 (2013), 186–206.
10. Grünewald, F., Meinel, C., Totschnig, M., and Willems,
C. Designing MOOCs for the Support of Multiple
Learning Styles. In Scaling up Learning for Sustained
Impact. 2013, 371–382.
11. Holm, S. A simple sequentially rejective multiple test
procedure. Scandinavian Journal of Statistics 6, 2
(1979), 65–70.
12. Huberty, C.J. and Petoskey, M.D. Multivariate Analysis
of Variance and Covariance. In H. Tinsley and S.
Brown, eds., Handbook of applied multivariate statistics
and mathematical modeling. 2000.
13. Jordan, K. MOOC Completion Rates: The Data, 2013.
http://www.katyjordan.com/MOOCproject.html.
14. Kapp, K.M. The Gamification of Learning and Instruc-
tion: Game-based Methods and Strategies for Training
and Education. Pfeiffer, 2012.
15. Khalil, H. and Ebner, M. MOOCs Completion Rates and Possible
Methods to Improve Retention - A Literature Review. World
Conference on Educational Multimedia, Hypermedia and
Telecommunications 2014, 1236–1244.
16. Kilian, N., Krause, M., Runge, N., and Smeddinck, J.
Predicting Crowd-based Translation Quality with Lan-
guage-independent Feature Vectors. Proc. HComp’12,
114–115.
17. Kizilcec, R. and Schneider, E. Motivation as a Lens for
Understanding Online Learners: Introducing and Applying the
Online Learner Enrollment Intentions (OLEI) Scale. In press,
2014.
18. Kizilcec, R.F., Piech, C., and Schneider, E. Deconstruct-
ing disengagement. Proc. LAK ’13, 170.
19. Koutropoulos, A., Gallagher, M.S., Abajian, S.C., et al.
Emotive Vocabulary in MOOCs: Context & Participant
Retention. European Journal of Open, Distance, and E-
Learning, (2012), 1–22.
20. Krause, M. and Porzel, R. It is about time. Proc. CHI
EA ’13, 163–168.
21. Krause, M. A behavioral biometrics based authentica-
tion method for MOOC’s that is robust against imitation
attempts. Proc. L@S’14 (Work in Progress), 201–202.
22. Krause, M. Homo Ludens in the Loop: Playful Human
Computation Systems. tredition GmbH, Hamburg, Ham-
burg, Germany, 2014.
23. Liu, Y., Alexandrova, T., and Nakajima, T. Gamifying
intelligent environments. Proc. Ubi-MUI '11, 7–12.
24. Marache-Francisco, C. and Brangier, E. Perception of
Gamification: Between Graphical Design and Persuasive Design.
Proc. DUXU 2013, 558–567.
25. Martz, W.B., Reddy, V.K., and Sangermano, K. Look-
ing for indicators of success for distance education. In
Distance learning and university effectiveness: changing
education paradigms for online learning. Hershey, PA:
Information Science Pub, 2004.
26. Meyers, L., Gamst, G., and Guarino, A. Applied multi-
variate research: Design and interpretation. Sage Pub-
lishers, Thousand Oaks, CA, USA, 2006.
27. Oblinger, D. The next generation of educational engage-
ment. Journal of interactive media in education 8,
(2004), 1–18.
28. Ryan, A.M. and Patrick, H. The Classroom Social Envi-
ronment and Changes in Adolescents’ Motivation and
Engagement During Middle School. American Educa-
tional Research Journal 38, 2 (2001), 437–460.
29. Staubitz, T. and Renz, J. Supporting Social Interaction
and Collaboration on an xMOOC Platform. Proc.
EDULEARN14, 6667–6677.
30. Steinkuehler, C.A. Learning in Massively Multiplayer
Online Games. Proc. ICLS ’04, 521–528.
31. Stone, B. Serious gaming. Education and Training, (2001),
142–144.
32. Wilkowski, J., Deutsch, A., and Russell, D. Student skill and
goal achievement in the Mapping with Google MOOC. Proc. L@S '14,
3–9.
33. Zichermann, G. and Cunningham, C. Gamification by Design.
O'Reilly Media, 2011.
34. Zimmerman, B.J. A social cognitive view of self-regu-
lated academic learning. Journal of Educational Psy-
chology 81, 3 (1989), 329–339.