Preprint - originally published in: Zöhrer, H., Wachtler, J. & Ebner, M. (2023). Developing an Addition and Subtraction Trainer with Automated Categorization of Errors for Learners in Their First Two Years of Primary School. In T. Bastiaens (Ed.), Proceedings of EdMedia + Innovate Learning (pp. 832-843). Vienna, Austria: Association for the Advancement of Computing in Education (AACE). Retrieved July 19, 2023 from https://www.learntechlib.org/primary/p/222586/.

Developing an Addition and Subtraction Trainer with Automated
Categorization of Errors for Learners in Their First Two Years of Primary
School
Helmut Zöhrer
Graz University of Technology
Austria
helmut@zoehrer.online
Josef Wachtler
Educational Technology
Graz University of Technology
Austria
josef.wachtler@tugraz.at
Martin Ebner
Educational Technology
Graz University of Technology
Austria
martin.ebner@tugraz.at
Abstract: This paper discusses the development and testing of a gamified mathematical learning
app. Said app is designed for primary school children in their first two years of learning, so that
they can practice the arithmetic operations addition and subtraction in a playful way. Predefined
error categories are used to specifically eliminate weaknesses. As an incentive for the learners,
mascots, stars and a child-oriented look are offered, as well as a mode in which the players can
compete with others worldwide and, in the case of good performance, find themselves on the
leaderboard. A test run and a subsequently conducted survey with 49 school children showed that
the app generally works well and that the subjective perceptions regarding the motivational ability
of the mascots correlate with the overall perception of the app.
Introduction
Student motivation in today's schooling is on the decline, as (Alsawaier 2018) states. Digital games, which
have been appealing to the masses for decades, can offer a solution, because on the one hand they provide the fun
factor and on the other hand they can supply automated feedback in real time. They can also enable location-
independent learning, which has experienced notable expansion since the outbreak of the Corona pandemic, as
(Dash et al. 2022) report.
The learning app (https://schule.learninglab.tugraz.at/juniorplusminus/, retrieved April 30, 2023) presented in this document, which includes game elements to take advantage of the
benefits just mentioned and create an engaging learning environment for young learners, features several game
modes which are either competitive or self-paced. To be in line with the principle of learning analytics (LA), one
mode is designed to use learner data collected in the background to provoke the most frequently committed types of
errors so that learners can specifically work on them. One of the app’s main attractions is the use of mascots in
different costumes as loyal companions in the learning process.
At the beginning of this document, the theory behind LA and games in an educational context is discussed.
This is followed by an introduction of the developed learning application called Junior Plus-Minus-Trainer. In the
study presented afterwards, which was conducted in a secondary school with 49 students, the functionality of the
app on the one hand and the impression it makes on the learners on the other hand are examined. This study’s main
aim is to answer the following research question: Which aspects of an arithmetic training app are beneficial for
motivating students? In particular, the effect of the ever-present and evolving mascots and the different kinds of
game modes are investigated, and the corresponding results are evaluated and discussed in the last part of this work.
Theoretical Background
Learning Analytics (LA)
Evolution and Definition of Learning Analytics
(Joksimovic et al. 2019) state that big data analytics has long been established in several industries, such as
banking, healthcare, entertainment, and telecommunications. In contrast, the education sector has barely paid
attention to the amount of data generated during learning for a long time, let alone actively used it for successful
learning, they claim. As defined for the first International Conference on Learning Analytics & Knowledge in 2011,
LA means “the measurement, collection, analysis and reporting of data about learners and their contexts, for
purposes of understanding and optimising learning and the environments in which it occurs” (https://www.solaresearch.org/about/what-is-learning-analytics/, retrieved April 10, 2023).
After this initial meeting, which resulted in the creation of the Society for Learning Analytics Research
(SoLAR), there was a tremendous upswing in interest with increased research funding, proliferation of publications,
and commercialization of related technologies, as (Joksimovic et al. 2019) further describe. They state that it was
recognized that various weaknesses of traditional learning could be eliminated, and learners could be provided with
customized and timely feedback. In addition to the intended reduction in workload and increase in efficiency, LA
could also address broader issues related to global learning, such as the problems of imbalance and access, referring
to the views of (Lang et al. 2022). According to (Khalil et al. 2022), the importance of LA as a research field and in
practice has developed well in recent years. Consequently, numerous institutions covering a wide range of
disciplines have focused on LA, and various factors influencing the successful establishment have emerged and have
also been further developed. Moreover, increased attention has been paid recently to previously understudied
considerations, such as the growing influence of artificial intelligence and the forms of visualization, they state.
The Learning Analytics Cycle
"The importance of interventions in learning analytics to close the feedback loop has been clear in the
literature (if not always the practice) from the birth of the field," (Clow 2012) explains. This statement suggests that LA requires a self-contained cycle rather than a process flow that is run through once and then completed. Said cycle determines four interlinked steps (marked in parentheses in the following) for a successful implementation of LA, as he elaborates further. Learners (1) are at the starting point of the loop and are crucial for the generation of
data (2). Said received data is not only stored, but also processed in terms of metrics (3). In this step, the point is to
have overview pages, visualizations, comparisons, listings of at-risk students, and the like prepared, which is the
main attraction of some LA projects. The final step, interventions (4), which reconnects with learners, is intended to
have an impact on those exact or other learners. This could be achieved by a dashboard where students may view
their ranking within their peer group or by a teacher directly contacting them in order to discuss specific anomalies
in their problem-solving strategies which may have been automatically detected, (Clow 2012) indicates.
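To make the loop concrete, the following minimal sketch (in Python, purely for illustration; all function names, data structures and numbers are invented for this example and are not taken from any particular LA system) shows how the four steps feed into one another and repeat:

```python
# Minimal illustration of Clow's four-step learning analytics loop.
# All function names, data structures, and numbers are hypothetical.

def collect_data(learners):
    # Steps (1) and (2): learners interact with the system and generate data.
    return {learner: {"attempted": 20, "errors": 6} for learner in learners}

def compute_metrics(data):
    # Step (3): condense raw data into metrics, e.g. an error rate per learner.
    return {learner: d["errors"] / d["attempted"] for learner, d in data.items()}

def intervene(metrics, threshold=0.25):
    # Step (4): feed results back, e.g. flag learners whose error rate is high.
    return [learner for learner, rate in metrics.items() if rate > threshold]

learners = ["learner_a", "learner_b"]
for _ in range(3):  # the cycle repeats instead of running through once
    flagged = intervene(compute_metrics(collect_data(learners)))
    print(flagged)   # ['learner_a', 'learner_b'] in this toy example
```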
As (Gašević et al. 2015) suggest, this circular arrangement can therefore achieve what many instructors
want: They gain insight into the learning processes and can easily identify which weaknesses are present at which
time and with whom. Using methods that go beyond simple recording of performance, conclusions can thus be
drawn about which subject areas have caused problems for individual learners or an entire group. In this way, the
teacher can provide targeted feedback on how learning could be improved or adapt their teaching.
Games in Education
Ever since the prices of personal computers (PCs) dropped to affordable levels in the 1980s, scientists have
been impressed by the motivational boost digital games can exert, as (Laine and Lindberg 2020) explain. This led to
the idea early on that the motivating effects of computer games could also be used for learning. Since many students, given the choice, would rather play video games than do homework or read a book, the merging of education and entertainment was seen as a solution to the problem of lacking motivation, (Alsawaier 2018) sets forth. Making use of game elements in the classroom was thus seen as a way to increase engagement.
When it comes to games in education, two terms have become established: gamification and game-based
learning (GBL). While gamification is about integrating game elements into a non-game context to boost motivation
and engagement, GBL means that learners play a game in order to learn some content. GBL thus requires the existence of a learning game that comprises a beginning and an end, as (Alsawaier 2018) claims. (Al-Azawi et al. 2016) discuss that gamification seeks to transform the whole learning process into a game by adding game logic, game mechanics and game design, whereas GBL needs a game as the foundation of the learning process. GBL further gives the learner the feeling of playing a computer game, they indicate.
Although the two concepts represent different approaches, both pursue the goal of transferring positive
experiences from games to serious situations, (Krath et al. 2021) elucidate. “Through game elements such as points,
levels, badges, quests, and many more, gamification can transparently illustrate goals and their relevance, lead users
through guided paths to goal-oriented activities, give users immediate feedback and reinforce good performance
positively, and simplify content to manageable tasks," they explain. Games can thus not only supply the previously described cycle of LA with data, as (Kim et al. 2022) describe; they can also influence a learner’s attitude towards
learning due to the fun factor of games. Fun plays a motivating as well as a relaxing role, (Alsawaier 2018)
illustrates. Through relaxation, knowledge content can be better absorbed and through the increased motivation,
learners make more effort, he says. Thus, in order to make dull activities more exciting, the fun factor is put into
play. The perceived fun itself emerges within a player from accomplishment, investigation and rewards, he further
indicates.
Common Mistakes in Early Stages of Learning Addition and Subtraction
A usual counting sequence, one of the earliest mathematical encounters, starts with the number one. Hence, in the early stages of learning, learners might assume that no smaller number exists. Still, "children seem to have an
early understanding of quantifiers [...] that express a cardinality of zero items", (Kadosh and Walsh 2008) claim.
Some young learners nevertheless are not able to establish an idea of zero and hence come up with solutions like
7+0=0 because calculating with zero amounts to nothing in their reasoning, (Chaudhuri 2009) suggests. He gives
two more examples, such as 0+0=2 due to the learners simply counting the zeroes or 8-8=1 because some children
think that something simply has to remain.
Especially for learners in their first two years of school, the transition of tens poses a major challenge. As (Gasteiger 2019) states, children at this stage can be divided into four categories concerning their approach to solving additions and subtractions in the range of numbers up to 20: Counters make use of a simple counting-on strategy, mechanical operators can solve additions by a completion step to the next ten, flexible operators are capable of using a variety of strategies to solve a problem and so-called experts have already automated addition and subtraction tasks up to 20. When investigating incorrectly solved problems, experts, namely children who claim to know the answers to problems including a transition of tens without availing themselves of any specific strategy, account for the highest proportion, closely followed by counters, she discusses in detail.
Solutions like 37+26=53 may stem from students correctly adding up the unit digits to 13, but then
forgetting to add the ten from this intermediate result as a one to the summation of the digits of the tens, (Sujarwo
et al. 2020) report. Hence, problems of this kind will result in solutions ten short of the correct one if a learner
experiences this misconception and omits the carry. Incorrect subtractions, like 63-26=47, on the other hand, will rather lead to solutions that are too high because of an omitted borrow.
Some young learners tend to swap digits to facilitate subtractions. In order to avoid a borrow, learners simply flip the unit digits of the minuend and the subtrahend, as described by (Sujarwo et al. 2020). This can be seen in the following incorrect example: 43-19=36. Learners who are under the misconception that this does not change the outcome of their calculation thus treat it as if the left side were 49-13, which explains the false result of 36.
Some learners have the tendency to double count a number when trying to count on as a way of solving an
addition. Hence, an example such as 7+2 would erroneously yield 8, because the counting sequence for the second
summand would start at seven instead of eight. This would lead to an error by one, as observed by (Gaidoschik
2010). It is questionable whether the human brain is well suited to learning addition and multiplication tables by heart. Due to the similarities of 6+2 and 6·2 in appearance and sound, one's brain is prone to unintended
associations, (Gaidoschik 2010) reveals. This provides an explanation why a learner would come up with an
incorrect solution such as 6+2=12.
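The error patterns described above lend themselves to rule-based detection. The following sketch (Python, for illustration only) checks a pupil's answer against some of these patterns; the category names and detection rules are assumptions derived from the examples above and do not reproduce the app's actual ten predefined categories.

```python
# Rule-based sketch of automated error categorization for addition and
# subtraction tasks. The category names and detection rules are assumptions
# derived from the error patterns discussed above; they do not reproduce the
# app's actual ten predefined categories.

def categorize_error(a: int, b: int, op: str, answer: int) -> list:
    correct = a + b if op == "+" else a - b
    if answer == correct:
        return []

    categories = []
    if op == "+":
        if answer == correct - 10 and (a % 10) + (b % 10) >= 10:
            categories.append("carry omitted")              # e.g. 37+26=53
        if answer == a * b:
            categories.append("operation confused")         # e.g. 6+2=12
        if 0 in (a, b) and answer in (0, 2):
            categories.append("misconception about zero")   # e.g. 7+0=0
        if abs(answer - correct) == 1:
            categories.append("counting error by one")      # e.g. 7+2=8
    else:  # subtraction
        if answer == correct + 10 and (a % 10) < (b % 10):
            categories.append("borrow error")                # e.g. 63-26=47
        swapped = (a - a % 10 + b % 10) - (b - b % 10 + a % 10)
        if answer == swapped and (a % 10) < (b % 10):
            categories.append("unit digits swapped")         # e.g. 43-19=36
        if a == b and answer == 1:
            categories.append("misconception about zero")    # e.g. 8-8=1
    return categories

print(categorize_error(37, 26, "+", 53))  # ['carry omitted']
print(categorize_error(43, 19, "-", 36))  # ['unit digits swapped']
```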
The Junior Plus-Minus-Trainer Web App
The Mascot as a Loyal Companion
Learners are supposed to feel accompanied through the practice process in the application, which is why
cartoon mascots are used. Users are confronted with all three available mascots already before the first login to the
Junior Plus-Minus-Trainer. The drawn animals all seem to try to get a glimpse of the learner who is about to enter
the main menu. After the first successful login, a user may choose between a giraffe, an alpaca and a gnu as their
personal mascot. Of course, the mascot can be changed at any time in the main menu. The chosen animal, however, is meant to be a loyal companion throughout the entire game. It does not only appear in the main menu to greet the learner or explain the game modes, it also plays an important role within them. Giving feedback on the user's input and encouraging the learner are its main duties.
An additional feature is each mascot's ability to evolve. After every 20 correctly solved problems (except in one competitive mode), the mascot appears in a different costume. By solving questions correctly, one can thus collect stickers. All unlocked stickers can be viewed together on a separate page accessible from the main menu. Altogether a user can unlock 47 stickers across all mascots. (Fig. 1) shows the main menu of a test user
with the alpaca in an unlocked costume.
Figure 1: Screenshot of a test user's view of the main screen
The Four Game Modes
The app comes with four different game modes which have been created to attract a wide range of learners.
One mode focuses on targeted practice of a learner's weakest kinds of problems from predefined error categories,
while another one especially encourages competition. Three of the modes are self-paced with no time limit and one
is constrained to last exactly one minute. Each question presented in a game session has an assigned difficulty. There
are four levels of difficulty, which merely depend on the possible range of the natural numbers occurring in the
question: level 1 (up to 10), level 2 (up to 20), level 3 (up to 50) and level 4 (up to 100). (Fig. 2) shows a screenshot
of a game session with the mascot in its supporting role.
Figure 2: Screenshot of a game session
Mix-Mode
First, a pseudo-random calculation (difficulty level 2) is generated. If this created problem fulfills the
requirements for one (or more) error categories, the number of traps laid out in the corresponding error categories is
incremented. If a learner commits an error, the number of committed errors is incremented by one in the respective
error category. In this way, the quotient of the committed errors divided by the laid out traps for an error category
can be used to find out how often a child has "fallen into the trap" in relative terms. This, however, does not work
for categories which have no prerequisites, because traps cannot be laid out in a meaningful way in these cases.
Errors reduce the degree of difficulty for the next task (except for level 1), correct solutions increase it
(except for level 4). A total of 20 problems has to be solved in order to finish a game in this mode. Care is taken
to ensure an equal distribution of the laid out traps. The difference between the most frequent and the rarest
occurring trap is therefore one. If, for whatever reason, a user ends a game session prematurely, the last problem, which has already been presented to the user, has no user solution. Such cases show up in the user's statistics as unanswered problems.
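A minimal sketch of this bookkeeping (again in Python for illustration; the app itself runs on PHP and JavaScript, and the counter and function names below are invented) could look as follows:

```python
# Minimal sketch of the mix-mode bookkeeping described above: per-category
# trap and error counters plus the difficulty adjustment. The counter and
# function names are illustrative and not taken from the app's code base.

from collections import defaultdict

traps_laid = defaultdict(int)    # how often a category's trap was presented
errors_made = defaultdict(int)   # how often the learner fell into that trap

def record_result(categories, answered_correctly):
    for cat in categories:
        traps_laid[cat] += 1
        if not answered_correctly:
            errors_made[cat] += 1

def error_ratio(cat):
    # relative frequency of "falling into the trap" for one category
    return errors_made[cat] / traps_laid[cat] if traps_laid[cat] else 0.0

def next_difficulty(level, answered_correctly):
    # correct answers raise the level (max 4), errors lower it (min 1)
    return min(level + 1, 4) if answered_correctly else max(level - 1, 1)

# Example: a carry trap was laid and the learner fell into it.
record_result(["carry omitted"], answered_correctly=False)
print(error_ratio("carry omitted"))   # 1.0
print(next_difficulty(2, False))      # 1
```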
Master-Mode
Game sessions in this mode work the same as those in the mix-mode with the sole difference that the
difficulty always remains at level 4.
Highscore-Mode
Here, no attention is paid to equal distribution of error categories. An open-ended number of pseudo-random tasks of difficulty level 4 appears. Learners have exactly one minute to provide as many correct results as
possible. Correct results increment the points achieved, while incorrect results decrement the score. The more points
they can achieve after the countdown from 60 seconds has reached zero, the better. Point counts cannot be negative
at any time.
Learners, or more precisely only users with the student role, can end up on the leaderboard with good performances.
Three different rankings exist: an all-time, a monthly and a daily ranking. Users should not have to worry about
spoiling their statistics in this mode. The focus here is on speed rather than mere accuracy. Thus, this mode is not
counted for statistics.
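The scoring rule can be summarised in a few lines. The sketch below is illustrative only and simplifies the one-minute limit to a plain time check:

```python
# Illustrative sketch of the highscore-mode scoring rule: correct answers add
# a point, incorrect ones subtract a point, and the score never drops below
# zero. The one-minute limit is simplified to a plain time check.

import time

def play_highscore(answers, time_limit_s=60.0):
    """answers: iterable of booleans (True = correct), e.g. coming from the UI."""
    score = 0
    start = time.monotonic()
    for correct in answers:
        if time.monotonic() - start > time_limit_s:
            break
        score = score + 1 if correct else max(score - 1, 0)
    return score

print(play_highscore([True, True, False, False, False, True]))  # 1
```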
Training-Mode
Here, the ratio of errors to corresponding laid out traps comes into consideration. The higher this ratio for a
certain error category is, the earlier problems of this kind are trained in this mode. Therefore, this mode becomes
increasingly meaningful the more data is already available in a user's statistics. The session starts with the weakest
category in difficulty level 1. As in the mix-mode, correct solutions increase the difficulty level and wrong ones
decrease it (except in level 1). Only when a task of difficulty level 4 has been solved correctly does the error category of the presented task change, even if the category just practiced would still be the user’s weakest one. The error category just practiced will then no longer occur in this training session. In order to prevent potential frustration about not being able to move on to another category, the provoked error category is automatically changed after a user has seen ten
questions of the same category.
Problems from the "weakest" categories are asked one after the other, always starting with difficulty level
1, until the child has mastered the last one, i.e. the most successful one so far. It is pointed out, however, that the
training is a quasi-endless mode, because a complete session can be lengthy. For this reason, it is made easy and
considered legitimate for the child to end the current training session without feeling that they have quit or given up.
Learners can end the training session at any time and then see an overview of it. Depending on their performance, encouraging messages appear. Ending a game prematurely without influencing one's statistics is only possible in
training mode. Here, a question that is not answered is deleted from the database at the end and therefore does not
appear as an unanswered question in the statistics.
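Assuming per-category counters like those in the illustrative mix-mode sketch above, the ordering of categories for a training session could be expressed as follows (again a hedged sketch, not the app's actual code):

```python
# Sketch of the training-mode ordering: categories with the highest
# error-to-trap ratio are practiced first. The counter names mirror the
# illustrative mix-mode sketch above and are not the app's actual identifiers.

def training_order(errors_made, traps_laid):
    def ratio(cat):
        return errors_made.get(cat, 0) / traps_laid[cat] if traps_laid.get(cat) else 0.0
    # weakest category (highest error ratio) first
    return sorted(traps_laid, key=ratio, reverse=True)

errors = {"carry omitted": 3, "borrow error": 1}
traps = {"carry omitted": 4, "borrow error": 5, "unit digits swapped": 2}
print(training_order(errors, traps))
# ['carry omitted', 'borrow error', 'unit digits swapped']
```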
Statistics and History
The app promotes the concept of LA. From the data gathered during game sessions, valuable statistics can be generated. A user’s errors are split up into 10 predefined error categories, which are tailored
to the already discussed most common errors of young learners and stored in a database. Players can view their own
statistics and detailed record at any time. An overview of all additions, subtractions and total calculations is
available. Furthermore, a list of the most recent tasks with a clear indication of whether the solution was correct or
incorrect, a possible correction and further details can be viewed. Moreover, users can see how many times they
have fallen into which laid out trap and which error category has thus caused the most difficulties. Teachers also
have access to the overall and individual statistics of all their students. This allows them to close the LA cycle by
reacting to the results. An exemplary screenshot of an overview page of a test user’s error categories from a
teacher’s point of view can be seen in (Fig. 3).
Figure 3: Screenshot of a test user's committed errors for each predefined error category
Implementation
The implemented web app makes use of the commonly used model-view-controller (MVC) pattern of
software technology. The point behind this form of development is that by concretely separating what is displayed
from what happens in terms of software logic, both collaboration and maintenance are simplified (https://developer.mozilla.org/en-US/docs/Glossary/MVC, retrieved April 11, 2023).
The front end is implemented with the proven technologies Hypertext Markup Language (HTML), Cascading Style Sheets (CSS) and JavaScript (JS), with the Bootstrap framework as a facilitator. Since the Laminas framework is employed on the server side, the PHP: Hypertext Preprocessor (PHP)
programming language is in use there. The development itself took place within a Vagrant box, so that compatibility
problems could be ruled out from the start.
To allow the meaningful storage and use of the data collected in the game, they are stored in a database. An
application programming interface (API) was developed with Laminas API Tools so that the underlying database
could also be used by other applications. The provided services pave the way for native apps.
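As an illustration of how a client could use such a REST interface, the hypothetical request below posts a finished task to the server. The endpoint path, field names and the omitted authentication are assumptions, since the paper does not document the actual API:

```python
# Hypothetical client-side sketch of posting a finished task to a REST API
# such as the one built with Laminas API Tools. The endpoint path, field
# names, and the missing authentication are assumptions; the paper does not
# document the actual interface.

import requests  # third-party HTTP library

payload = {
    "task": "37+26",
    "given_answer": 53,
    "correct": False,
    "difficulty": 3,
}

response = requests.post(
    "https://example.org/api/game-results",  # hypothetical endpoint
    json=payload,
    timeout=5,
)
response.raise_for_status()
print(response.status_code)
```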
Evaluation
After the final version of the web app had been implemented and published, it was to be tested. Several primary schools initially showed interest in testing the app, but it turned out that none of them was equipped with the necessary resources for a meaningful implementation, namely PCs or laptops in adequate numbers. Thus, the testing was completed with 49 school children from secondary education. Fictitious test accounts had been created in advance and were randomly drawn by the students. As the paper slips with the login data were not collected after completion of the testing, it was carried out anonymously.
The testing itself and the contained survey were meant to function as a proof of concept (POC), which
means that the feasibility was investigated. The goal was to demonstrate the practicality of the web app on the one
hand and to provide valuable data to generate statistics on the other hand. (Tab. 1) provides an overview of the
schedule of the testing.
Time in minutes | Description of the activity
10 | The students arrive at the computer labs and log in to the PCs, the fictitious login data for the web app are drawn and the children log in with the credentials.
10 | The children are exploring the app on their own.
5 | The most important functions of the app are briefly presented and discussed.
5 | Students are asked to play at least three highscore games each.
10 | The children are supposed to try to unlock as many stickers as possible.
10 | Students receive a link to the online survey and complete the questionnaire.
Table 1: Schedule of the testing with students from secondary education.
Results of the Conducted Survey
Since the test users are not part of the actual target group of the app, their performance during testing does
not play a significant role. Due to the higher age and the accompanying greater experience, the results obtained
cannot be extrapolated to primary school children. Thus, the focus is on the users’ subjective perception of the app,
which was measured with an online questionnaire at the end of testing. Of all 49 participants, 46 submitted their answers.
Whenever learners were asked to give a rating, a scale of 1 to 5 points was used. Here, 1 point means the
worst and 5 points the best possible score. Apart from these kinds of question formats, mainly single choice and free
text questions were posed. (Fig. 4) provides an overview of the distributions of awarded points for all questions
which included ratings. The answers to all these questions are detailed below individually in terms of basic
descriptive statistics. The abbreviated designations occurring in the square brackets correspond to the labels in the
figure.
Figure 4: The test users' ratings for various aspects of the web app
[Design] How do you like the design/layout of the app (colors, structure, ...)? With one exception, only two different answers were given here. Accordingly, the students mostly agreed that the design of the app is very good
or good. This is also reflected in the standard deviation of 0.55 around the mean value of 4.52. The full score
occurred most frequently, and the median is also 5.
[Design (younger)] How well do you think the design/layout of the app (colors, arrangement, ...) is suitable
for children aged 6-8? This question received both the maximum score and the minimum of one point. However, since the minimum score was awarded only once and, apart from a single award of 3 points, only 4 or 5 points were given otherwise, this rating can be considered an outlier. This is also reflected in the standard deviation of 0.74. The mean value is the second highest of all the questions asked, at 4.65. The overwhelming
majority of test users awarded the maximum points here.
[Mascots] How do you like the mascots? This question received the highest average score of all, combined with the lowest standard deviation: 0.53 with an arithmetic mean of 4.74. Of all 46 participants, 36 answered this question with the highest score. No ratings below three points were awarded in this case.
[Unlocking stickers] How motivating or good would you rate the unlocking of new stickers of your mascot?
The possibility to unlock stickers delivered the second lowest value in terms of the arithmetic mean. Since more than
half of all children awarded the maximum number of points, the median and mode values are still at 5 points.
[Highscore mode] How motivating or good would you rate the highscore mode and the leaderboard?
Regarding the arithmetic mean, this question received by far the worst feedback from the students. The spread of the results is also highest here. On average, the subjects gave 3.87 points with a standard deviation of 1.02.
Furthermore, this is the only question where neither the median nor the mode are 5; both are 4. Apart from that, this
is the only question where all ratings from 1 to 5 were given.
[App altogether] How motivating or good would you rate the app in general? Here again, both the
minimum and maximum scores occurred. As with question 2, the award of the minimum number of points can once
again be considered an outlier. On average, the learners awarded 4.46 points for the app in general with a standard
deviation of 0.81. The mode as well as the median are at 5 points.
Apart from questions where a rating was required, the most popular game mode and the most played mode,
among others, were investigated. 36.96% of the respondents (17 children) said that they liked the training mode the most. Not far behind, the mix mode was chosen by 32.61% (15 children). 12 (26.09%) of the
test persons chose the highscore mode as their favorite game mode and the clearly most unpopular game mode was
the master mode with only two votes (4.35%). The order of the most played modes corresponds to that of the most popular game modes. The distribution differs slightly: Training received 18 (39.13%), mix 15 (32.61%),
highscore 10 (21.74%) and master 3 (6.52%) votes.
Furthermore, three optional and open questions were posed. The users were asked to indicate whether there
were any aspects of the app that they particularly liked and, if so, to describe them. Of all the 46 participants in the
survey, 27 gave meaningful answers to this free-text question. Among them by far the most positive responses were
related to the mascots and stickers. Based on the answers given, it is not always possible to determine in retrospect whether these two terms were sometimes used synonymously by the testers. The mascots or a specific mascot
were mentioned 13 times in praise. 8 times, a specific sticker or the possibility of unlocking new stickers was
mentioned. In two cases, the design or background of the app was entered as an answer. There were also two votes
for the highscore mode or the leaderboard. One student mentioned the mix-mode positively and the diversity of
modes received one vote as well.
Other than that, the children were asked whether they had any suggestions on how to improve the app. There were 18 responses to this question. A large portion of the feedback was related to acoustic features that could be integrated into the game. To be more exact, 8 children stated that they would like to have some music and sound effects in the app. Four subjects thought that a talking mascot would be a good idea. Two users
reported back that they would enjoy having more time for completing a highscore game.
The last question gave the participants the opportunity to state if they found any errors or problems in the
app. A total of four answers were entered for this question. Two users encountered the problem that the app did not
work properly, no tasks appeared and the background remained white. One student pointed out that there were
repeated dropouts of about 5 seconds when playing the highscore mode. One tester also stated that the lack of music
was a problem or error.
Correlations
In addition to descriptive statistics, which merely depict individual aspects of the app, correlations between
variables were also examined. To be more precise, possible connections between the user's subjective perception
towards the mascots and the app in general were investigated. Moreover, the learners' ratings of the unlocking
feature as well as the highscore mode were each tested for whether they correlate with the general assessment of the
app. Thus, Spearman's rank correlation test was performed. The resulting Spearman's rank correlation coefficient ρ
and the corresponding p-value, which is decisive in terms of significance, are shown in (Tab. 2).
Tested variables | ρ | p
mascots - app in general | 0.322 | 0.029
unlocking stickers - app in general | 0.379 | 0.009
highscore mode/list - app in general | 0.287 | 0.053
Table 2: Results of Spearman's rank correlation test for selected aspects of the app
Since the coefficient ρ is greater than zero in all three cases, a positive monotonic relationship between the variables exists. The determined p-values provide indications as to whether the results are statistically significant. Since the p-values regarding the correlations between the feeling towards the mascots and the app in general, as well as the unlock function and the app overall, are below 0.05, statistical significance can be assumed. For the third tested case, the relation between the perception of the highscore mode/list and the app in general, statistical significance could not be substantiated. Having shown the statistical significance of the correlation between the variables in the first two rows of (Tab. 2), we now examine the effect size of the correlation so that it is clear how meaningful the results are. The calculated Spearman's rank correlation coefficient ρ itself is a measure of the effect size. Both investigated cases show a value of ρ > 0.3, which implies a medium effect size (https://www.methodenberatung.uzh.ch/de/datenanalyse_spss/zusammenhaenge/rangkorrelation.html, retrieved February 15, 2023).
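Such an analysis can be reproduced with standard statistics libraries. The sketch below computes Spearman's ρ and the p-value for two rating vectors; the numbers are made up for illustration and do not correspond to the study's raw data:

```python
# Sketch of the correlation analysis: Spearman's rank correlation between two
# sets of 1-5 ratings. The rating vectors are made up for illustration and do
# not correspond to the study's raw data.

from scipy.stats import spearmanr

mascot_ratings = [5, 4, 5, 5, 3, 4, 5, 5, 4, 5]
overall_ratings = [5, 4, 4, 5, 3, 5, 5, 4, 4, 5]

rho, p_value = spearmanr(mascot_ratings, overall_ratings)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```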
Discussion
Since only four responses were received from all 46 participants in the online survey regarding problems with the app, half of which were caused by using a browser that is no longer supported (Internet Explorer, retired in 2022; https://www.microsoft.com/en-us/download/internet-explorer.aspx, retrieved March 2, 2023) and one of which actually described a missing feature, it can be inferred that the app worked fine for the vast majority of testers.
The fact that the mascots received the highest average score of all surveyed features is not surprising. As
explained in the chapter on games in the classroom, fun comes from accomplishment and exploration, among other
things. A user’s mascot takes on the role of accompanying them on their journey of discovery through the app and
has words of praise or encouragement for the user, which is assumed to contribute to its positive perception. It can
be stated that the mascots had a motivating effect on the participants. The mean value of the scores given was
considerably higher than that of the app in general, and the Spearman's rank correlation test carried out showed that
there was a medium correlation between the motivation triggered by the mascots and the motivation with regard to
the app itself.
Due to the competitive component of the highscore mode, which is a motivating factor of games in general,
it would have been expected that this mode would excite many players. In retrospect, the scheduled time slot of 5
minutes may have been slightly tight, as learners also had to get used to this rather stressful mode first. It may also
be that some users have taken such a liking to the mascots that they did not appreciate the one mode that does not
advance the unlocking process. Another reason for the low approval of the highscore mode could be the fact that the
test users, who should already be able to master this type of basic arithmetic at their age, did not see these rather simple tasks as an opportunity to prove themselves. Apart from that, the leaderboard only fills up and creates competition once the app is in widespread use, which was not yet the case at the time of testing. Correspondingly,
no significant correlation regarding the subjective perception of motivational ability could be found between the
competitive high score mode and the app as a whole. Since the competitive mode is only the third most popular
game mode on the web app, it can be inferred that the self-paced modes are favored by the players.
As the results of the data collection have shown, both the mascots themselves and the function of unlocking
them correlate with the overall satisfaction towards the app. Judging by the comparison of arithmetic means,
unlocking stickers has a less motivating effect than the app per se. Hence, the one aspect detected via the conducted survey that positively contributes to the motivation towards using the app is the existence of the mascots. Besides that, the literature research showed that especially the fun factor plays a motivating role. The feeling of fun arises in the players when they are rewarded, when they may explore, or when they have accomplished something special.
Conclusion
This work aimed to identify aspects which enhance learners' motivation towards a gamified learning app.
The integrated mascots in particular prove to be an influencing factor worth mentioning. To specifically address the
research question defined in the introduction, it can be stated that both the mascots themselves and their ability to
evolve with new costumes had a motivating effect on the subjects. The research showed that the perceptions of the
mascots and the app in general correlate to a medium degree. The same goes for unlocking new costumes and the
general feeling towards the app. Furthermore, the evolving feature and the mascots were frequently praised by the
pupils. That being said, the study indicates that self-paced game modes were better received by the subjects than the
competitive one.
It should be noted that the results of the study are subject to certain limitations. With a number of 46
submitted questionnaires, the test group size is rather small. Apart from that, the test subjects were significantly
older than the intended target group. Therefore, the results cannot be transferred unreservedly to primary school
children.
When developing the app, the emphasis was placed on scientifically proven characteristics such as
incentives and rewards, which are intended to make the user experience fun. Also in terms of LA, the app provides
opportunities for teachers and learners to receive feedback on the learning process. The proof of concept has shown
that the developed Junior Plus-Minus-Trainer web app works well and thus integrates appropriately into Graz
University of Technology's list of learning apps (https://schule.learninglab.tugraz.at/, retrieved April 30, 2023).
References
Al-Azawi, R., Al-Faliti, F., and Al-Blushi, M. (2016). Educational gamification vs. game based learning: Comparative study.
International Journal of Innovation, Management and Technology, 7(4):132–136.
Alsawaier, R. S. (2018). The effect of gamification on motivation and engagement. The International Journal of Information and Learning Technology, 35(1):56–79.
Chaudhuri, U. (2009). Mit Fehlern rechnen. Auer Verlag in der AAP Lehrerfachverlage, Augsburg, Germany, 1 edition.
Clow, D. (2012). The learning analytics cycle: Closing the loop effectively. In Proceedings of the 2nd International Conference
on Learning Analytics and Knowledge, LAK ’12, page 134–138, New York, NY, USA. Association for Computing Machinery.
Dash, G., Chakraborty, D., and Alhathal, F. (2022). Assessing repurchase intention of learning apps during covid-19. Electronics,
11(9):1309.
Gaidoschik, M. S. (2010). Die Entwicklung von Lösungsstrategien zu den additiven Grundaufgaben im Laufe des ersten
Schuljahres. PhD thesis, Wien.
Gasteiger, H. (2019). Strategieverwendung bei Additionsaufgaben mit Zehnerübergang Ende Jahrgangsstufe 2.
Gašević, D., Dawson, S., and Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1):64–71.
Joksimovic, S., Kovanovic, V., and Dawson, S. (2019). The journey of learning analytics. HERDSA Review of Higher Education,
6:27–63.
Kadosh, R. C. and Walsh, V. (2008). From magnitude to natural numbers: A developmental neurocognitive perspective.
Behavioral and Brain Sciences, 31(6):647–648.
Khalil, M., Prinsloo, P., and Slade, S. (2022). A comparison of learning analytics frameworks: a systematic review. In LAK22:
12th International Learning Analytics and Knowledge Conference, pages 152–163.
Kim, Y. J., Valiente, J. A. R., Ifenthaler, D., Harpstead, E., and Rowe, E. (2022). Analytics for game-based learning. Journal of
Learning Analytics, 9(3):8–10.
Krath, J., Schürmann, L., and von Korflesch, H. F. (2021). Revealing the theoretical basis of gamification: A systematic review
and analysis of theory in research on gamification, serious games and game-based learning. Computers in Human Behavior,
125:106963.
Laine, T. H. and Lindberg, R. S. N. (2020). Designing engaging games for education: A systematic literature review on game
motivators and design principles. IEEE Transactions on Learning Technologies, 13(4):804–821.
Lang, C., Wise, A. F., Merceron, A., Gašević, D., and Siemens, G. (2022). What is learning analytics? In Lang, C., Siemens, G.,
Wise, A. F., Gašević, D., and Merceron, A., editors, The Handbook of Learning Analytics, pages 8–18. SoLAR, 2 edition.
Section: 1.
Sujarwo, M. et al. (2020). Analysis on mathematics learning misconceptions of the second-grade students of elementary school in
addition and subtraction integer topics. In 3rd International Conference on Learning Innovation and Quality Education (ICLIQE
2019), pages 757–764. Atlantis Press.