The Development of Two Math E-Learning Systems for Primary School Children in Android
Markus Plieschnegger
Graz University of Technology
Austria
m.plieschnegger@student.tugraz.at
Josef Wachtler
Educational Technology
Graz University of Technology
Austria
josef.wachtler@tugraz.at
Martin Ebner
Educational Technology
Graz University of Technology
Austria
martin.ebner@tugraz.at
Abstract: Electronic learning, with all its different variations, approaches and applications, is a highly
relevant topic in today’s education. E-Learning is a wide field, as it affects classes in school as well
as training scenarios for adults. This work describes the development of two E-Learning
systems: EinMalEins Trainer and Division Trainer. Both applications are part of the learning lab of
Graz University of Technology. This document summarizes the theoretical background of the
development process as well as its realization. Much emphasis in this work was placed on applying
different testing methods; therefore, thinking aloud testing as well as automated testing are covered
in more depth. The methodology behind these methods is explained and the results are presented.
Introduction
Motivation
E-Learning is a highly relevant topic today. This work was realized during the corona pandemic, which broke
out in 2019. This crisis and the need for social distancing showed once again that E-Learning is a very important addition
to classical education (Ebner et al. 2020). However, when talking about electronic learning, a certain range of terms
and definitions can be discussed. The topic ranges from the first attempts to use electronic devices and “new media”
for training up to stand-alone learning systems (serious games) and the integration of gaming elements into other
contexts (gamification). These two terms will be discussed in the upcoming sections.
Even when limiting the discussion to the terms serious games and gamification, some interesting questions
regarding the details of the usage of these applications arise: What are the benefits and limitations of serious
games and gamification? Should stand-alone learning systems be seen as completely separate from classic school
education, or should such software products be included? What is necessary to build a successful learning application,
and what motivational factors need to be considered? The upcoming sections address these questions and give an
overview of various topics and considerations in the field of game-based learning.
Graz University of Technology offers a wide variety of electronic learning solutions for different topics (e.g.
math, languages, etc.) that resulted from different theses (Schön et al. 2012, Ebner 2015). Accordingly, different
target platforms are available for the various applications (e.g. web applications, Android or iOS). Two new learning
systems for Android were developed: EinMalEins (German for multiplication table) Trainer and Division Trainer.
Both applications are Android realizations of already existing web learning systems. The target of this development
process was
to implement the theoretical background into working applications that can be used in schools. These applications are
examples of serious games. To ensure the quality of these learning systems, different testing methods were applied:
Besides automated testing (unit testing, integration testing and GUI testing), several thinking aloud tests were
performed. Advantages and limitations of these methods are discussed in this work and the results are presented.

This document summarizes the theoretical background that forms the basis for the development process, gives
an overview of the testing methodology and introduces both applications. Furthermore, the results of the different
testing sessions are presented and the conclusions of the work are provided. Regarding testing, the following research
question should be answered: “How can different types of automated testing and user testing help to improve the
development process of learning applications?”
User Interface Concepts
(Holzinger 2013) describes the term human-computer interaction (HCI) as a research field and part of
informatics. According to this, the topic arose around 1983. Three main measures determine the
success when it comes to interaction between humans and computers:
• Effectiveness: Defines the degree to which a task can be fulfilled.
• Efficiency: Measures the relation of costs/time to benefit.
• Satisfaction: Joy of use, motivation, and fun while using a software product. This is especially important in the
context of E-Learning.
(Holzinger 2013) summarizes the three terms above as usability. This term thus describes much more than just
the possibility to achieve a certain goal (which would be better described by the term effectiveness). A few examples
of what supports the usability of a software product, according to the author:
• Orientation: It is assumed that a software product is visually split into different parts. Users have to know and
understand where they are.
• Navigation: Buttons, links or other navigation elements offer the possibility to get to the desired area of the
software.
• Content: Texts, images, etc. should be presented in an understandable way.
• Interaction elements like buttons and sliders offer the possibility to perform specific actions. The way
these actions are presented raises intuitive expectations in the users. These actions and the expectations of
them should match.
While it is not the only target of the research field of HCI, today’s discussions regarding the usability
aspects listed above are mainly about graphical user interfaces, which reach back to the early 1980s, according to
(Shneiderman 1981). Especially in this context, the topic of visual perception is very important. (Duchowski 2007)
describes the process of perceiving visual information as non-continuous: Only a portion of the visual information
provided by the eyes of a human is processed in detail. This is called focusing. (Duchowski 2007) also mentions that
the fixation on different objects changes frequently. This understanding should be used to lead the attention of a user
to specific GUI elements.
(Rakoczi et al. 2013) provide various examples of how to use knowledge about human visual processing
to improve user experience. One important example is the pop-out effect: Depending on the presentation, information
can be recognized on different layers. Some portions might be pushed into the background by visual perception,
while the rest seems to appear in front (“pops out”). Figure 1 shows an example of this effect.
(Rakoczi et al. 2013) describe visual perception as a bidirectional process: Emitted light that gets reflected
from certain objects into the eye of the viewer constitutes the bottom-up direction of visual perception. However, there
is also a counterpart to that: The experience of a viewer builds up certain expectations of what is about to be perceived;
this is called the top-down direction. These expectations influence the information-building process in a way
that small deviations from the visualized view are compensated. The authors name typos in texts as an example: these
minor deviations from the expectation are hard to spot because they are not actively recognized. It is important to
include the expectations of users in the GUI design, because otherwise the user interface will be experienced as
counter-intuitive. Effects like the pop-out effect should be used to lead the attention of a user towards the more
important UI elements. Accordingly, actions represented by buttons, for example, can be split into two groups: primary
buttons, which appear more prominent, and secondary buttons for actions like “back”. Hint texts and background
design elements should also not catch too much of a user’s attention. In the example from Figure 1, brightness is used
to provide a distinction between the foreground (the nines) and the background. It is important to know that a certain
threshold has to be exceeded so that layers like those displayed in the figure build up. In this example the contrast is
high enough to produce a pop-out effect. (Rakoczi et al. 2013) call this threshold “salience”. Attributes other than
color can also be used to group certain objects together. (Ware 2019) lists geometric attributes like length, width,
size or orientation, for example. Figure 2 shows some of these form effects.
Figure 1: An example of the pop-out effect as described by (Rakoczi et al. 2013)
Figure 2: Different form effects used to group objects together (Ware 2019)
Motivation and Gamification
As already mentioned, the topic of E-Learning involves a lot of different definitions and terms. The origins reach
back to Computer-Based Trainings (CBT) and Web-Based Trainings (WBT), according to (Le and Weber 2011).
Five authors are listed who introduced the term game-based learning: James Paul Gee, Diana Oblinger, Richard Van
Eck, Steven Johnson, and Marc Prensky. This term contains two different approaches that both combine gaming
elements with electronic learning: Serious games, on the one hand, are described by (Abt 1987) as games that have an
educational purpose, while gamification is defined by (Deterding et al. 2011) as the strategy of using game design
elements in other contexts. The difference between these two concepts is displayed in Figure 3. Both concepts are
based on gaming, which, compared to playing, defines certain rules and borders of acting. However, the difference is
that serious games are whole solutions while gamification is based on gaming elements.
In serious games as well as gamification contexts, motivation plays a key role for success: Motivation
ensures the long-term effect when topics are repeated often enough to be remembered. Motivation and games are
certainly a good combination because games rely on the entertainment factor. Even if serious games are designed for
educational benefit, (Abt 1987) clearly states that this does not mean that they are not allowed to be entertaining.
(Klimmt 2008) lists three different kinds of entertainment processes that are triggered by games:
• Self-efficacy experience: Games react to player decisions. These decisions are decisive for the
game result.
• Tension: Uncertainty exists until the result of the game or an action is revealed. Players have to put effort
into their attempts.
• Life/role experience: Players become part of the environment provided by the game.
The self-efficacy experience is best described by a trial-and-error process: Questions are repeated until they
are solved correctly. (Le and Weber 2011) discuss a very interesting finding: Success as well as failure can lead to
motivation. Success raises confidence by providing confirmation; on the other hand, failure challenges players by
inviting them to try a different approach. (Garris, Ahlers, and Driskell 2002) present this as an iterative process: The
game cycle contains user judgments, user behavior and feedback from the system. Debriefing this cycle leads to the
learning outcomes. Figure 4 displays this model.
Figure 3: The distinction between serious games (top, left) and gamification (top, right) (Deterding et al. 2011)
Figure 4: (Garris, Ahlers, and Driskell 2002) describe the learning outcomes as the result of an iterative game cycle
The question is raised whether a learning system should be integrated into a traditional school class or seen as
a standalone solution. (Wagner 2009) answers this question by arguing that the presence of a teacher in an E-Learning
scenario ensures the efficiency of the learning process. He furthermore states that the learning software, teachers and
students are part of a didactic context. This context depends on the expectations of students and on individual
supervision. (Wagner 2009) calls the teacher a moderator in a game-based learning scenario.

Another important question regarding intrinsic motivation and serious games: How should the two kinds of
content (learning subject and gaming content) be presented? More precisely, should these two contents be separated
from each other in a way that players switch between learning and gaming mode? (Jantke 2007) advises against
this method, because the learning content might appear as an annoying disturbance in such a scenario. This is
confirmed by (Bormann et al. 2008), who used eye-tracking technologies to measure the attention towards learning
contents in such applications. The result of this study shows that such information is at best skimmed.

However, standalone learning systems presented as video games are becoming more and more popular as they
are released in the respective app stores. These games have to compete against other video games in the same app
stores, which raises the requirements for their success (Shen, Wang, and Ritterfeld 2009). Quality measures like
stability, proper presentation and intuitive controls are decisive for their success.
Testing Methodology
Two learning applications were developed: EinMalEins Trainer and Division Trainer. To ensure the quality
of both applications, the following testing methods were applied:
• Automated testing:
◦ Unit testing
◦ Integration testing
◦ GUI testing
• Thinking aloud testing
Automated Testing Strategies
Automated tests comprise unit tests, integration tests and GUI tests. The difference between unit tests and
integration tests can be expressed by the quantity of tested functionality: A unit is the smallest possible piece
of code that is tested by a unit test. Integration tests cover multiple such units in one test. GUI tests simulate whole
processes by emulating user inputs and verifying outputs on the application screen.
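To make this distinction concrete, the following minimal Kotlin/JUnit sketch contrasts a unit test of one small conversion with an integration-style test that chains two conversions together. The QuestionConverter object, its mapping and the test names are hypothetical illustrations, not code from the actual applications:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical conversion logic of the kind described later in this paper
// (mapping question labels to question IDs); the mapping is illustrative.
object QuestionConverter {
    // "3x7" -> a compact numeric ID
    fun labelToId(label: String): Int {
        val (a, b) = label.split("x").map { it.trim().toInt() }
        return (a - 1) * 10 + (b - 1)
    }

    fun idToLabel(id: Int): String = "${id / 10 + 1}x${id % 10 + 1}"
}

class QuestionConverterTest {
    // Unit test: exercises one as-small-as-possible piece of logic in isolation.
    @Test
    fun labelToId_parsesASingleLabel() {
        assertEquals(22, QuestionConverter.labelToId("3x3"))
    }

    // Integration-style test: covers multiple units in one test by chaining them.
    @Test
    fun roundTrip_returnsTheOriginalLabel() {
        assertEquals("7x8", QuestionConverter.idToLabel(QuestionConverter.labelToId("7x8")))
    }
}
```

A GUI test would sit one level above this, for example an Espresso test that types an answer into the calculation screen and asserts on the displayed feedback.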
(Khorikov 2020) discusses several questions regarding how automated tests can improve the quality of a software
product. One approach that comes to mind is the measurement of different coverage metrics. According to (Khorikov
2020), branch coverage is superior to line coverage because it distinguishes between the different branches that result
from conditional statements (e.g. if, else, etc.). However, the author concludes that even a high branch coverage does not
guarantee the quality of a software product: Automated tests are only valuable if meaningful assertions are made. On the
other hand, low branch coverage values signal that too much code might be missed by the test suite. (Khorikov 2020)
therefore calls coverage metrics a good negative indicator, but a bad positive indicator.
The question arises whether the quality of an automated test can be quantified. (Khorikov 2020) suggests
doing so by introducing these four pillars of a good unit test:
• Protection against regressions: Can a unit test find a possible bug in the software?
• Resistance to refactoring: Is a unit test likely to keep working if a successful refactoring is done?
• Fast feedback: How long does it take for the test to provide results (run time)?
• Maintainability: Is it easy to understand and modify the test?
The formula provided by (Khorikov 2020) works by assigning a number between 0 and 1 to each pillar and
multiplying these four values, so the calculated value of the unit test is also between 0 and 1. Unfortunately, it is not
possible to maximize all four values independently of each other: For good protection against regressions, more
code needs to be tested, which results in a longer runtime. Furthermore, testing implementation details provides
good protection against regressions but also results in brittle tests which are not resistant to refactorings. Now
the question arises how to maximize the resulting value. (Khorikov 2020) states that resistance to refactoring is more of
a binary value: either a test is resistant or not. A value of 0 for this pillar would result in the product being 0 as well,
so the only option here is to maximize this value and not test implementation details. The pillars protection against
regressions and fast feedback can now be balanced out: If a test runs more code, it ensures more protection but takes
more run time; these are typically integration tests (or even UI tests). Tests that only exercise a small piece of code, on the
other hand, run quickly; these are unit tests. Figure 5 shows what this balancing process looks like.
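The scheme just described can be stated compactly. A minimal formalization follows; the symbol names are chosen here for illustration and are not Khorikov’s own notation:

```latex
% Value V of a test as the product of its four pillar scores, each in [0,1]:
% p = protection against regressions, r = resistance to refactoring,
% f = fast feedback, m = maintainability.
V = p \cdot r \cdot f \cdot m, \qquad p, r, f, m \in [0, 1]

% Worked example: p = 0.8, r = 1, f = 0.5, m = 0.9 gives
% V = 0.8 \cdot 1 \cdot 0.5 \cdot 0.9 = 0.36,
% whereas r = 0 collapses the product to V = 0 regardless of the other pillars.
```

The multiplicative form encodes the argument above: a zero in any single pillar, in particular resistance to refactoring, zeroes out the value of the whole test.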
As integration tests cover more code than unit tests, fewer integration tests (and even fewer
UI tests) are needed. Together, the unit tests should still cover the most important parts of the code: the
business logic containing the algorithms. Fewer integration and even fewer UI tests result in run times for the
whole testing suite that are still acceptable.
Thinking Aloud Testing Strategies
(Andrews 2023) describes thinking aloud tests as a type of user test where a facilitator encourages different
test persons to verbalize their thoughts while trying to solve specific tasks. In an ideal scenario these tests are filmed.
Analyzing the video material helps to detect moments of surprise for test users, even if they do not state their feelings
that clearly. These tests help to find user experience flaws that arise when UI elements seemed intuitive to the
designers of a software product but turn out to be counter-intuitive to users.
Figure 5: Balancing protection against regressions and fast feedback (after (Khorikov 2020))
According to (Andrews 2023), the preparation of these tests is the key success factor for good results.
Therefore, a lot of emphasis should be placed on explaining to test users that they are not being tested for their
competence to control the application. Instead, it should be made clear that the software is tested for its design. In the
case of EinMalEins Trainer and Division Trainer, mostly school children participated as test users; for children it is
even harder not to feel the pressure of being tested. (Andrews 2023) suggests showing the test user another thinking
aloud test (of another software product) to explain the principles of the testing method. A slight variation was applied
in the course of this study: Before the actual test was started, the facilitator played the role of the test user and one
parent took the job of the facilitator. In this scenario a full thinking aloud test of a completely different application was
performed and shown to the actual test user. This strategy worked well, because the test user felt no pressure during
this demonstration and saw that not being able to fulfill a task is expected.
A few disadvantages of thinking aloud tests should also be mentioned: It should be expected that the testing
environment slows the test user down ((Ericsson and Simon 1993) quantify this at 17%). Therefore, thinking aloud
tests should rather focus on effectiveness and satisfaction than on efficiency. Furthermore, (Andrews 2023) states that
the testing environment might also influence the test user’s problem-solving behavior. Therefore, the results should
be interpreted with care, especially if a situation occurs where the test user runs out of limited time. However, allowing
the user to make multiple attempts might mitigate this issue, because the user will probably not explain every thought
again on the second try but focus more on solving the question.
Implementation
Two mobile applications were developed: EinMalEins Trainer and Division Trainer. Both applications are
described in the following sections.
EinMalEins Trainer
“Einmaleins” is the German name for the multiplication table. Accordingly, EinMalEins Trainer can be used to
practice these operations from 1x1 to 9x10. The application is targeted at primary school children. The user interface
is simple by design; it should be intuitive to use. The main menu and the calculation screen are displayed in Figure
6.
Figure 6: EinMalEins Trainer: The main menu (left) and the calculation (right) screens
The application provides some motivational elements like a score and avatars that can be unlocked. The
availability of these avatars depends on specific score thresholds that have to be reached. The application consists of two
parts: the client that runs on Android and the server API written in Laminas (https://getlaminas.org/, visited on
2023/05/10), which was not part of this project.
However, this API had to be extended to support the calls necessary to fetch the score and the time limit setting. The
server API is able to track the successes and failures of users; therefore, a star displays how well a specific question is
known by a user. Repeated correct answers fill up the star. The application also provides a star overview to
visualize the general progress of a user. The avatar selection and the star overview are presented in Figure 7.

Figure 7: EinMalEins Trainer: Avatar selection (left) and star overview (right) screens
The calculation can be started in two different ways: The practicing mode (in German “üben”) works offline
and randomly selects questions. This mode does not track any user statistics and therefore no score is calculated.
The gaming mode (in German “spielen”) provides the mentioned motivational features (star overview, score).
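As a minimal sketch of how the offline question selection of the practicing mode could look; all names are hypothetical and the real code base may differ:

```kotlin
import kotlin.random.Random

// Illustrative model of a multiplication question in the 1x1 to 9x10 range.
data class MultiplicationQuestion(val a: Int, val b: Int) {
    val answer: Int get() = a * b
    override fun toString() = "$a x $b = ?"
}

// Practicing mode: pick a random question; no statistics, no score.
fun randomQuestion(rng: Random = Random.Default): MultiplicationQuestion =
    MultiplicationQuestion(
        a = rng.nextInt(1, 10),  // first factor 1..9
        b = rng.nextInt(1, 11)   // second factor 1..10
    )

fun main() {
    val q = randomQuestion()
    println(q)                     // e.g. "7 x 4 = ?"
    println("Answer: ${q.answer}") // checked locally, nothing is sent to the server
}
```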
Division Trainer
Division Trainer is another electronic learning system targeted at primary school children. It teaches the
division operation as it is taught in school classes. This operation is more complex than the one in EinMalEins
Trainer; therefore more interaction is necessary and the calculation screen includes more controls. While the
divisor is always a number smaller than 10, the dividend can have multiple digits.
To cover multiple ways in which such a calculation can be done, two different modes are supported (example
screenshots are shown in Figure 8; a sketch of the step computation follows below):
• Long mode: In this mode every step of the calculation consists of three parts: First, a partial division has to be
performed and the whole-number part of the quotient has to be typed into the respective field. Afterwards, the
product of this quotient digit and the divisor has to be calculated and written into the correct field under the
dividend. The third step is to subtract the last two lines below the dividend. This process is repeated until the
operation is completed.
• Short mode: In this mode, steps two and three of the long mode are combined into one: The
product built from the partial quotient and the divisor is not written in a separate line but subtracted
immediately from the line above.

Figure 8: Division Trainer: The main menu (left) is in the same design as the one from EinMalEins Trainer.
The same calculation is demonstrated in long mode (middle) and short mode (right).
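The long-mode steps described above can be sketched as follows; function and type names are illustrative, not taken from the Division Trainer code base:

```kotlin
// One long-mode step: bring-down/partial division, product, subtraction.
data class DivisionStep(val partial: Int, val quotientDigit: Int, val product: Int, val remainder: Int)

// Computes the long-mode steps digit by digit (a sketch, not the app's code).
fun longDivisionSteps(dividend: Int, divisor: Int): List<DivisionStep> {
    require(divisor in 1..9) { "the divisor is always a single-digit number" }
    val steps = mutableListOf<DivisionStep>()
    var carry = 0
    for (digit in dividend.toString().map { it - '0' }) {
        val partial = carry * 10 + digit  // bring down the next digit of the dividend
        val q = partial / divisor         // step 1: partial division -> quotient digit
        val product = q * divisor         // step 2: quotient digit times divisor
        carry = partial - product         // step 3: subtraction below the dividend
        steps.add(DivisionStep(partial, q, product, carry))
    }
    return steps
}

// Example: 84 / 7 yields the steps (8, 1, 7, 1) and (14, 2, 14, 0),
// i.e. the quotient digits 1 and 2 -> 12 (a leading zero quotient digit would
// be suppressed in the on-screen notation). In short mode, step 2 would not
// be written down separately but subtracted immediately.
```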
As already mentioned, this application provides questions that are harder to solve than those in EinMalEins
Trainer. For this reason, the application supports a leveling system (which is also a motivational factor) that influences
the difficulty of the questions. The difficulty is raised by using larger numbers for the dividend.

As in EinMalEins Trainer, a practicing mode and a gaming mode are provided. Again, the practicing mode
generates random numbers without tracking any statistics. The leveling system, including questions of appropriate
difficulty, is available in the gaming mode only.
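A minimal sketch of such a level-dependent question generator; the concrete level-to-range mapping is an assumption made for illustration, not the values used by Division Trainer:

```kotlin
import kotlin.random.Random

// Hypothetical mapping from level to the allowed dividend range:
// higher levels use dividends with more digits.
fun dividendRange(level: Int): IntRange = when {
    level <= 1 -> 10..99      // two-digit dividends
    level == 2 -> 100..999    // three-digit dividends
    else       -> 1000..9999  // four-digit dividends
}

// Gaming mode: question difficulty follows the current level.
fun randomDivisionQuestion(level: Int, rng: Random = Random.Default): Pair<Int, Int> {
    val divisor = rng.nextInt(2, 10)  // single-digit divisor
    val range = dividendRange(level)
    val dividend = rng.nextInt(range.first, range.last + 1)
    return dividend to divisor
}
```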
Evaluation
Both applications, EinMalEins Trainer as well as Division Trainer, were tested using different approaches.
The following sections list the results and possible improvements that were discovered during these tests.
EinMalEins Trainer
These automated tests are provided for EinMalEins Trainer (all of them pass in the current version):
• API response tests for all relevant server calls. The communication for these tests is emulated.
• Seven unit tests which test different conversions.
• One GUI test which navigates through the application and solves three calculations correctly and one
incorrectly (on purpose).
The automated tests were created early in the development process to ensure that the server responses are
parsed correctly. For this purpose, real responses were saved and used for the emulation. As the application uses a lot of
conversions from labels to question IDs and the other way around, these conversions were also covered early by tests.
This helped during the development process, as it was easy to rule out conversion errors in case of any issues.
The GUI test showed that the whole process worked correctly as soon as the tasking engine was implemented.
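The emulation approach could look roughly like the following sketch: a response once captured from the real server is replayed as a fixed string, so the parsing logic can be tested without network access. The field names and the parser are illustrative; org.json is bundled with Android (on a plain JVM test runner, the org.json artifact would have to be added as a test dependency):

```kotlin
import org.json.JSONObject
import org.junit.Assert.assertEquals
import org.junit.Test

// Sketch of the emulation approach: a saved real server response is replayed
// as a fixed string, so the parser is tested without any network access.
class ScoreResponseParserTest {

    private val savedResponse = """{"score": 420, "time_limit": 30}"""

    // Hypothetical parser for the score/time-limit call mentioned above.
    private fun parseScoreAndTimeLimit(json: String): Pair<Int, Int> {
        val obj = JSONObject(json)
        return obj.getInt("score") to obj.getInt("time_limit")
    }

    @Test
    fun parsesScoreAndTimeLimit_fromASavedResponse() {
        val (score, timeLimit) = parseScoreAndTimeLimit(savedResponse)
        assertEquals(420, score)
        assertEquals(30, timeLimit)
    }
}
```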
In the thinking aloud tests the users were asked to interpret the main screen and to try the practice multiplications
as well as the gaming mode. After checking a result, they were asked whether they thought it was correct.
Furthermore, they were asked to interpret the star, to find the respective star overview and to interpret it as well. They
were also asked to change the avatar.

The test users handled the application with ease. They had no problems navigating to the screens when they
were asked to do so. The test users identified correct and incorrect answers without issue and solved the
multiplications with ease. The stars and the star overview were partly interpreted correctly. One test participant was
confused because the star overview looks similar to the classic 1x1 chart. However, the interpretation that the various
possible questions are displayed in that overview was correct. All test users were able to find the avatar selection
screen. However, one test user found a bug that caused the score not to be updated between questions, so no additional
avatars were unlocked. These user tests offered additional information about the usability, and an issue was even
detected, so they were very successful.
Division Trainer
These automated tests were implemented for Division Trainer (all of them pass in the current version):
• API response tests for all relevant server calls, as in EinMalEins Trainer. Again, the communication
for these tests is emulated. Long and short mode are both checked here.
• Two unit tests checking the equality of instances of the result class.
• In total, 19 different unit tests that check the result interpretation. 16 use the long mode, 3 the short
mode.
Besides the API response testing, the tests for the result interpretation are exceptionally important: As these
calculations are much more complex than the calculations in EinMalEins Trainer, a lot more issues can
arise when the user interface for calculations is created dynamically. Whenever, during the implementation phase,
a question was discovered that led to an erroneous calculation screen, this question was covered by a unit test first
and fixed afterwards. This test-driven development helped a lot in case an issue arose, as the fix was considered working
as soon as the respective unit test succeeded and no other unit test was broken by the fix.
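A sketch of this test-first workflow: a question that once produced an erroneous calculation screen is pinned down by a unit test before the fix is applied. Both the regression case and the helper under test are hypothetical; the helper reuses the digit-by-digit scheme sketched in the Division Trainer section:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

class DivisionRegressionTest {

    // Illustrative stand-in for the logic under test: digit-by-digit quotient.
    private fun quotientOf(dividend: Int, divisor: Int): Int {
        var carry = 0
        val digits = StringBuilder()
        for (d in dividend.toString().map { it - '0' }) {
            val partial = carry * 10 + d
            digits.append(partial / divisor)
            carry = partial % divisor
        }
        return digits.toString().toInt()
    }

    // Hypothetical regression case: a dividend with an internal zero digit,
    // a typical source of errors when the calculation screen is built dynamically.
    // The test is written first; the fix counts as done once it passes and no
    // other test breaks.
    @Test
    fun dividend804_dividedBy4_yieldsQuotient201() {
        assertEquals(201, quotientOf(804, 4))
    }
}
```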
The thinking aloud tests focused again on finding the screens of the application, interpreting them, and solving
questions. In Division Trainer the test users were also asked to switch between the two modes and interpret them.
Again, after checking a result they were asked whether they thought the result was correct. They had no issues
interpreting the results. In case of a level-up they were asked what they thought the screen was telling them; this was
also easy for the test users. However, the modes caused some issues for some test participants: It turned out that the
long mode was counter-intuitive to the majority; even when this mode was selected, some test users entered the values
as they would in the short mode. They also detected smaller UI issues like “continue” buttons that use a back arrow
instead of a forward arrow. Another issue was that the practice mode generated questions that were too difficult.

These user tests showed that the division calculation is much more complicated than the simple multiplications
from EinMalEins Trainer. Some issues were detected that had not occurred during development. These findings
were very important and helpful for improving the user experience of the application and therefore the expected
learning outcome.
Conclusion
The development of learning systems turns out to be a challenging, but also very interesting and diverse task.
While learning systems have to meet at least the same quality requirements as any other software, the
teaching context adds extra complexity to the goal of such a system: to be a useful addition to a classic
school class or even act as a stand-alone learning solution.

Regarding the question of which E-Learning approach works best, serious games or gamification solutions, this
is difficult to answer: Both ideas can work in specific scenarios, but it can be concluded that the way games or gameful
design are defined blends in very well with electronic learning. This is the case because gaming (compared to playing)
is based on certain rules and boundaries. Classifications like correct or incorrect work well together with learning
targets.

Regarding serious games, switching between gaming and learning modes cannot be recommended. Such switches
would be recognized as a disturbance by many users, and the learning context would be avoided if possible (Jantke
2007). On the other hand, a serious game should include motivational, joyful elements to keep users motivated.
When looking at the two provided learning systems, EinMalEins Trainer and Division Trainer, a number of
possible improvements were identified:
• Sometimes, if a network issue happens, the applications go back to the main screen. A retry mechanism
would help here.
• The score in EinMalEins Trainer needs to update itself after each question.
• The Division Trainer needs a tutorial: It would help a lot if a demo calculation were shown
by the game’s signature character, the owl “Vogi”.
• Some UI buttons should be improved.
The following research question was asked at the beginning of this document: “How can different types of automated
testing and user testing help to improve the development process of learning applications?” It turns out that the early
unit testing locked in a lot of important functionality as working. Whenever smaller refactorings had to be done, these
unit tests gave a lot of confidence that the change was successful. Whenever an issue occurred (especially in Division
Trainer), adding a new unit test to address this issue turned out to be the correct way to go. This guaranteed that no
other cases were broken by applying the fix. The GUI tests were a good way to demonstrate the functionality of the
implemented framework containing the dialogs and the task switching. Finally, the user tests were the most important
ones, as they revealed the truth about the applications. The thinking aloud method gave a lot of insight into smaller
issues and not perfectly intuitive user dialogs.
Figure 9 sums up the relevance of the different testing methods over time. As already mentioned, in the
end the thinking aloud tests turned out to be the most helpful. However, as the early availability of automated test
results enabled a continuous development process, these tests also have to be rated as very impactful and helpful.

Figure 9: The relevance of different testing methods over time. This is the result of experience gained during
the development of EinMalEins Trainer and Division Trainer.
It can be assumed that E-Learning systems of different types will have an even bigger relevance in the future.
Planning and developing these applications turns out to be a motivating and interesting task because of its diversity.
User experience is especially important in this context, as motivation is the absolute key to learning success; this
holds true for any learning scenario, whether supported by an electronic component or not. Developing gameful
designs and integrating them into different contexts offers a lot of opportunities for future development.
References
Abt, C. C. (1987). Serious games. University Press of America.
Andrews, K. (2023). Human-Computer Interaction. https://courses.isds.tugraz.at/hci/hci.pdf (visited on 2023/05/10)
Bormann, M., Heyligers, K., Kerres, M., & Niesenhaus, J. (2008). Spielend lernen! Spielend lernen? Eine empirische
Annäherung an die Möglichkeit einer Synthese von spielen und lernen. In Workshop Proceedings der Tagungen
Mensch & Computer 2008, DeLFI 2008 und Cognitive Design 2008. Logos Verlag.
Deterding, S., Khaled, R., Nacke, L. E., & Dixon, D. (2011). Gamification: Toward a definition. In CHI 2011
gamification workshop proceedings (Vol. 12, p. 15). Vancouver, BC, Canada: ACM.
Duchowski, A. (2007). Eye tracking techniques. Eye tracking methodology: Theory and practice, 51-59.
Ebner, M. (2015). In Crompton, H., & Traxler, J. (Eds.), Mobile Learning and Mathematics: Foundations, Design,
and Case Studies (pp. 20-32). Routledge, New York and London.
Ebner, M.; Schön, S.; Braun, C.; Ebner, M.; Grigoriadis, Y.; Haas, M.; Leitner, P.; Taraghi, B. (2020) COVID-19
Epidemic as E-Learning Boost? Chronological Development and Effects at an Austrian University against the
Background of the Concept of “E-Learning Readiness”. Future Internet 2020, 12, 94
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (revised edition). MIT Press.
Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: A research and practice model.
Simulation & gaming, 33(4), 441-467.
Holzinger, A. (2013). Human-Computer Interaction: Usability Engineering im Bildungskontext. Lehrbuch für Lernen
und Lehren mit Technologien.
Jantke, K. P. (2007, September). Serious Games–eine kritische Analyse. In 11. Workshop Multimedia in Bildung und
Wirtschaft, Ilmenau, Sept. 20-21 (pp. 7-14).
Khorikov, V. (2020). Unit Testing Principles, Practices, and Patterns. Simon and Schuster.
Klimmt, C. (2008). Unterhaltungserleben bei Computerspielen. Mitgutsch, Konstantin/Rosenstingl, Herbert (Hg.),
Faszination Computerspielen: Theorie-Kultur-Erleben. Wien: Braumüller, 7-17.
Le, S., & Weber, P. (2011). Game-Based Learning: Spielend Lernen? Lehrbuch für Lernen und Lehren mit
Technologien.
Rakoczi, G., Bochud, Y. E., Garbely, M., Hediger, A., & Pohl, M. (2013). Sieht gut aus. Visuelle Gestaltung auf
wahrnehmungspsychologischen Grundlagen.
Schön, M., Ebner, M., Kothmeier, G. (2012) It's Just About Learning the Multiplication Table, In Proceedings of the
2nd International Conference on Learning Analytics and Knowledge (LAK '12), Simon Buckingham Shum, Dragan
Gasevic, and Rebecca Ferguson (Eds.). ACM, New York, NY, USA, 73-81
Shen, C., Wang, H., & Ritterfeld, U. (2009). Serious games and seriously fun games: Can they be one and the same?.
In Serious Games (pp. 70-84). Routledge.
Shneiderman, B. (1981). Direct manipulation: A step beyond programming languages. In Proceedings of the Joint
Conference on Easier and More Productive Use of Computer Systems (Part II): Human Interface and the User
Interface, Volume 1981 (p. 143).
Wagner, M. (2009). Eine Theorie des Digital Game Based Learning - Teil 3: Fünf Kernaussagen.
Ware, C. (2019). Information visualization: perception for design. Morgan Kaufmann.