Realtime Knowledge Space Skill Assessment for Personalized Digital
Educational Games
Owen Conlan1, Cormac Hampson1, Neil Peirce1, Michael Kickmeier-Rust2
1Trinity College, Dublin; 2University of Graz
{owen.conlan, hampsonc, peircen}@cs.tcd.ie; michael.kickmeier@uni-graz.at
Abstract
Digital Educational Games offer immersive
environments through which learners can enjoy
motivational and compelling educational experiences.
Applying personalization techniques within these
games can further enhance the educational potential,
but the often realtime and narrative-driven focus of
games presents many challenges to traditional
adaptation approaches. This paper describes an
approach to the realtime assessment of learner skills
for personalization that was implemented and
evaluated as part of the ELEKTRA European
Commission funded project.
1. Introduction
Digital Educational Games (DEG) are an emerging
area in which personalization techniques, traditionally
developed within the Adaptive Hypermedia (AH)
research domain, are being applied. A major issue that
has plagued online learning solutions for quite some
time has been the high levels of dropout [1], often precipitated by poor intrinsic motivation and a lack of relevance in the material presented. Non-adaptive DEGs seek to
address the motivation issue by presenting the learner
with a compelling and engaging environment and
backdrop in which to learn. Through rich narratives
[2], engaging gameplay [3] and a fidelity to real world
situations [4] these games strive to engage and
motivate the learner. Combining personalization
techniques with such educational games has the
potential to further improve the relevance of what is
offered to the learner. A broad range of adaptation
axes, such as Cognitive feedback, Meta-cognitive feedback, Affective/motivational feedback, Knowledge-based hinting and Progression hinting [5], may be
considered.
In order to offer appropriate adaptive interventions
three challenges must be overcome: 1) modeling of the
learner’s knowledge acquisition (also referred to as
cognitive gain) must be achieved in realtime; 2)
adaptive hypermedia techniques, which are typically
applied to web-based systems, also need to operate in
realtime; 3) the personalizations offered must not
adversely impact the flow [6] of the game. The
challenges of realtime adaptation and the maintenance
of flow [6] stem from the need to maintain a learner’s
immersion within the gaming experience.
This paper focuses on the first of these challenges,
while referencing the others, by presenting how the
Knowledge Space Theory (KST) [7] [8] was adopted
as a realtime, probabilistic approach to progressively
modeling a learner’s skills and knowledge whilst
engaged with an immersive DEG. It provides a brief
overview of the current state of the art in DEGs and
Adaptive Hypermedia, along with the basics of KST.
The paper will also introduce the ELEKTRA research
project, its architecture and the Skill Assessment
Engine, a realtime KST-based modeling engine for
personalized DEGs.
2. Background
DEGs have reported successful outcomes by
integrating adaptation and strong storylines with
inherent motivational qualities. The DARPA funded
Tactical Language and Cultural Training System
(TLCTS) [2] has shown effective learning outcomes
achieved through the application of adaptation. Façade
[4] and the Virtual Team Collaborator (VTC) [9] have
shown that a strong narrative, an adaptive narrative in
the second instance, can provide immersive
experiences. Whilst both of these showed benefits, their technical approaches were highly complex and involved the authoring of several narrative strands. The
Adaptive Learning In Games through Non-invasion
(ALIGN) system [5] eases this authoring burden and is
an expansion on the proven APeLS multi-model,
metadata-driven approach [10], but it does not
specifically focus on narrative issues.
Adaptive Hypermedia Systems have typically dealt
with narrative from a different perspective. Most
prevalent examples come from the adaptive eLearning
domain where narrative usually refers to the flow of a
piece of coursework [11] [12]. However, as these
systems are web-based and the narratives are usually
constructed periodically, they do not suffer from the severe realtime constraints of DEGs.
Knowledge Space Theory (KST), introduced by
Doignon and Falmagne [13], provides a theoretical
framework within which the knowledge or skill state of
a learner can be determined. It is based on a
prerequisite competence structure that describes the
relationships between different skills. For example, a
learner should typically be able to perform fraction addition before they can perform fraction multiplication. If the learner exhibits evidence of fraction multiplication, it may be assumed that they can also add fractions. Such probabilistic reasoning enables a system to infer a learner’s skill state based on partial evidence [7]. The
fundamental approach taken in KST is to reduce the
number of possible pieces of evidence needed about a
learner to an optimal set. In this way the Knowledge
State of a learner may be assessed through the
minimum number of inferences, thus achieving
maximum efficiency. This is only possible by
examining the domain in which the learning is
occurring and identifying the underlying prerequisite
relationships that exist between concepts. This is a time-consuming and expert task that involves describing a
learning domain, such as mathematics, in terms of
formal prerequisite relationships. Specific educational tasks, such as the learner interacting with a virtual experiment, are broken down into sub-tasks.
Success or failure in these sub-tasks forms evidence
that facilitates the probabilistic update of the learner’s
model. The certainty is dependent on the level of
inference required. However, as only partial evidence
is needed to assess a skill state, it can be done very efficiently. When applied to DEGs, KST has the potential to provide the basis of a time-sensitive
approach to modeling a learner’s acquisition of
knowledge and skills [8].
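By way of a simplified illustration (the skills and class names below are hypothetical and not those used in ELEKTRA), the following sketch enumerates the knowledge states consistent with a small prerequisite relation; a valid state is any set of skills closed under its prerequisites.

```java
import java.util.*;

// A minimal sketch of a KST-style prerequisite structure. A valid knowledge
// state is any set of skills that is closed under prerequisites: if a skill is
// in the state, all of its prerequisites are too. Skill names are hypothetical.
public class KnowledgeStates {

    // prerequisite map: skill -> skills that must already be mastered
    static final Map<String, Set<String>> PREREQS = Map.of(
            "addFractions", Set.of(),
            "multiplyFractions", Set.of("addFractions"),
            "divideFractions", Set.of("multiplyFractions"));

    public static void main(String[] args) {
        List<String> skills = new ArrayList<>(PREREQS.keySet());
        int n = skills.size();
        // Enumerate all 2^n subsets and keep only those closed under prerequisites.
        for (int mask = 0; mask < (1 << n); mask++) {
            Set<String> state = new HashSet<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) state.add(skills.get(i));
            }
            boolean closed = state.stream()
                    .allMatch(skill -> state.containsAll(PREREQS.get(skill)));
            if (closed) System.out.println(state); // one valid knowledge state
        }
    }
}
```

For these three skills only four of the eight possible subsets are valid states, which is precisely the reduction that allows a skill state to be assessed from partial evidence.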
3. The ELEKTRA Project
One of the core design strategies of the ELEKTRA
project [14] was to separate the gaming environment
from the learning adaptation [15]. This was realized
through the two core components the Game Engine
(GE), responsible for graphics, audio, and gameplay,
and the Learning Engine (LE), which is responsible for
the adaptation of the educational experience. The
communication from the GE to the LE provides the
evidence on which adaptation is performed, and
conversely the LE to GE communication contains the
game adaptations to be executed. In this service-driven
approach to adaptation [16] the LE reasons over
educational concerns that have been abstracted and
inferred from the basic game evidence.
The nature of the game evidence sent from the GE
is game specific and consists of player actions,
movements, and task successes or failures. This
information however is not immediately useful for
educational adaptation, requiring a degree of inference
by the LE. Inference within the LE is the first step in
the four stage process employed to provide effective
non-invasive adaptation. The four stages employed are
inference, context accumulation, adaptation constraint,
and adaptation selection. The four stage adaptation process is described in further detail in [5].
The design of the LE and the four stage adaptation
process allows for the educational adaptation to be
performed without regard for the game specifics. The
LE effectively infers and abstracts game actions into
educational evidence that can be reasoned over in a
generic manner, thus enabling it to be employed for
different games with minimal alteration. A key
example of this is the abstraction of skills provided
through the Skill Assessment Engine (SAE). The SAE
effectively maps user actions within the game to skill
evidence, and further generates a probabilistic skill
model for the learner.
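By way of a simplified sketch of this abstraction (all type, field and skill names below are illustrative rather than ELEKTRA’s actual interfaces), game-specific events are translated into game-independent skill evidence before any educational reasoning takes place:

```java
import java.util.List;

// Hedged sketch of the abstraction described above: raw, game-specific events
// are translated into game-independent skill evidence. All names are hypothetical.
record GameEvent(String playerId, String action, String target, boolean success) {}

record SkillEvidence(String playerId, String skillId, boolean positive) {}

interface SkillEvidenceMapper {
    // Maps one low-level game event to zero or more pieces of skill evidence.
    List<SkillEvidence> map(GameEvent event);
}

class SlopeDeviceMapper implements SkillEvidenceMapper {
    @Override
    public List<SkillEvidence> map(GameEvent e) {
        // Example: using the magnet on a wooden object is negative evidence
        // for a hypothetical "magnetism" skill.
        if ("useMagnet".equals(e.action()) && "woodenObject".equals(e.target())) {
            return List.of(new SkillEvidence(e.playerId(), "magnetism", false));
        }
        return List.of();
    }
}
```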
The second stage of the adaptation process involves accumulating game and learner evidence. Given the large quantity of evidence accumulated, potentially dozens of items per second, the use of XML-based models, a traditional approach in many Adaptive Hypermedia Systems such as APeLS [10], becomes impractical due to the speed of manipulation and reasoning. Consequently all data is accumulated in a working memory provided by the Drools rule engine, which offers an efficient means to reason over large data sets using declarative logic.
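The following sketch illustrates this accumulation step; it reuses the hypothetical SkillEvidence type from the earlier sketch, assumes a standard Drools classpath configuration, and uses the modern KIE API rather than the Drools version of the original project:

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Sketch only: evidence facts accumulate in Drools working memory and are
// reasoned over by declarative rules. Assumes a kmodule.xml on the classpath.
public class EvidenceAccumulator {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();

        // Each piece of abstracted evidence becomes a fact in working memory.
        session.insert(new SkillEvidence("player-1", "magnetism", false));
        session.insert(new SkillEvidence("player-1", "gravity", true));

        // A rule authored in DRL might match over the accumulated facts, e.g.:
        //   rule "Negative magnetism evidence"
        //   when
        //       $e : SkillEvidence( skillId == "magnetism", positive == false )
        //   then
        //       // flag the skill for reassessment / possible adaptation
        //   end
        session.fireAllRules();
        session.dispose();
    }
}
```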
In order to perform adaptation within the GE, the LE must have an a priori, abstracted understanding of the possible adaptations. Within the LE these adaptations
are represented as Adaptive Elements. An Adaptive
Element consists of an adaptation identifier used by the
game and associated metadata indicating the probable
outcome of the adaptation and when it can be suitably
used. An example Adaptive Element in the ELEKTRA
game would be the Non Player Character (NPC)
Galileo giving an encouraging hint such as, “Yes. It
isn’t easy, and I’m not sure that I would do any better
in your position, but you must persevere.” Such an
Adaptive Element would have an abstracted outcome
description of “encouragement”, and a suitability
requirement of the Galileo NPC being present.
The following are the benefits of using Adaptive Elements:
- Educational adaptation does not need to be concerned with realizing adaptations
- Facilitates the independent authoring of the game engine and the adaptation logic
The third LE stage of adaptation constraint is
concerned with ensuring that only appropriate
Adaptive Elements are used. By using constraint rules,
only feasible and appropriate Adaptive Elements are
made available for selection in the final LE stage. The
selection of adaptation is achieved through adaptation
rules that examine the accumulated learner data and the
available Adaptive Elements.
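A simplified sketch of an Adaptive Element and the constraint stage is shown below; the field names, context model and NPC-presence check are illustrative assumptions rather than the actual data model:

```java
import java.util.List;
import java.util.Set;

// An Adaptive Element pairs an adaptation identifier known to the game with
// metadata about its expected outcome and when it may be used. Names are
// illustrative only.
record AdaptiveElement(String adaptationId, String outcome, Set<String> requiredNpcs) {}

record GameContext(Set<String> presentNpcs) {}

class AdaptationConstrainer {
    // Constraint stage: keep only the Adaptive Elements feasible in the current
    // game context; the selection rules then choose among the survivors.
    List<AdaptiveElement> constrain(List<AdaptiveElement> candidates, GameContext ctx) {
        return candidates.stream()
                .filter(e -> ctx.presentNpcs().containsAll(e.requiredNpcs()))
                .toList();
    }
}
```

The Galileo hint quoted earlier would then be represented along the lines of new AdaptiveElement("galileoEncouragement1", "encouragement", Set.of("Galileo")).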
Through an authentic evaluation with secondary school physics students, the ELEKTRA game proved to be effective. The ELEKTRA game is a
narrative-driven adventure game where the
learner/gamer must overcome several physics-oriented
puzzles. They are guided by an NPC representing the
ghost of Galileo who advises and encourages them as
they interact with experimental apparatus. The skills
they acquire are directly relevant to tasks they will face
whilst playing the game. In the evaluation of ELEKTRA, the realtime and appropriate nature of the adaptation was favorably received [DIGITEL-ref].
4. Skill Assessment Engine
Interpreting evidence sent by the Game Engine (GE) is
central to the first inference step of the four stage
ELEKTRA process [5]. The Skill Assessment Engine
(SAE), a component of the Learning Engine (LE), is
responsible for translating each learner’s actions within
the game into a list of probabilities that show the
likelihood of each relevant skill having been acquired
by the learner. This assessment of a learner’s skills
must be done in an implicit fashion so as not to
negatively impact their flow through the game.
The domain-specific skills to be acquired in the ELEKTRA game were organized according to KST into a prerequisite knowledge structure, which was encoded as an OWL ontology and parsed by the SAE at design time. The parsing process had a dual purpose: firstly, it extracted each valid skillstate (a unique combination of skills a user could have at any one time) from the ontology; secondly, it converted these skillstates into a binary matrix, which could be processed far more efficiently by the SAE at runtime than the more verbose OWL representation. The runtime performance of ontologies, even quite small ones, is poor and insufficient for use in time-sensitive DEGs.
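One natural encoding is to store each skillstate as a bit pattern over a fixed skill ordering, so that runtime membership tests reduce to cheap bit operations; with 25 skills a single long per skillstate suffices. The sketch below illustrates this encoding (the class and field names are illustrative, not the SAE’s actual layout):

```java
import java.util.List;
import java.util.Set;

// Hedged sketch of the design-time conversion: each valid skillstate (a set of
// skills extracted from the ontology) becomes one bitmask.
public class SkillStateMatrix {

    final List<String> skills;   // fixed ordering of skill identifiers
    final long[] states;         // one bitmask per valid skillstate

    SkillStateMatrix(List<String> skillOrdering, List<Set<String>> validStates) {
        this.skills = List.copyOf(skillOrdering);
        this.states = new long[validStates.size()];
        for (int s = 0; s < validStates.size(); s++) {
            long mask = 0L;
            for (String skill : validStates.get(s)) {
                mask |= 1L << skills.indexOf(skill);
            }
            states[s] = mask;
        }
    }

    // Does skillstate s contain the given skill?
    boolean contains(int s, String skill) {
        return (states[s] & (1L << skills.indexOf(skill))) != 0;
    }
}
```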
During the game, the user faces various learning
challenges, with specific educational rules triggered
depending on their interactions with learning objects,
such as virtual experimental apparatus (Fig. 1).
Learning objects are traditionally seen as static pieces
of content, usually HTML, with associated metadata.
In ELEKTRA a learning object was an interactive
experience that was woven into the game narrative.
Each learning object has skills associated with it; thus, if a rule relating to a learning object is fired through a learner’s interaction, the SAE must run its algorithm to determine which skillstates (and subsequently which individual skills) have increased or decreased in probability. Once a skill’s probability exceeds a pre-determined threshold, the user is said to have acquired that skill. These calculations must be done in less than 200ms [17] so that the delay in the LE selecting an appropriate intervention for the GE is not noticeable to the user. For the purposes of the work presented here, below 200ms is the definition of realtime. The adjustment in skill probabilities is taken into account in stage two of the ELEKTRA process, where all evidence from the game and user is accumulated. Thus any change in skill probabilities influences which adaptive interventions are eventually presented to the user within the game environment.
Figure 1. The Slope Device
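By way of a simplified illustration of such an update (a basic multiplicative reweighting over skillstates is shown; the SAE’s actual rule may differ), skillstates consistent with a piece of evidence are boosted, the distribution is renormalized, and each skill’s marginal probability is compared against its threshold. The sketch builds on the hypothetical SkillStateMatrix above.

```java
import java.util.Arrays;

// Illustrative only: a simple multiplicative reweighting over skillstates,
// standing in for whatever update rule the SAE actually used.
public class SkillProbabilityUpdater {

    final SkillStateMatrix matrix;
    final double[] stateProb;   // probability of each skillstate (sums to 1)

    SkillProbabilityUpdater(SkillStateMatrix matrix) {
        this.matrix = matrix;
        this.stateProb = new double[matrix.states.length];
        Arrays.fill(stateProb, 1.0 / stateProb.length);   // uniform prior
    }

    // Reweight skillstates on evidence about one skill, then renormalize.
    void update(String skill, boolean success, double strength) {
        double total = 0;
        for (int s = 0; s < stateProb.length; s++) {
            boolean hasSkill = matrix.contains(s, skill);
            // states consistent with the evidence are boosted, others damped
            stateProb[s] *= (hasSkill == success) ? strength : 1.0 / strength;
            total += stateProb[s];
        }
        for (int s = 0; s < stateProb.length; s++) stateProb[s] /= total;
    }

    // Marginal probability that the learner possesses a given skill.
    double skillProbability(String skill) {
        double p = 0;
        for (int s = 0; s < stateProb.length; s++) {
            if (matrix.contains(s, skill)) p += stateProb[s];
        }
        return p;
    }

    boolean acquired(String skill, double threshold) {
        return skillProbability(skill) >= threshold;
    }
}
```

A linear pass of this kind over tens of thousands of skillstates comfortably fits within the 200ms budget on commodity hardware.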
The initial ontology created for the ELEKTRA game contained 83 skills and had over 10 million corresponding skillstates. This large number of skillstates would have caused latency issues when applying the SAE’s algorithm at runtime, and any such delay would be detrimental to selecting appropriate adaptive interventions in a timely fashion. Thus a reduced version of the skill list and its corresponding prerequisite relation was developed, which contained 25 skills of a lesser granularity. These skills yielded 12,414 skillstates, a number that could be processed at runtime with minimal delay by the SAE.
Because of the limit on the number of skillstates that can be efficiently processed by the SAE, the scalability of the solution is in question. For ELEKTRA this was not an issue due to the limited scope of the game; however, for larger games with many more learning situations (and a correspondingly larger number of skillstates), it would not be viable to process the entire skill structure as a single entity at runtime. The next iteration of the
SAE, currently being researched and developed as part
of the European Commission 80Days project [18], will
tackle this precise problem. By working with partitions
(with a correspondingly reduced number of skillstates)
and not the complete ontology, the next version of the
SAE will provide a scalable and practical solution for
the runtime calculation of a user’s current skills.
5. Evaluating Skill Assessment
The evaluation of the SAE relied on the comprehensive
log files generated by the Learning Engine. These log
files detailed every action performed by the learner, the
corresponding skill probability changes, and any
adaptations sent to the Game Engine. The following
graphs illustrate how a learner’s skill probabilities
change after successive task attempts. The large circles
indicate skills that were targeted with adaptations
following a task attempt, i.e. each circle indicates a
personalized adaptation that was presented to assist the
learner.
Figure 2. Skill probability change with task
attempts, student A.
The graphs shown in figures 2 and 3 plot skill probabilities against the number of learning task attempts in a learning object. They show ten of the twenty-five possible skills; the remaining skills have been omitted as they were not relevant to the specific task and so showed negligible change. By comparing the final skill probabilities of student A (Fig. 2) and student B (Fig. 3) it is evident that the SAE has effectively identified skill deficiencies and provided adaptations accordingly. This is particularly
noticeable in task attempts 14 and 15 in figure 2, and in
task attempts 13-15 in figure 3.
By way of example, consider the plotline with
square markers (with a starting probability of
approximately 0.8) in Figure 2. This line indicates the
learner’s knowledge about gravity. The line with small
circles starting at a probability of about 0.7 represents
their knowledge of magnetism. The experiment they
are interacting with was referred to as the ‘slope
device’ and enabled a learner to experiment with the
effect gravity has on a falling object. They could
attempt to influence the object’s trajectory by manipulating a magnet and a fan. The learner shown in figure 2 initially exhibited poor control over these mechanisms, altering the magnet when the falling object was made of wood. This is reflected in drops in the probability of both the gravity skill and, in particular, the magnetism skill. From task attempt six onwards the learner receives adaptive hints and exhibits an improvement in both skills. The learner shown in figure 3, however, did not improve after the adaptive hinting, and a drop in their skills (corresponding to weaker performance in the task) is seen.
Figure 3. Skill probability change with task
attempts, student B.
Although it appears that the relatively high probability
skills receive more frequent adaptations, this is a result
of the learning task in question dealing predominantly
with these skills. Additionally, it should be noted that not all hints are explanatory; an example is the adaptation given for task attempt four for both students. This adaptation was a congratulatory dialogue acknowledging the recent skill increases: “I knew you were up to this challenge”.
Due to the finite number of Adaptive Elements
available, skills with dropping probabilities are not
always targeted for adaptations. This is by design as it
is not always feasible or appropriate in a game context
to select a particular adaptation.
The starting probability of each skill is determined
from the distribution of a particular skill across all
skillstates. Initially all skillstates are assigned a
uniform probability, thus making no assumptions about the learner’s prior knowledge. This was again a design
decision as it was felt that an explicit pre-test would
adversely impact the game experience. However,
paper-based pre-tests were used as part of the
evaluation to investigate cognitive gain.
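Under this uniform prior the starting probability of a skill is simply the fraction of valid skillstates that contain it, as the following small illustration (using the hypothetical SkillStateMatrix sketched earlier) makes explicit:

```java
final class InitialSkillProbability {
    // With a uniform prior over skillstates, a skill's initial marginal
    // probability is (# skillstates containing it) / (total # skillstates).
    static double of(SkillStateMatrix matrix, String skill) {
        int containing = 0;
        for (int s = 0; s < matrix.states.length; s++) {
            if (matrix.contains(s, skill)) containing++;
        }
        return (double) containing / matrix.states.length;
    }
}
```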
6. Future Work and Conclusion
This paper has presented the Skill Assessment Engine, a component of the ELEKTRA DEG’s Learning Engine. This component is responsible for the realtime evaluation of a learner’s skills by interpreting evidence from a Game Engine. By using probabilistic Knowledge Space Theory, the SAE can determine a learner’s probable skillstate with a minimum of evidence. However, this approach, whilst
effective within the limited scope of the ELEKTRA
game, does not seem to scale well. This is due to the
large number of possible skillstates that can exist with
even just a limited number of skills. As part of the
80Days project [18], a continuation of ELEKTRA, a
solution is being proposed that involves partitioning
the Knowledge Space. This approach will enable the
SAE to function much as it did in ELEKTRA, but a
solution for mapping the boundaries of partitions needs
to be derived. That said, the approach proposed in this
paper shows much potential for effectively and efficiently assessing learners’ skills under realtime constraints.
7. References
[1] Frankola, K. (2001). Why online learners drop out. Workforce, 10, 53-63.
[2] A. Stern and M. Mateas, "Build It to Understand It:
Ludology Meets Narratology in Game Design Space," in
International DiGRA Conference Vancouver, British
Columbia, Canada 2005.
[3] P. Moreno-Ger, D. Burgos, J. L. Sierra, and B.
Fernández-Manjón, "A Game-Based Adaptive Unit of
Learning with IMS Learning Design and <e-Adventure>," in
Second European Conference on Technology Enhanced
Learning (EC-TEL 2007), Crete, Greece., 2007.
[4] W. L. Johnson, N. Wang, and S. Wu, "Experience with
serious games for learning foreign languages and cultures,"
in SimTecT Conference., Australia, 2007.
[5] Peirce, N., Conlan, O., Wade, V., "Adaptive Educational
Games: Providing Non-invasive Personalised Learning
Experiences," Digital Games and Intelligent Toys Based
Education, 2008 Second IEEE International Conference on ,
vol., no., pp.28-35, 17-19 Nov. 2008.
[6] M. Csikszentmihalyi, Flow: The psychology of optimal experience. New York, NY: Harper and Row, 1990.
[7] Conlan, O., Hampson, C., O'Keeffe, I., Heller, J., Using
Knowledge Space Theory to support Learner Modeling and
Personalization, World Conference on E-Learning in
Corporate, Government, Healthcare, and Higher Education,
Hawaii, USA, 2006.
[8] D. Albert, C. Hockemeyer, M. D. Kickmeier-Rust, N.
Peirce, and O. Conlan, "Microadaptivity within Complex
Learning Situations – a Personalized Approach based on
Competence Structures and Problem Spaces," in 15th
International Conference on Computers in Education, ICCE
2007 Hiroshima, Japan, 2007.
[9] R. M. Carro, A. M. Breda, G. Castillo, and A. L.
Bajuelos, "A Methodology for Developing Adaptive
Educational-Game Environments," in AH 2002:Adaptive
Hypermedia and Adaptive Web-Based Systems Malaga,
Spain: Springer, 2002.
[10] O. Conlan and V. P. Wade, "Evaluation of APeLS - An
adaptive eLearning service based on the multi-model,
metadata-driven approach," Adaptive Hypermedia and
Adaptive Web-Based Systems, Proceedings, vol. 3137, pp.
291-295, 2004.
[11] Specht, M., Kravcik, M., Klemke, R., Pesin, L.,
Huttenhain, R.: Adaptive Learning Environment for
Teaching and Learning in WINDS. Lecture Notes in Computer Science (2002) 572-575.
[12] De Bra, P., Aerts, A., Berden, B., de Lange, B.,
Rousseau, B., Santic, T., Smits, D., Stash, N.: AHA! The
adaptive hypermedia architecture. Proceedings of the
fourteenth ACM conference on Hypertext and hypermedia
(2003) 81-84
[13] Doignon, J.-P., & Falmagne, J.-C. (1985). Spaces for the
assessment of knowledge. International Journal of Man-
Machine Studies, 23, 175-196.
[14] "ELEKTRA - Enhanced Learning Experience and
Knowledge TRAnsfer", Retrieved 25th June 2008 from
http://www.elektra-project.org
[15] Kickmeier-Rust, M. D., Peirce, N., Conlan, O., Schwarz,
D., Verpoorten, D., Albert, D. (2007) Immersive digital
games: The interfaces for next-generation e-learning.
Proceedings of the HCI International 2007, 22-27 July 2007,
Beijing, Lecture Notes in Computer Science. New York,
Heidelberg: Springer, pp. 647-656.
[16] Brusilovsky, P., Wade, V., Conlan, O.: From Learning
Objects to Adaptive Content Services for E-Learning. In:
Pahl, C. (ed.): Architecture Solutions for E-Learning
Systems. IGI Global (2007) 243 – 261.
[17] I. S. MacKenzie and W. Colin, "Lag as a determinant of
human performance in interactive systems," in Proceedings
of the INTERACT '93 and CHI '93 conference on Human
factors in computing systems Amsterdam, The Netherlands:
ACM, 1993.
http://portal.acm.org/citation.cfm?doid=169059.169431
[18] “80Days - around an inspiring virtual learning world in
eighty days” Retrieved 16th January 2009 from
http://www.eightydays.eu
This paper describes a new methodological approach to design education on the web employed in the system called ALE. The system will be used to build a large knowledge base supporting Architecture and Civil Engineering Design Courses and to experiment a comprehensive Virtual University of Architecture and Engineering Design. The ALE system integrates the functionality of a complex e-Learning system with adaptive educational hypermedia on the Web. In this paper we outline the system architecture and focus on the design and functionality of its adaptive learning environment.