G. Biswas et al. (Eds.) (2012). Proceedings of the 20th International Conference on Computers in
Education. Singapore: Asia-Pacific Society for Computers in Education
Using CbKST for Learning Path
Recommendation in Game-based Learning
Simone Kopeinika*, Alexander Nussbaumera, Michael Bedeka & Dietrich Albertab
aKnowledge Management Institute, Graz University of Technology, Austria
bDepartment of Psychology, University of Graz, Austria
*simone.kopeinik@tugraz.at
Abstract: This paper presents a novel approach to creating learning paths consisting of game units and adapting them to learners based on their behavior during game play. Non-invasive assessment procedures interpret this behavior and derive information about the learners’ competences. A user model holds probabilistic information on the competence profile. Based on this competence profile, game units (stories) are recommended that fit the learner’s current competence state. This approach is part of the EC-funded TARGET project, which provides the technical infrastructure for the 3D virtual game environment. The innovative part of this paper is the adaptive learning strategy and how it can be embedded in a game-based environment. The user perspective is demonstrated with a concrete scenario in which the learner has to solve a task in the game-based environment.
Keywords: Digital Educational Game, Adaptive Learning Paths, Competence-based
Knowledge Space Theory, Simplified Updating Rule
Introduction
An important research area in technology-enhanced learning (TEL) focuses on adaptivity and personalization. Several approaches have been elaborated that demonstrate how a system and its content can be adapted to the learner’s knowledge level. To allow individually tailored educational software solutions, it is necessary to keep track of an individual learner’s knowledge state at a specific moment in time [6]. In adaptive systems, the relevant information is typically described in user models, domain models, and adaptivity models [3].
One research area in TEL concerns Game-Based Learning (GBL) and Digital Educational Games (DEGs). They provide powerful opportunities for the learner regarding motivation and flow experience. It has also been shown that these factors have a positive influence on learning effectiveness and learning outcomes in game-based settings [9]. The European research project ELEKTRA (http://www.elektra-project.org/) first explored and presented the micro-adaptivity approach. This methodology allows assessing a learner non-invasively and continuously without interrupting the learner’s potential game flow experience. Assessment data are retrieved from the user’s behavior while engaged in the game [8]. This approach was revisited and implemented in subsequent projects. In 80Days (http://www.eightydays.eu/), for instance, information was derived from specific actions indicated by the manipulation of objects [9].
This paper presents an approach to creating learning paths consisting of game units and adapting them to learners based on their behavior during game play. Non-invasive assessment procedures interpret the learner’s observable behavior and infer information about the competence level, which is stored in the user model. Based on the user model, stories are recommended that fit the learner’s current competence state. This learning cycle is repeated until the learner achieves the desired competence state. This approach is part of the EC-funded TARGET project [10], which provides the technical infrastructure for the 3D game environment. The innovative part presented in this paper is the adaptive learning strategy (LS) and how it is embedded in a game-based environment.
1. Conceptual Framework
In a DEG like TARGET, learning happens during game play. Therefore, the game is structured into game units that teach competences and thus act as learning objects (LOs). Traditionally, LOs are designed as multimedia documents containing texts, images, animations, and other 2D elements. In our case, these units are designed as immersive 3D environments in which learners can move around and interact. Since they also contain tasks to be solved in a defined and contextualized situation, they are referred to as stories. Completing a story requires a certain level of proficiency, which may in turn improve as the learner confronts the challenges the story contains. Hence, stories (LOs) do not only teach subject matter, but also test the learners’ knowledge by observing their performance while interacting within a story (see Figure 1).
Figure 1: Relation between stories, competences and learners
In order to formally structure the stories with respect to knowledge and competences, the conceptual framework is based on Competence-based Knowledge Space Theory (CbKST) [5][7]. This framework allows for representing the knowledge of both domains and learners and provides algorithms to test the knowledge of learners in terms of competences. The learning path is adapted according to these competences. The basic idea is to define a domain model by identifying competences and building a structure on them (a competence can be a prerequisite for another one). These competences are assigned to learners (a learner can demonstrate a competence), to learning objects (a learning object teaches competences), and to assessment items (an item can test whether a learner can demonstrate a particular competence). Each story is assigned the competences that are required for solving the tasks of that story and hence for story completion. Competences and their relationships to stories form the domain model. Competences may or may not be related to one another; if they are, this is represented in the domain model by prerequisite relationships. In addition to the domain model there is also a user model that describes a learner’s individual progress and state in terms of obtained competences. Figure 1 outlines the relationships between stories, competences, and learners.
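To make these structures concrete, the following minimal sketch (in Python) shows one possible in-memory representation of the domain model and the user model; the competence names, story identifiers, and plain-dictionary layout are illustrative assumptions, not the actual TARGET data model.

# Illustrative sketch only: competence names, story identifiers and the
# plain-dictionary layout are assumptions, not the TARGET data model.

# Domain model: prerequisites["y"] = {"x"} means that competence x is a
# prerequisite of competence y; each story is mapped to the competences it
# requires (and teaches).
prerequisites = {"y": {"x"}, "z": {"y"}}
story_competences = {
    "story_A": {"x", "y", "z"},
    "story_B": {"y", "z"},
}

# User model: for each relevant competence, the probability that the learner
# possesses it (here initialized with arbitrary example values).
user_model = {"x": 0.8, "y": 0.5, "z": 0.2}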
Figure 2 illustrates the learning cycle: In the initialization phase the learner is invited to set learning objectives in terms of competences (Target Competence Profile, TCP) and to state prior knowledge in terms of competences through self-assessment (User Competence Profile, UCP). Based on the competence profile, a story is selected that addresses the competences the learner should acquire next. The learner plays this story by interacting with the game and by trying to solve the task in the given situation. The interactions are observed and used to identify whether the learner shows the respective competences. The result of this non-invasive assessment updates the user model. When the learner has finished a story, the system recommends the next story based on the user profile, taking into account the results of the stories the learner has previously engaged with.
Figure 2: Learning Cycle
2. Adaptation Model
Adaptation in our case means adapting the story path to the learner’s competence state: stories are recommended to the learner according to the competences currently shown.
Assessment items are usually provided to the learner before or after she has consumed a set of LOs. However, as often happens in serious games, both teaching and testing take place within a single learning unit or game scenario. This offers the opportunity to assess the learner’s ability while she or he is engaged with the game, without destroying a potential flow experience [4]. Therefore, a non-invasive or implicit assessment procedure was introduced. The approach is based on the non-invasive assessment procedure already implemented in the TARGET project. Basically, it is grounded in the interpretation of the learner’s actions and interactions within the virtual environment [2]. These observations result in values (ranging between 0 and 1) for the set of competences assigned to the current story. For example, if a learner is playing story A and the competences x, y, and z are assigned to this story, then the result could be [0.1, 0.7, 0.8], meaning that the learner performed well with respect to competences y and z and poorly with respect to competence x. We call these values competence performance values.
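For illustration, such an assessment result can be represented as a simple mapping from the story’s assigned competences to their performance values; the dictionary form below is an assumption of this sketch, not the actual TARGET interface.

# Competence performance values for story A, matching the example above:
# poor evidence for x, good evidence for y and z.
performance_story_A = {"x": 0.1, "y": 0.7, "z": 0.8}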
Figure 3 shows a more detailed view of the structure of the learning strategy’s (LS) logic. Starting at the top of the illustration, the Domain Model encompasses all identified competences of the domain and their (prerequisite) relations to each other. This model does not change during the learning cycle. The user model, located in the center of the illustration, keeps track of the competences a learner demonstrates. It is initialized with the values of the User Competence Profile, the Target Competence Profile, and the relevant parts of the competence domain: all competences of the TCP, together with their prerequisites, are relevant for the user model. Each competence within scope is assigned a probabilistic value that indicates the probability that the learner possesses it. The Assessment part receives competence performance values in a continuous range from 0 to 1 for single competences. Incoming values are applied through an algorithm called the Simplified Updating Rule [1]. As the algorithm can only handle binary updates, only values smaller than 0.35 (negative evidence) and values higher than 0.65 (positive evidence) are taken into account. After this classification of the input, the algorithm is applied to the affected competence and its related competences in the user model. For example, if competence x is a prerequisite for competence y, then a learner who shows competence y can be assumed to also show competence x. If the assessment procedure delivers a probability value for competence x, we can likewise assume that the same learner demonstrates competence y to a certain extent and thus also increase the probability value for competence y. Following this reasoning, all related probability values are modified each time the assessment procedure delivers data for the competences assigned to a story.
Figure 3: The Core Logic of the Learning Strategy Component
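The sketch below illustrates this update step, building on the structures introduced in the earlier sketches. The evidence thresholds are those stated above; the step size and the half-step propagation to directly related competences are assumptions made purely for illustration, as the actual computation is performed by the Simplified Updating Rule [1].

# Rough illustration of the update step; the step size and the propagation
# scheme are assumptions, the real algorithm is the Simplified Updating Rule.

NEG_THRESHOLD = 0.35   # values below this count as negative evidence
POS_THRESHOLD = 0.65   # values above this count as positive evidence

def update_user_model(user_model, prerequisites, performance_values, step=0.1):
    """prerequisites[y] = {x, ...} means competence x is a prerequisite of y."""
    for competence, value in performance_values.items():
        if NEG_THRESHOLD <= value <= POS_THRESHOLD:
            continue  # ambiguous evidence is ignored (binary updates only)
        delta = step if value > POS_THRESHOLD else -step
        dependents = {c for c, pres in prerequisites.items() if competence in pres}
        related = prerequisites.get(competence, set()) | dependents
        adjustments = {competence: delta}
        adjustments.update({c: delta / 2 for c in related})
        for c, d in adjustments.items():
            if c in user_model:
                user_model[c] = min(1.0, max(0.0, user_model[c] + d))
    return user_model

# Applied to the running example from the sketches above:
update_user_model(user_model, prerequisites, performance_story_A)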
The recommendation strategy is based on the competence profile of the user model and proceeds in two steps: first, the competences that the learner should obtain next are determined; then a matching story is selected. To that end, the Path/Recommendation module accesses the user model and selects a small set of competences whose probability values deviate least from a defined threshold. If the value of a competence is very high, the learner most likely already demonstrates this competence, so it is not selected. If the value is very low, the learner is most likely not yet competent in this area; such a competence is also not selected, because it is assumed to be too difficult at this stage of the learning process. Therefore, a competence should be selected that has a probability value of about 0.5, because such a competence is expected to be of medium difficulty for the learner. In the second step, a story is selected that addresses the chosen competences. The learner then continues with this new story and a new assessment takes place. This cycle is repeated until all competence values are above a certain threshold.
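A minimal sketch of this two-step selection is given below, reusing the user model and story assignments from the earlier sketches; the number of competences selected per step, the mastery threshold, and the coverage-based story choice are assumptions for illustration only.

# Illustrative two-step recommendation; parameters are assumed values.

def select_next_competences(user_model, target_profile, threshold=0.5, n=3):
    """Rank target competences by closeness of their probability to the
    threshold; very high (already shown) and very low (still too difficult)
    competences automatically fall to the back of the ranking."""
    ranked = sorted(target_profile,
                    key=lambda c: abs(user_model.get(c, 0.0) - threshold))
    return set(ranked[:n])

def select_story(story_competences, wanted):
    """Pick the story whose assigned competences best cover the wanted ones."""
    return max(story_competences,
               key=lambda s: len(story_competences[s] & wanted))

target_profile = {"x", "y", "z"}   # the learner's Target Competence Profile
wanted = select_next_competences(user_model, target_profile)
next_story = select_story(story_competences, wanted)

# The cycle ends once all target competences exceed a mastery threshold
# (the concrete value is left open in the paper; 0.9 is assumed here).
MASTERY_THRESHOLD = 0.9
finished = all(user_model.get(c, 0.0) > MASTERY_THRESHOLD for c in target_profile)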
3. The Learner’s Perspective
The learner’s active part in the personalization process takes place during the initialization phase, when the learning plan is created. In the TARGET project, a tool called Competence Analyzer is provided to assign selected competences to the User Competence Profile (UCP) or the Target Competence Profile (TCP). When the learner finishes, the UCP should include all competences the learner demonstrates at this point in time. The TCP should include all competences the learner would like to achieve during the execution of the resulting learning plan. Based on these profiles, the first story recommendation can be provided and presented to the learner. Within the virtual environment the learner is represented as an avatar and has to interact (non-verbally and verbally) with so-called non-playable characters (NPCs) to master story-dependent tasks. Stories are tailored to contribute to a learner’s competence development. The story description, encompassing tasks, characters, and background information, is presented to the learner at the very beginning. After reading this initial story manual, the learner enters a scene of the game scenario. The player learns and is assessed until the end of the game is reached, either when the story tasks have been mastered successfully or when the playing time has expired. In either case, the learner is given the chance to reflect on a diagram that presents her or his performance on the story’s competences during the last game play. The next story is then offered to the learner.
4. Conclusion and Outlook
The focus of this paper lies on the CbKST-based modeling of user competences to support adaptive guidance through competence-based learning and assessment in a DEG. A brief insight into the implementation and the application of the algorithm was provided. Evaluations of the overall TARGET platform have started. Initial feedback from users indicates that recommended stories are experienced as slightly above medium difficulty. Further studies will provide information about the appropriateness of the competences addressed in these stories. To improve the adaptation, possible subjects of adjustment are the probability thresholds that guide competence selection and the number of competences addressed by one story.
Acknowledgements
This paper is part of the EC-Project TARGET funded by the 7th Framework Program of the
European Commission. The authors are solely responsible for the content of this paper. It
does not represent the opinion of the EC, and the EC is not responsible for any use that
might be made of data appearing therein.
References
[1] Augustin, T., Hockemeyer, C., Kickmeier-Rust, M. D., Podbregar, P., & Albert, D. (submitted). The simplified updating rule. Journal of Computational Science.
[2] Bedek, M. A., Petersen, S. A., & Heikura, T. (2011). From behavioral indicators to contextualized competence assessment. In Proceedings of the 11th IEEE International Conference on Advanced Learning Technologies (pp. 274-276).
[3] Brusilovsky, P., & Millán, E. (2007). User models for adaptive hypermedia and adaptive educational systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The Adaptive Web, LNCS 4321 (pp. 3-53). Springer. doi:10.1007/978-3-540-72079-9_1
[4] Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: HarperCollins.
[5] Falmagne, J.-C., & Doignon, J.-P. (2011). Learning Spaces. Springer. doi:10.1007/978-3-642-01039-2
[6] Fischer, G. (2001). User modeling in human-computer interaction. User Modeling and User-Adapted Interaction, 11, 65-86.
[7] Heller, J., Steiner, C., Hockemeyer, C., & Albert, D. (2006). Competence-based knowledge structures for personalised learning. International Journal on E-Learning, 5(1), 75-88.
[8] Kickmeier-Rust, M. D., Hockemeyer, C., Albert, D., & Augustin, T. (2008). Micro adaptive, non-invasive assessment in educational games. In M. Eisenberg, Kinshuk, M. Chang, & R. McGreal (Eds.), Proceedings of the Second IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning (pp. 135-137). November 17-19, 2008, Banff, Canada.
[9] Steiner, C. M., Kickmeier-Rust, M. D., Mattheiss, E., Göbel, S., & Albert, D. (2012). Balancing on a high wire: Adaptivity, a key factor of future learning games. In An Alien's Guide to Multi-Adaptive Educational Computer Games. Santa Rosa, CA: Informing Science Press.
[10] TARGET project, http://www.reachyourtarget.org/, retrieved 2012-05-28.