Adaptive Learning Using an Integration of a Competence Model with Knowledge Space Theory
Onjira Sitthisak
School of Computer and Information
Technology
Faculty of Science, Thaksin University
Thailand
onjira.sitthisak@gmail.com
Lester Gilbert
School of Electronics and Computer
Science
University of Southampton
United Kingdom
lg3@ecs.soton.ac.uk
Dietrich Albert
Knowledge Management Institute, Graz
University of Technology, Austria
Department of Psychology, University
of Graz, Austria
dietrich.albert@{tugraz.at,uni-graz.at}
Abstract— A model for the interactions in an assessment to
support learning identifies the need for response options and
for contingent feedback, both of which pose problems when
computer-aided. Apart from the difficulties of allowing
arbitrary student responses, of judging them for correctness or
error, and of providing appropriate specific and contingent
feedback, explicitly identifying a range of options or
alternatives from which a student may make selections remains
an unsolved research problem. The “Knowledge Space Theory
(KST)” model of the domain “problems” provides some
opportunity for response options. The “Competence Based
Assessment (COMBA)” model of the required knowledge
provides some opportunity for relevant feedback. The paper
explores ComKoS, a model which integrates both approaches.
We propose to apply ComKoS and IMS QTI in Moodle to
instantiate the design and development of one kind of adaptive
testing system. This implementation overcomes limitations in
adaptability, interoperability, portability, and reusability. Key
benefits of this implementation are identified and suggestions
for future work are provided.
Keywords-knowledge space theory; competence; computer
adaptive test; Moodle; IMS QTI; e-assessment
I. INTRODUCTION
A number of computer adaptive test methods and
technologies allow learners to be tested at a level which is
appropriate to their knowledge and skill. An adaptive test
changes its behaviour and structure depending on the
learner’s responses and detected knowledge state.
A model for the interactions in an assessment to support
learning identifies the need for response options and for
contingent feedback, both of which pose problems when
computer-aided. The “Knowledge Space Theory (KST)”
model of the domain “problems” provides some opportunity
for response options. The “Competence Based Assessment
(COMBA)” model of the required knowledge provides
some opportunity for relevant feedback. The paper explores
ComKoS, a model which integrates both approaches, and
identifies key benefits and some disadvantages. We propose
to apply ComKoS and IMS QTI in Moodle to instantiate the
design and development of one kind of adaptive testing
system. Adaptive learning in Moodle requires a number of
developments of ComKoS to make it more useful. The main
contributions are summarized. An outlook on future research is
provided.
II. INTERACTIVITY IN ASSESSMENT
In broad terms, an “atomic” learning and teaching
transaction might be considered to involve three key
interactions: presenting information and supporting
materials, “show and tell”; asking the student to undertake
some appropriate learning activity, “ask”; and providing
pertinent feedback on performance, “feedback” [1]. Where
such a transaction explicitly comprises an assessment for
learning (formative assessment as distinct from summative
assessment of learning) [2], Fig. 1 illustrates a more
developed model. In particular, this model of an interactive
cycle explicitly identifies the need for choices or alternatives
to be available to the student, from which one or more
selections must be made as a response to the “ask” and
which will be judged for correctness and “feedback”
provided.
Figure 1. Cycle of interactivity [3].
Fig.1 identifies a number of the difficulties faced by any
computer-aided assessment system intended to support
learning, and particularly one which wishes to do so
automatically or semi-automatically. Apart from the
difficulties of allowing arbitrary student responses, of
judging them for correctness or error, and of providing
appropriate specific and contingent feedback, explicitly
identifying a range of options or alternatives (shaded box in
Figure 1) from which a student may make selections remains
an unsolved research problem.
III. KNOWLEDGE SPACE THEORY (KST)
Knowledge Space Theory provides a framework for
domain and learner knowledge representation and supports
the implementation of intelligent e-learning solutions.
Albert & Lukas [4] developed KST as a technique for
modelling problems or questions which students needed to
be able to answer and the solutions which teachers needed
to teach. The key insight was to model these problems as a
dependency graph or network. In such a network, given a
particular problem a, another problem b depends upon it if
the solution to a is required before b can be satisfactorily
solved. In KST, the dependency relationship is referred to as
a prerequisite relationship. A domain of knowledge is
represented by a set of typical assessment problems
(subsequently denoted by Q). The knowledge state of an
individual is identified with the subset of assessment
problems the person is capable of solving. Not all potential
knowledge states (i.e. subsets of problems) will be expected
to be observable in practice. By associating assessment
problems with learning objects, units of learning, or similar,
a structure can be established which constitutes a basis for
learning paths which are adapted to the learner's knowledge
state [5].
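The prerequisite structure described above can be sketched in code. In the following minimal sketch (the domain, problem names, and prerequisite links are illustrative, not taken from the paper), a knowledge state is any subset of Q that is closed under prerequisites: a problem can only be in a learner's state if all of its prerequisites are too.

```python
# A sketch of a KST prerequisite structure. Each problem in the domain Q
# maps to the set of problems that must be solved first; a subset of Q is
# a feasible knowledge state only if it is closed under prerequisites.
from itertools import chain, combinations

# Hypothetical domain with prerequisite links: b and c depend on a,
# and d depends on both b and c.
prerequisites = {
    "a": set(),
    "b": {"a"},
    "c": {"a"},
    "d": {"b", "c"},
}

def is_feasible(state, prereq):
    """A state is feasible iff every problem's prerequisites are also
    in the state (downward closure)."""
    return all(prereq[q] <= state for q in state)

def knowledge_states(prereq):
    """Enumerate all feasible knowledge states of the domain."""
    problems = list(prereq)
    subsets = chain.from_iterable(
        combinations(problems, r) for r in range(len(problems) + 1))
    return [set(s) for s in subsets if is_feasible(set(s), prereq)]

states = knowledge_states(prerequisites)
# Not every subset is feasible: {"d"} alone is excluded because
# d requires b and c, which in turn require a.
```

As the comment notes, the structure rules out most subsets of Q: of the sixteen subsets of this four-problem domain, only six are feasible knowledge states.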
In this way, a knowledge structure can provide the basis
for creating personalised learning paths. Furthermore, a
knowledge structure is at the core of an adaptive assessment
procedure [6]. By exploiting the prerequisite relationships
among the problems and presenting problems depending on
the learner’s previous answers, the knowledge state of a
learner can be determined by presenting him/her with only a
subset of the problems. The result of such an assessment can
be utilised as a starting point for realising individualised
learning.
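The efficiency gain described above comes from inference over the prerequisite links: a correct answer implies mastery of every prerequisite, and a wrong answer implies failure on everything that depends on the problem. The sketch below illustrates this deterministically with illustrative problem names; real KST assessment procedures typically use probabilistic updating rather than this hard inference.

```python
# A sketch of prerequisite-based inference during assessment: the system
# only asks about problems whose status cannot already be inferred.

prerequisites = {
    "a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"},
}

def ancestors(q, prereq):
    """All direct and indirect prerequisites of q."""
    result, stack = set(), list(prereq[q])
    while stack:
        p = stack.pop()
        if p not in result:
            result.add(p)
            stack.extend(prereq[p])
    return result

def dependents(q, prereq):
    """All problems that (transitively) require q."""
    return {p for p in prereq if q in ancestors(p, prereq)}

def assess(solves, prereq):
    """Determine the knowledge state while asking only undecided
    problems. `solves` is an oracle standing in for learner answers."""
    known, failed, asked = set(), set(), 0
    for q in prereq:                   # fixed order; an adaptive system
        if q in known or q in failed:  # would pick the most informative q
            continue
        asked += 1
        if solves(q):
            known |= {q} | ancestors(q, prereq)
        else:
            failed |= {q} | dependents(q, prereq)
    return known, asked

# A learner who has mastered a and b but not c: the failure on c lets
# the system skip d entirely, so only three questions are asked.
state, questions = assess(lambda q: q in {"a", "b"}, prerequisites)
```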
KST’s approach to modelling problems makes it
relatively easy for teachers and trainers to populate a KST
knowledge structure with the content of what needs to be
taught and what needs to be assessed. The competences
being addressed and assessed remain implicit in such a
structure, however, and there is no obvious way of
automating the provision of feedback for right or wrong
answers to the encoded problems. Such feedback would
typically reference the desired competence which the
assessment item tests, yet in a KST structure these
competences are not explicit.
IV. COMPETENCY MODEL
Representing competency as a rich data structure supports
collaboration between different communities and the tracking
of the knowledge state of the learner. The same competencies may appear in
more than one place in the competency hierarchy. Thus, it
makes sense to capture the data model of those competencies
in some reusable form, so they have to be defined only once.
A competency model, named Competence-Based learner
knowledge for personalized Assessment (COMBA), was
proposed by Sitthisak, Gilbert & Davis [7]. The heart of this
model is the treatment of knowledge, not as possession, but
as a contextualized multidimensional space of either actual
or potential capability.
When combined with an ontology, COMBA has been
used to automate question generation in adaptive assessment
systems [8]. The system focuses on the identification and
integration of appropriate subject matter content (represented
by a content taxonomy) and appropriate cognitive ability
(represented by a capability taxonomy) into a hierarchy of
competencies. The resulting competency structure has been
shown to be able to generate questions and tests for
formative and summative assessment. These questions can
be expressed as IMS Question and Test Interoperability
(IMS QTI) compatible XML files to enable interoperability.
COMBA is informed by the results of comparing the
competency standards against the desired taxonomy of
competence [9]. A competency involves a capability
associated with subject matter content and optionally a
contextualisation (the situation or scenario, tools, and
standard of performance). A competency can be linked to
one or more resources, and a student may evidence a
competency in one or more ways.
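The COMBA view of a competency described above can be sketched as a small data structure: a capability applied to subject matter content, an optional contextualisation, links to resources, and enabling competencies. The field names and example values here are illustrative, not the COMBA schema itself.

```python
# A minimal sketch of a COMBA-style competency record (illustrative
# field names; not the published COMBA data model).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Competency:
    capability: str                  # cognitive-ability verb, e.g. "explain"
    subject_matter: str              # content, e.g. "light-dependent reactions"
    context: Optional[str] = None    # situation, tools, performance standard
    resources: List[str] = field(default_factory=list)  # linked resources
    enabled_by: List["Competency"] = field(default_factory=list)

# An enabling competency and the competency it supports:
c1 = Competency("recall", "chlorophyll absorbs light")
c2 = Competency("explain", "light-dependent reactions",
                context="unaided, in writing", enabled_by=[c1])
```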
COMBA’s approach to modelling competence makes
explicit the competences being addressed and assessed, and
there are obvious opportunities to automate the construction
of assessment items and the provision of feedback for right
or wrong answers. On the other hand, teachers and trainers
find it relatively difficult to populate a COMBA competence
structure with the intended learning outcomes of what needs
to be taught.
V. COMKOS: INTEGRATING KST AND COMBA
While assessment items and appropriate feedback can, in
principle, be generated from a COMBA competence
structure, there are numerous practical problems in doing
this. Yet a KST network already comprises the set of
required assessment items, which practitioners find
straightforward to identify, articulate, and structure. Further,
while the assessments in a KST network have no explicit
connection to the underlying competences which are being
taught and tested, a COMBA structure already comprises the
set of required competences. Integrating the KST and
COMBA models promises to provide the best of both: a
knowledge structure more easily constructed by
practitioners, with the required assessments explicitly
articulated and their underlying competences explicitly
identified and associated. The resulting merged structure
might be called a Competence Knowledge Space (ComKoS).
Integrating KST and COMBA models may be expected
to provide some further practical advantages derived from
the process of merging a KST domain structure with the
associated domain COMBA structure. In parsing a KST
structure in preparation for its merge with a corresponding
COMBA structure, the practitioner or instructional designer
would check that each KST assessment or problem could be
associated with a COMBA competence, and that the KST
prerequisite assessments or problems were in turn associated
with COMBA enabling competences. Similarly, in parsing a
COMBA structure in preparation for its merge with the
corresponding KST structure, the practitioner or instructional
designer would check that each COMBA competence could
be associated with one or more KST assessments or
problems, and that the COMBA enabling competences were
in turn associated with KST prerequisite assessments or
problems.
In addition, the problem identified earlier of explicitly
identifying a range of options or alternatives from which a
student may make selections could be solved by appropriate
processing of a ComKoS structure. Distracting or alternative
options could be extracted from competence nodes which
were neighbours of the target competence, and could be
added to the KST problem or assessment statement.
Finally, a ComKoS structure would be expected to give a
longer life to a KST structure of assessments and problems
when merged with its underlying COMBA structure.
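The distractor idea above can be sketched as follows: plausible alternatives for a multiple-choice rendering of a KST problem are drawn from competence nodes that neighbour the target competence in the ComKoS structure. The neighbourhood graph and answer texts below are hypothetical.

```python
# A sketch of extracting distractors from neighbouring competence nodes
# (illustrative structure and answer texts).

# target competence -> its neighbours in the competence structure
neighbours = {
    "explain light reactions": ["explain dark reactions",
                                "recall chloroplast structure",
                                "describe ATP synthesis"],
}

# each competence carries a model answer to its associated KST problem
model_answer = {
    "explain light reactions": "Light energy splits water, releasing O2.",
    "explain dark reactions": "CO2 is fixed into sugar in the Calvin cycle.",
    "recall chloroplast structure": "Thylakoids are stacked into grana.",
    "describe ATP synthesis": "A proton gradient drives ATP synthase.",
}

def build_options(target, n_distractors=3):
    """Return the correct answer followed by distractors taken from
    the target competence's neighbours."""
    correct = model_answer[target]
    distractors = [model_answer[c]
                   for c in neighbours[target]][:n_distractors]
    return [correct] + distractors

options = build_options("explain light reactions")
```

Because the distractors come from competences adjacent to the target, they are topically plausible rather than random, which is the property the interactivity cycle requires of its response options.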
VI. COMKOS CONCEPTUAL MODEL
ComKoS would be a competence structure where each
competence node would be associated with a number of
assessment or problem items. Conceptually, the data model
would be the simple structure of Fig.2.
Figure 2. ComKoS conceptual model
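The conceptual model can be sketched as two linked record types: each competence node is associated with one or more assessment or problem items, prerequisite links hold between problems, and enabling links hold between competences. The sketch below uses illustrative names and is not a prescribed schema.

```python
# A sketch of the ComKoS conceptual data model: competence nodes hold
# assessment/problem items; prerequisite links between problems mirror
# enabling links between competences. (Illustrative structure only.)
from dataclasses import dataclass, field
from typing import List

@dataclass
class Problem:
    stem: str                            # the assessment item text
    prerequisites: List["Problem"] = field(default_factory=list)

@dataclass
class CompetenceNode:
    competence: str                      # the COMBA competence statement
    problems: List[Problem] = field(default_factory=list)
    enabled_by: List["CompetenceNode"] = field(default_factory=list)

# A two-node fragment: an enabling competence and its dependent.
p1 = Problem("Name the pigment that absorbs light.")
p2 = Problem("Explain the light-dependent reactions.", prerequisites=[p1])
n1 = CompetenceNode("recall chlorophyll absorbs light", problems=[p1])
n2 = CompetenceNode("explain light reactions", problems=[p2],
                    enabled_by=[n1])
```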
Constructing a ComKoS structure would involve the
merging of a COMBA structure with an associated KST
structure. In practice, it is likely that a practitioner or
instructional designer would have a list of competences or
ILOs which were considered relevant to a particular domain
rather than a developed COMBA structure. Similarly, it is
likely that the practitioner or instructional designer would
have a list of assessment items or problems rather than a
developed KST structure. We propose the following process.
1. Preliminary KST structure
The assessments or problems are arranged into a linked
list or hierarchy, such that assessments or problems earlier in
the list or towards the top of the hierarchy are linked to
prerequisite items which are later in the list or lower in the
hierarchy. Every KST problem must link to at least one
other.
2. Preliminary COMBA structure
The ILOs or statements of competence are arranged into
a linked list or hierarchy, such that competences or ILOs
earlier in the list or towards the top of the hierarchy are
linked to enabling items which are later in the list or lower in
the hierarchy. Items which remain unlinked are listed
separately. The linkage process in this step may be omitted,
and the result of the step would simply be an unstructured
list of all the competences or ILOs of the domain.
3. Merge
For each KST problem, associate the relevant
competence. Where there is more than one candidate
competence, associate the one which is lower or lowest in
the COMBA structure. It is expected that a given
competence will be associated with more than one KST
problem or assessment item. That is, a given competence or
ILO could well be assessed in a number of ways or through
a number of different assessment items.
(a) It is likely that, at first sight, a given problem might be
associated with more than one competence. Resolve this either
by associating the problem with a competence higher in the
preliminary COMBA structure or with a competence listed
separately, or by constructing a new competence whose
components (capability, subject matter, context) underlie the
problem under consideration.
(b) It is possible that a particular KST problem is not, at first
sight, associated with any competence. Either construct a new
competence, as above, which underlies the problem, or remove
the problem from further consideration.
At the end of the first pass, every KST problem is associated
with a COMBA competence. There may be unassociated
COMBA competences.
4. Preliminary ComKoS structure
Identify those competences which have associated
problems. Construct a tree structure of these competences,
and list unassociated competences separately. For each
competence node in the tree, link it to its enabling
competence(s), which are those associated with the KST
problem(s) that are prerequisites to the competence's
associated KST problem.
5. Refine the ComKoS structure
Consider the competences which do not have associated
problems. For each, either construct an appropriate problem
and iterate steps 3 and 4, or remove the competence from
further consideration.
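The first pass of step 3 can be sketched as follows. The candidate mapping, competence names, and depth function are illustrative stand-ins for the practitioner's manual mapping work; the sketch only automates the resolution rule (prefer the lowest candidate in the preliminary COMBA structure, and set aside problems with no candidate for step 3(b) treatment).

```python
# A sketch of merge step 3: associate each KST problem with a single
# competence, preferring the lowest candidate in the preliminary COMBA
# structure. Inputs are illustrative.

# candidate competences per KST problem (from a manual mapping pass)
candidates = {
    "p1": ["define photosynthesis"],
    "p2": ["explain light reactions", "define photosynthesis"],
    "p3": [],                        # no candidate found yet
}

# depth of each competence in the preliminary COMBA hierarchy
# (larger = lower in the structure, i.e. more specific)
depth = {"define photosynthesis": 1, "explain light reactions": 2}

def merge(candidates, depth):
    """First pass of step 3: every problem ends up associated with one
    competence, or is set aside for step 3(b) treatment."""
    associated, unresolved = {}, []
    for problem, comps in candidates.items():
        if not comps:
            unresolved.append(problem)  # construct a competence or drop
        else:
            associated[problem] = max(comps, key=lambda c: depth[c])
    return associated, unresolved

associated, unresolved = merge(candidates, depth)
```

After this pass, every problem in `associated` has exactly one competence, and the problems in `unresolved` either receive a newly constructed competence or are removed, matching the end-of-pass invariant stated in step 3.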
VII. A PROPOSED COMKOS ADAPTIVE TESTING SYSTEM IN
MOODLE
We propose to apply ComKoS and IMS QTI in Moodle
to instantiate the design and development of one kind of
adaptive testing system. The system would be implemented
using QTI-compliant adaptive items scored over a sequence
of attempts. This would allow the student to alter their
answer following feedback or to be posed additional
questions based on their answer as in Fig.3. Designers would
need to construct a predefined set of context situations at
design time, or could use QTI properties and conditions to
modify runtime behaviour.
An adaptive sequence would start with a problem
selected adaptively from a problem set as in Fig. 4. If the
learner can solve the problem, the next problem would be a
more difficult one selected from the ComKoS structure. For
adaptation information, the characteristics of the actual
context of execution can be captured such as the learner's
knowledge state, observation of the learner’s progress, or
time taken by the learner to answer the assessment. A
suitable adaptation would be selected and applied to fine-
tune the adaptive process.
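The adaptive sequencing described above can be sketched as a simple selection loop: step up to a harder item on a correct answer, and fall back to an unmet prerequisite on a wrong one. The problem ordering and selection rule below are illustrative; a QTI-based implementation would realise this through adaptive items and response processing rules rather than Python.

```python
# A sketch of adaptive item sequencing over a ComKoS-style structure
# (illustrative problems, ordering, and selection rule).

prerequisites = {"p1": [], "p2": ["p1"], "p3": ["p2"]}
harder = {"p1": "p2", "p2": "p3", "p3": None}

def next_problem(current, correct):
    """Select the next item: step up on a correct answer, otherwise
    fall back to the first prerequisite (None ends the sequence)."""
    if correct:
        return harder[current]
    prereqs = prerequisites[current]
    return prereqs[0] if prereqs else None

def run_session(start, solves):
    """Run an adaptive sequence; `solves` stands in for the learner's
    answers. Stops when no new problem can be selected."""
    sequence, current = [], start
    while current is not None and current not in sequence:
        sequence.append(current)
        current = next_problem(current, solves(current))
    return sequence

# A learner who can solve p1 and p2 but not p3: the session climbs to
# p3, the failure sends it back to the already-seen p2, and it stops.
path = run_session("p1", lambda p: p in {"p1", "p2"})
```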
Figure 3. QTI delivery service displaying a question, receiving an answer,
and giving feedback
Figure 4. QTI delivery service showing a QTI test in Moodle
VIII. CONCLUSIONS
The ambition of the ComKoS model is to provide all the
elements of the interactivity cycle of assessment to underpin
any computer-aided assessment system intended to support
learning, particularly one which wishes to do so
automatically or semi-automatically. In principle, the model
holds the promise of being able to provide two key elements
currently missing from practical implementations of
computer-assisted assessment – identifying plausible
options or alternatives in objective assessments, and
providing more satisfying feedback to support learning.
Plausible options may be obtained by extracting distracting
alternatives from neighbouring competence nodes and
associated KST problem statements. Better feedback given
an answer to a KST problem may be obtained by extracting
relevant information from the COMBA competence
statement which is associated with the problem.
The proposed system supports consistency checking,
assessing differences in knowledge state, and comparing
achievement in related domains. This system also has the
advantage of providing a more detailed identification of
learners’ performance.
Future work is planned to evaluate a prototype of the
proposed system in a real situation involving the domain of
photosynthesis, where finding optimised learning paths will
be explored. The value of structuring and visualizing a
knowledge domain around its needed competences and its
corresponding assessment problems will also be explored.
REFERENCES
[1] L. Gilbert, et al., "Modelling the Learning Transaction," in
Proceedings of the 5th IEEE International Conference on Advanced
Learning Technologies, Kaohsiung, Taiwan, 2005.
[2] D. Whitelock and S. Cross, "Authentic assessment: What does it
mean and how is it instantiated by a group of distance learning
academics?," International Journal of e-Assessment, vol. 2, 2012.
[3] L. Gilbert and V. Gale, Principles of eLearning Systems
Engineering: Chandos, 2007.
[4] D. Albert and J. Lukas, Knowledge Spaces: Theories, Empirical
Research, and Applications. Mahwah, NJ: Lawrence Erlbaum
Associates, 1999.
[5] C. M. Steiner and D. Albert, "Personalising Learning through
Prerequisite Structures Derived from Concept Maps," in Advances in
Web-Based Learning ICWL 2007, H. Leung, et al., Eds., ed.
Edinburgh, UK: Springer-Verlag, 2008, pp. 43-54.
[6] O. Conlan and V. P. Wade, "Evaluation of APeLS–an adaptive
eLearning service based on the multi-model, metadata-driven
approach," in Adaptive Hypermedia and Adaptive Web-Based
Systems, 2004, pp. 291-295.
[7] O. Sitthisak, et al., "Transforming a competency model to
assessment items," in The 4th International Conference on Web
Information Systems and Technologies (WEBIST), Funchal,
Madeira - Portugal, 2008.
[8] O. Sitthisak, et al., "Ontology-driven automatic generation of
questions from competency models," in The 9th International
Conference on Computing and Information Technology, Thailand,
2013.
[9] O. Sitthisak, et al., "Adapting health care competencies to a formal
competency model," in Proceedings of the IEEE International
Conference on Advanced Learning Technologies (ICALT), Niigata,
Japan, 2007.