Metadata Driven Approaches to Facilitate Adaptivity in Personalized eLearning Systems

Owen Conlan¹, Cord Hockemeyer², Vincent Wade¹, & Dietrich Albert²

¹ Knowledge and Data Engineering Group, Trinity College Dublin, Ireland
{Owen.Conlan|Vincent.Wade}@cs.tcd.ie
² Cognitive Science Section, University of Graz, Austria
{Cord.Hockemeyer|Dietrich.Albert}@uni-graz.at
Abstract
Personalized eLearning Systems tailor the
learning experience to characteristics of
individual learners. These tailored course
offerings are often comprised of discrete
electronic learning resources, such as text
snippets, interactive animations, diagrams, and
videos. An extension of standard metadata
schemas developed for facilitating the
discovery and reuse of such adaptive learning
resources can also be utilized by the eLearning
systems for realizing the adaptivity. An
important feature of such reuse-supporting adaptive systems is the clear distinction of
separate models and components within the
teaching process.
Keywords: Adaptive tutoring systems, Personalized eLearning, Metadata, System architecture.
1 Introduction
There are a number of reasons to utilize adaptive techniques to produce personalized eLearning courses, the primary one being that no two learners learn in the same way. In a traditional classroom situation learners are taught through a ‘one size fits all’ approach, where the teacher/lecturer aims not to alienate any of the learners with their pedagogical approach. With personalized courses, however, we can do better than trying not to alienate the learner – we can actively engage the learner with a teaching strategy and material that appeal to the learner’s knowledge, style of learning, etc. It would be costly and infeasible to ask a teacher/lecturer (the knowledge domain expert) to produce an individualized course for each learner in their class, i.e. to realize the private-teacher approach.
Using hypermedia systems it is possible to deliver information outside the traditional bounds of a classroom, but unless the material is tailored to the learner’s requirements the learner may not be engaged by it and may suffer from the same problems as with the ‘one size fits all’ approach. By coupling hypermedia technologies with personalization strategies we can deliver, to each individual learner, a course offering that is tailored to their learning requirements and learning styles [16]. The benefits of effective personalization are that:
• Examples and case studies that appeal to the learner’s background may be used.
• The time taken to learn the material may be reduced.
• The learner’s retention may be improved.
Underlying any eLearning adaptation there
should be sound pedagogical principles and the
knowledge of a domain expert. Without the
former any eLearning system can suffer from
the polar problems of ‘lost in hyperspace’ [9]
or the learners feeling they are being dictated
to and constrained. The latter, i.e. the domain
expert, ensures that the material is presented in a coherent and structured manner.
The goal of the EASEL (Educators Access to Services in the Electronic Landscape) project
[19], funded by the EC within its IST
programme, was to develop a framework in
which educators could assemble new
educational course offerings from existing
educational services and from material in local
and remote content repositories. The key to the
search and discovery aspects of such a system
is effective, descriptive metadata. Metadata
records include features of the learning
resources such as title, description, keywords,
author, technical requirements, etc. These
records describe the resource and facilitate
inclusion of that resource in a new course
offering. In EASEL, the primary resources
considered were remote Adaptive eLearning
Services and traditional eLearning content. In
EASEL, Trinity College, Dublin [28] and
University of Graz [14] were involved in the
design and development of such services. This
paper describes metadata-driven and model-
based approaches to realizing Adaptive
eLearning Systems developed as part of
EASEL.
2 Realizing Adaptivity through
Metadata
The vast amount of information available online has led to the development of metadata specifications that enable online resources to be catalogued and searched more efficiently. While early approaches
offered a non-standardized inline specification
of metadata, e.g. in the HTML language [8],
standardized schemas for separate metadata
specifications were developed more recently
for general [15] as well as for application
specific purposes [e.g. 25,26]. So far, such
metadata have had merely a descriptive
function as they were mostly applied to static
content.
In the case of adaptive content, however, metadata also facilitate the description of the adaptive features of the resource, e.g. what is adapted and what it is adapted to. Such categories have been described in more detail in [6,20]. This information can be utilized not only for search, e.g. for finding material supporting certain adaptivity techniques, but also for realizing the adaptivity when constructing courses from existing material. An adaptive engine may in this case select, sequence, and present resources based on the adaptivity metadata attached to these individual resources.
In the following, a first approach to realizing adaptivity through non-standardized metadata describing relationships between the learning objects of a course is described.
2.1 A Non-standardized, Metadata-based Approach to Adaptive Hypermedia Services
The relational adaptive tutoring hypertext system RATH [22,24,31], funded by the EC within its HCM programme, is a prototype of a system realizing adaptivity based on the metadata information of the content.
Adaptivity in RATH is based on the theory of
knowledge spaces [5,17,18]. This is a model
from mathematical psychology for structuring
a domain of knowledge based on prerequisite
relationships between the individual items (e.g.
learning objects or test problems). Knowledge
space theory was connected to a relational
formulation of the Dexter Hypertext Model
[21] to obtain a hypertext tutoring system
adapting to the individual user’s current
knowledge. Hyperlinks between the learning
objects are adaptively hidden whenever a
learner has not yet learned the contents of the
prerequisite learning objects.
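In knowledge space terms this adaptation rule can be stated compactly. The notation below is our own sketch rather than the formulation used in [5,17,18]: with Q the set of items, prereq(q) the prerequisites of an item q, and K \subseteq Q the learner's current knowledge state,

\[ \text{show the link to } q \iff \mathrm{prereq}(q) \subseteq K . \]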
Technically, RATH expects metadata on
prerequisite relationships for each learning
object within the respective HTML (content)
file through the HTML <META> tag, i.e. each
HTML file within a RATH course should
contain <META> entries of type
prerequisite pointing to those other
learning objects which are deemed a necessary prerequisite for understanding the
current object.
When a course is fed into the RATH system, all prerequisite relationship information is extracted from the individual files and stored in a relational database. While a learner browses through the course, the learner model (i.e. the system’s model of the learner’s knowledge) is extended by each visited learning object. At certain points, test problems additionally have to be solved, thus validating the learner model.
As RATH is a prototypical system, its implementation has been kept rather simple.
Whenever a learning object is requested, the
prerequisites for each linked document are
retrieved from the database and compared to
the current learner model. All communication
between web server and database is done
through the standard CGI interface and small
programs for retrieving the necessary
information from the database.
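For illustration only, the following Python sketch mimics this mechanism: prerequisite <META> entries are extracted from a learning object and compared with the learner model to decide whether a link should be shown. The attribute names, file identifiers, and the in-memory learner model are assumptions made for the example; RATH itself performs these steps with CGI scripts and a relational database.

# Illustrative sketch only; identifiers and attribute names are assumptions.
from html.parser import HTMLParser

class PrerequisiteExtractor(HTMLParser):
    """Collects <META name="prerequisite" content="..."> entries."""
    def __init__(self):
        super().__init__()
        self.prerequisites = set()

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "prerequisite":
            self.prerequisites.add(attributes.get("content", ""))

def prerequisites_of(html_text):
    parser = PrerequisiteExtractor()
    parser.feed(html_text)
    return parser.prerequisites

# A learning object declaring one prerequisite (hypothetical identifiers).
unit_b = """<html><head>
  <meta name="prerequisite" content="unit-a.html">
  <title>Unit B</title>
</head><body>...</body></html>"""

# Learner model: the set of learning objects already learned.
learner_model = {"intro.html"}

# Adaptive link hiding: show the link only if all prerequisites are known.
show_link = prerequisites_of(unit_b) <= learner_model
print("show link to unit-b.html:", show_link)   # False until unit-a.html is learned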
2.2 The Competence-Performance Approach as a Factor of Reusability
Describing the prerequisite structure between the learning objects through direct prerequisite links, as was done in the RATH system, involves difficulties in dynamic domains. Whenever learning objects are changed, added, or deleted, the prerequisite relationships to and from many other objects have to be rechecked for validity. This problem has already been discussed with respect to knowledge space theory, without convincing results [4].
A solution to this can be found in Korossy’s
competence-performance approach [29,30].
Investigating the cognitive background of
knowledge spaces, Korossy differentiated
between observable performances and the
underlying, not directly observable compe-
tencies. He defines a complete competence-
performance structure by the structures within
the sets of competencies and performances,
respectively, and by the mappings between
competencies and performances.
Based on experiences in developing a course for RATH [2,31], Hockemeyer takes up Korossy’s approach in the form of mappings between learning objects and underlying competencies. Furthermore, by dividing the set of competencies assigned to an object into two subsets of required and of taught competencies, he obtains teaching structures of competencies [23].
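Stated compactly (our notation, a sketch of the idea rather than the formal treatment in [23,29,30]): each learning object o is assigned a set req(o) of required and a set taught(o) of taught competencies, and for a learner in competence state S,

\[ o \text{ is accessible} \iff \mathrm{req}(o) \subseteq S, \qquad S' = S \cup \mathrm{taught}(o) \text{ after working through } o . \]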
Such assignments of required and of taught
competencies for a learning object can directly
be expressed through metadata. An adaptive
tutoring system can then use a learner model of
competencies and can adapt to the learner’s
current knowledge by comparing the
competencies required for the learning object
in question with the learner’s competence state.
This approach has been applied in the APeLS
system described in Section 3 below. The
benefits with respect to reusability of learning objects and to the dynamics of courses have been demonstrated by building a course on mechanics out of sections from two different courses. All
learning objects were described with metadata
on required and taught competencies. The new
course could simply be built by feeding all the
individual learning objects into the APeLS
system.
2.3 Standardized Metadata for Describing and Realizing Adaptivity
As already mentioned above, standards for
metadata describing eLearning resources have
been developed in recent years [e.g. 26].
However, these standards are not capable of
describing adaptivity of learning resources.
Within the EASEL project [19], extensions of
an existing metadata schema covering
adaptivity information have been proposed
[3,10]. The basic idea is an adaptivity block within the education-related metadata that contains an arbitrary number of adaptivitytype entries, one for each type of adaptivity realizable with this piece of content. These adaptivitytype entries then contain candidates that may be hierarchically grouped by sets allowing, e.g., an and-or structure between the candidates. The candidates contain the actual values, possibly in a sequence of langstrings allowing the same values to be specified in multiple languages.
The following example shows an adaptivitytype entry describing the competencies required for understanding the current document. There is a set of candidates, all of which should be known. The competence A is described in two languages, but the surrounding candidate block clearly states that competence-A and Kompetenz-A denote the same competence in different languages.
<adaptivitytype name="competencies.required">
  <set type="all">
    <candidate>
      <langstring lang="en">
        competence-A
      </langstring>
      <langstring lang="de">
        Kompetenz-A
      </langstring>
    </candidate>
    <candidate>
      ...
    </candidate> ...
  </set>
</adaptivitytype>
Within the adaptivitytype tag it is also possible
to specify a reference document explaining the
terms used for the entries.
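As a sketch of how an engine might consume such a block, the following Python fragment extracts the required competencies from an adaptivitytype entry of the form shown above and checks them against a learner’s competence state. The second competence label and the learner state are hypothetical additions for the example.

# Sketch only; the XML follows the example above, the learner state is hypothetical.
import xml.etree.ElementTree as ET

adaptivity_xml = """
<adaptivitytype name="competencies.required">
  <set type="all">
    <candidate>
      <langstring lang="en">competence-A</langstring>
      <langstring lang="de">Kompetenz-A</langstring>
    </candidate>
    <candidate>
      <langstring lang="en">competence-B</langstring>
    </candidate>
  </set>
</adaptivitytype>
"""

def required_competencies(xml_text, lang="en"):
    """Return the competencies listed as required, using one language variant."""
    root = ET.fromstring(xml_text)
    if root.get("name") != "competencies.required":
        return set()
    required = set()
    for candidate in root.iter("candidate"):
        for langstring in candidate.findall("langstring"):
            if langstring.get("lang") == lang:
                required.add(langstring.text.strip())
    return required

learner_competencies = {"competence-A"}
missing = required_competencies(adaptivity_xml) - learner_competencies
print("accessible:", not missing)   # False: competence-B has not been learned yet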
Figure 1 – Architecture of the PLS [Conlan et al, 2002a]. Components shown: Adaptive Engine, Rules Engine, Candidate Selector, Learner Modeler, Learner Metadata Repository, Content Metadata Repository, Narrative Metadata Repository, Content Repository, Narrative Repository, Candidate Content Groups, Candidate Narrative Groups, Learner Input, Personalized Course Model, Personalized Course, Content.
3 Models and Candidacy in Adaptive eLearning Systems
In this section, an architecture for an adaptive system based on separate data models and on the separation of concepts and contents is introduced, which may help solve the problems mentioned in Section 2.1 above.
The principal element of any eLearning system is the learner, or, more accurately, how precisely the system models the learner. Most eLearning
Systems that support adaptive techniques have
two other models – Content Model and
Narrative Model, though these models are
often intertwined. The content model
represents the learning resources within the
system and the narrative model embodies the
ways in which that content may be sequenced
for the learner. It is the reconciliation of these
three models that produces personalized
courses.
The Personalized Learning Service (PLS) [12],
developed by Trinity College, Dublin,
separates these three models into discrete
elements of the service.
The advantage of this separation into discrete
models (see Fig. 1) is that the content is now
independent of the narrative and can be reused
in other eLearning services or courses. The
PLS also supports a candidacy architecture
[12] that enables the narrative to refer to
learning concepts, rather than individual pieces
of content. This approach enables an individual
concept to be fulfilled by an appropriate
candidate at runtime (see Section 3.4, Metadata and Candidates as a Basis for Adaptation).
3.1 Generic Standards-based Approach
to Adaptive Hypermedia Services
As part of EASEL, two approaches to developing adaptive hypermedia services were explored. The service architecture, and the differences between Adaptive Hypermedia Services and Adaptive Hypermedia Systems, are detailed in [11]. The first approach, the Personalized Learning Service [12], explored many of the basic principles employed and enhanced in the second iteration. The primary goals of the second adaptive hypermedia service, called the Adaptive Personalized eLearning Service (APeLS), were to:
• Ensure that the flexibility of the rules engine was maintained.
• Expand the candidacy approach to cater for n models (rather than just three).
• Utilize an approach that can use this metadata as a means to produce adaptive effects.
• Maintain the standards-based metadata for describing the content model.
3.2 Flexible Rules Engine
The rules engine employed is based on JESS
[27], as was the original PLS rules engine. The
narrative paradigm, where an individual
narrative embodies the flow and conditions for
assembling a personalized course offering, was
extended to facilitate:
• Open course structure
• Reusable sub-narratives
• Metadata-based decision making
• Re-usable selection processes
The original PLS was constrained to the course
model represented internally in the adaptive
engine implementation. This model was of a
traditional course-section-unit-content form. In
APeLS it was decided that this model was too
restrictive to represent the variety of
pedagogical approaches that the course
(narrative) author may wish to express. To this
end, APeLS was designed to allow narratives
to build any DOM (Document Object Model)
they required directly from the narrative rules.
The DOM could then be expressed in XML
and passed through a transformation, as in the
PLS, to produce a rendering of that course.
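The fragment below illustrates the idea only: in APeLS the DOM is assembled by JESS narrative rules, whereas here a hypothetical course structure (element names are our own) is built directly and serialized to XML, ready to be handed to a transformation for rendering.

# Sketch of the idea; element names and structure are illustrative assumptions.
import xml.etree.ElementTree as ET

course = ET.Element("course", title="Mechanics")
module = ET.SubElement(course, "module", title="Statics")
# The narrative refers to concepts (candidate groups), not to concrete content.
ET.SubElement(module, "candidateGroup", concept="lever-principle")
ET.SubElement(module, "candidateGroup", concept="equilibrium")

# The resulting XML would then be passed through a transformation (e.g. XSLT)
# to produce the rendered course, as in the PLS.
print(ET.tostring(course, encoding="unicode"))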
APeLS was also designed to utilize the capability of JESS to call other sets of rules, called batches, from within another rule set. This translates to narratives being able to call sub-narratives. As all narratives have associated metadata, the calling of sub-narratives can use the same principles of candidacy used in the narrative selection and content selection processes of the PLS. The ability to use finer-grained narratives to constitute a larger narrative enables the course author to produce re-usable narratives and build a repository of such narratives (described by their associated metadata). If the design DOM hierarchy is replicated at different levels within the produced course, then, in theory, the sub-narrative could be inserted at any point in the course and still produce a valid DOM.
3.3 N Models and Collections
The original PLS was based on three models –
Learner, Content and Narrative. This approach,
however, precluded the possibility of
expanding to other models. For example, it
may be desirable to represent aspects
pertaining to the learning environment,
learning device (PDA, WAP, eBook etc.),
learner’s peers, or overall curricula. Each of these aspects should be represented by a separate model that the narratives can reference if required. With the capabilities of metadata-
based decision making it is possible to query
any metadata model. The problem remains,
however, of how to organize the models in
such a way that the metadata is accessible and
the principle of candidacy is maintained.
The data storage of the PLS was based on a
relational database model that was capable of
storing any XML structure in a generic fashion.
The downside of this approach was that, for large numbers of records, the tables grew very large, with no mechanism for segregating and identifying the different models represented.
For multiple models to be feasible it is
necessary to collect like models together to
ease querying of the metadata. To this end,
Xindice [7] was chosen as the data storage
facilitator. Xindice (originally dbXML) is a
database that thinks in terms of XML. Most
relational databases offer XML import and
export facilities, but the underlying structure is
still relational. Xindice utilizes relational
principles, but natively understands XML and
offers XPath query services. It is also capable
of organizing XML documents into collections,
facilitating the dynamic creation of such
collections as well.
Using Xindice it is possible to have n models,
each distinctive model being stored in a
separate collection. For example, APeLS could
be used with four models – Learner, Content,
Device and Narrative. The Device model may
represent aspects of the learning device such as screen real estate (resolution), network bandwidth, and input device (stylus, mouse, or touchpad). The narratives can access this
metadata information and use it as a basis for
modifying the course structure or at the content
selection stage they can choose a candidate that
best suits the device. As the modification or addition of models and collections does not require a recompilation of the engine, the course author can decide to add or change models as required when developing a new course.
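The sketch below mimics this organization with in-memory collections of XML documents; in APeLS the documents would reside in Xindice collections and be queried via XPath. The metadata fields (e.g. minWidth) and document structures are illustrative assumptions, not the actual APeLS schemas.

# Sketch only; fields and structures are assumptions, not the APeLS schemas.
import xml.etree.ElementTree as ET

collections = {
    "device": [ET.fromstring(
        '<device id="pda-01"><screen width="320" height="240"/></device>')],
    "content": [
        ET.fromstring('<resource id="animation-1" minWidth="800"/>'),
        ET.fromstring('<resource id="text-1" minWidth="240"/>'),
    ],
}

device = collections["device"][0]
width = int(device.find("screen").get("width"))

# Candidate selection: keep only content whose requirements fit the device.
candidates = [r.get("id") for r in collections["content"]
              if int(r.get("minWidth")) <= width]
print(candidates)   # ['text-1']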
3.4 Metadata and Candidates as a basis
for Adaptation
Using the mechanisms outlined above one can
use the descriptive metadata of any of the
models as a basis for adaptation. This enables
the course author to create narratives that add
either candidate groups of content or candidate
groups of sub-narratives to a narrative based on
the comparison of their metadata with that on
the learner. The execution of the narrative may utilize any metadata relating to the content for realizing the adaptation. For example, the metadata may describe required and taught competencies (in accordance with a model such as knowledge space theory [17,18] and its cognitive extensions [5]), and the engine may compare these values with the learner’s learned competencies as the basis for adaptation.
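A minimal sketch of such a comparison is given below; the competency labels and the deliberately simple selection policy are assumptions made for the example, whereas the real candidate selector consults the full metadata of the learner, content, and narrative models.

# Minimal sketch; competency labels and the selection policy are assumptions.
candidate_group = {                       # candidates for the concept "equilibrium"
    "video-advanced":  {"required": {"lever-principle", "vectors"},
                        "taught":   {"equilibrium"}},
    "text-elementary": {"required": {"lever-principle"},
                        "taught":   {"equilibrium"}},
}

learned = {"lever-principle"}             # the learner's learned competencies

# Select candidates whose required competencies are already held by the learner.
eligible = [name for name, meta in candidate_group.items()
            if meta["required"] <= learned]
print(eligible)                           # ['text-elementary']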
When a personalized course is created, the first step creates a personalized course model that details the candidate groups, from which candidates will later be selected, that fulfil the learner’s learning objectives. This is a fuzzy form of the adaptive course as, in the case of content for instance, it says what concept should be delivered, but not which content candidate will be delivered at runtime. Similarly, sub-narratives do not need to be reconciled until runtime, enabling changes in the learner model to influence later candidate selection without impacting on candidates already selected. If the course author desires, this approach can support an evolving form of adaptivity, where the whole course is not recompiled, but the blanks (candidate groups) are filled as required, allowing the decisions that fill those blanks to be made using the latest user information.
The second advantage of candidate groups is
that more candidates may be created for a
candidate group as required. As narratives refer
to candidate groups, rather than individual
candidates, the narrative requires no re-
authoring if more candidates are created. This
is true for both sub-narrative candidates and
content candidates. If required, the selection
process may be updated to account for the new
candidates, but this is often unnecessary if the
candidates are described using the same
metadata schema. In this way, the problem with dynamic domains experienced with the RATH system (see Section 2.1 above) is also solved.
As both sub-narrative and content candidates are described using standards-based metadata, they may be incorporated into many courses or added to a content repository for searching and discovery. This dramatically increases the content’s potential for reuse [13]. As quality eLearning material is expensive to produce, both in terms of time and money, the disadvantage of having to author accompanying metadata is outweighed by the potential for reuse. The fact that aspects of this metadata may be used as part of the adaptive process increases its value.
4 Conclusion
In this paper, we have presented an approach to realizing adaptivity in eLearning systems in a metadata- and model-driven way. Thus, metadata originally developed for the description of adaptivity can also be applied for its implementation. An important element of this is the application of a candidacy architecture, i.e. the separation of abstract concepts to be taught from their concrete instantiation as learning objects. This separation corresponds to the distinction between competencies and performances in the psychological theory of knowledge spaces, which facilitates the application of that theory to adaptive and personalized eLearning.
Acknowledgments
Most of the research reported in this paper was
funded by the EC within its IST programme
through Grant IST-1999-10051 to the EASEL
consortium. The development of RATH was
funded by the EC through a Marie Curie
Fellowship (Grant ERBFMBICT983377) to the
second author.
References
1. Albert, D. & Hockemeyer, C. Adaptive
and dynamic hypertext tutoring systems
based on knowledge space theory. In
du Boulay, B. & Mizoguchi, R., editors,
Artificial Intelligence in Education:
Knowledge and Media in Learning
Systems, volume 39 of Frontiers in Ar-
tificial Intelligence and Applications, pp.
553-555, Amsterdam, 1997. IOS Press.
2. Albert, D. & Hockemeyer, C. Applying
demand analysis of a set of test problems
for developing an adaptive course.
Proceedings of the International Con-
ference on Computers in Education
ICCE2002, accepted for publication.
3. Albert, D., Hockemeyer, C., Conlan, O. &
Wade, V. Reusing Adaptive Learning
Resources. In C.-H. Lee et al. (Eds).
Proceedings of the International Con-
ference on Computers in Educa-
tion/SchoolNet2001. Incheon, Korea:
Incheon National University of Education.
Vol. 1, pp. 205-210.
4. Albert, D. & Kaluscha, R. Adapting
knowledge structures in dynamic domains.
In Herzog, C., editor, Beiträge zum Achten
Arbeitstreffen der GI-Fachgruppe 1.1.5/7.0.1 "Intelligente Lehr-/Lernsysteme", September 1997, Duisburg,
Germany, pp. 89-100. TU München, 1997.
5. Albert, D. & Lukas, J.: Knowledge Spaces:
Theories, Empirical Research, and
Applications. Berlin: Springer, 1999.
6. Albert, D. & Mori, T. (2001). Contribu-
tions of cognitive psychology to the future
of e-learning. Bulletin of the Graduate
School of Education, Hiroshima
University, Part I (Learning and
Curriculum Development), 50, 25-34.
7. Apache Xindice. http://xml.apache.org/
xindice.
8. Berners-Lee, T. & Connolly, D.: Hyper-
Text Markup Language Specification
2.0. IETF Request for Comments 1866,
1995.
9. Conklin, J. Hypertext: An introduction
and survey. IEEE Computer, 20(9), 17-41,
1987.
10. Conlan, O.; Hockemeyer, C.; Lefrere, P.;
Wade, V.; Albert, D.. "Extending
Educational Metadata Schemas to Describe
Adaptive Learning Resources." Hypertext
'01: Proceedings of the Twelfth ACM
Conference on Hypertext and Hypermedia.
New York: ACM, 2001. pp.161-162.
11. Conlan, O.; Hockemeyer, C.; Wade, V.;
Albert, D.; Gargan, M. An Architecture for
integrating Adaptive Hypermedia Service
with Open Learning Environments.
Proceedings of ED-MEDIA 2002, World
Conference on Educational Multimedia,
Hypermedia & Telecommunications,
Denver, Colorado, June 2002.
12. Conlan, O.; Wade, V.; Bruen, C.; Gargan,
M. Multi-Model, Metadata Driven
Approach to Adaptive Hypermedia
Services for Personalized eLearning. In: de
Bra, P., Brusilovsky, P., & Conejo, R.
(Ed.), Adaptive Hypermedia and Adaptive
Web-Based Systems, pp. 100 – 111. New
York: Springer.
13. Conlan, O.; Dagger, D.; Wade, V. Towards
a Standards-based Approach to e-Learning
Personalization using Reusable Learning
Objects. E-Learn 2002, World Conference
on E-Learning in Corporate, Government,
Healthcare and Higher Education,
Montreal, September 2002 (paper
accepted).
14. Cognitive Science Section, Department of
Psychology, University of Graz, Austria.
URL: http://wundt.uni-graz.at/.
15. Dublin Core Metadata Initiative. URL:
http://dublincore.org.
16. De Bra, P., Brusilovsky, P., & Conejo, R.
(ed.) (2002). Adaptive Hypermedia and
Adaptive Web-Based Systems. New York:
Springer-Verlag.
17. Doignon, J.-P. & Falmagne, J.-C. Spaces
for the assessment of knowledge.
International Journal of Man-Machine
Studies, 23, 175-196, 1985.
18. Doignon, J.-P. & Falmagne, J.-C. (1999).
Knowledge Spaces. Springer-Verlag,
Berlin.
19. EASEL: Educators Access to Services in
the Electronic Landscape. EC project IST-
1999-10051. URL: http://www.fdgroup.co.uk/easel.
20. Easel Consortium. D03 Requirements
Specification, v. 1.4. URL: http://www.fdgroup.co.uk/easel/documents/D3-Update.doc.
21. Halasz, F. & Schwartz, M.. The Dexter
hypertext reference model. In Moline, J.,
Benigni, D., & Baronas, J., editors.
Proceedings of the Hypertext
Standardization Workshop, volume 500-
178 of NIST Special Publications, pp. 95 -
133. Gaithersburg, MD: National Institute
of Standards and Technology.
22. Hockemeyer, C. RATH – A Relational Adaptive Tutoring Hypertext WWW-Environment. Technical Report 1997/3,
Institut für Psychologie, Karl-Franzens-
Universität Graz, Austria, 1997.
23. Hockemeyer C., Held, T., & Albert, D.
RATH - a relational adaptive tutoring
hypertext WWW-environment based on
knowledge space theory. In Christer
Alvegård, ed., CALISCE`98: Proceedings
of the Fourth International Conference on
Computer Aided Learning in Science and
Engineering, pp. 417-423, Göteborg,
Sweden: Chalmers University of
Technology, 1998.
24. Hockemeyer, C. Extending the
Competence-Performance-Approach for
Building Dynamic Adaptive Tutoring
Systems. Talk at the 33rd European
Mathematical Psychology Group Meeting
EMPG 2002.
25. IEEE Learning Technology Standards
Committee (LTSC). URL: http://ltsc.ieee.org.
26. IMS Learning Resource Meta-data
Specification. URL: http://imsproject.org/metadata/index.html.
27. JESS: Java Expert System Shell, Version
6.0. URL: http://herzberg.ca.sandia.gov/jess/.
28. Knowledge and Data Engineering Group,
Department of Computer Science, Trinity
College, Dublin. URL:
http://kdeg.cs.tcd.ie.
29. Korossy, K. Extending the theory of knowledge spaces: A competence-performance approach. Zeitschrift für Psychologie, 205, 53-82, 1997.
30. Korossy, K. Modelling knowledge as
competence and performance. In Albert, D.
& Lukas, J., editors, Knowledge Spaces:
Theories, Empirical Research,
Applications, pp. 103-132. Lawrence
Erlbaum Associates, Mahwah, NJ, 1999.
31. RATH (Relational Adaptive Tutoring Hypertext). URL: http://wundt.uni-graz.at/rath/.