ActiveMath:
A Generic and Adaptive Web-Based Learning Environment
Erica Melis, Eric Andrès, Jochen Büdenbender, Adrian Frischauf,
George Goguadze, Paul Libbrecht, Martin Pollet, Carsten Ullrich
DFKI Saarbrücken, D-66123 Saarbrücken, Germany
November 20, 2001
Abstract
ActiveMath is a generic web-based learning system that dynamically generates
interactive (mathematical) courses adapted to the student's goals, preferences, capabilities, and knowledge. The content is represented in a semantic xml-based format.
For each user, the appropriate content is retrieved from a knowledge base and the
course is generated individually according to pedagogical rules. Then the course is
presented to the user via a standard web-browser. One of the exceptional features
of ActiveMath is its integration of stand-alone mathematical service systems. This
offers the means for exploratory learning, realistically complex exercises as well as for
learning proof methods. The article provides a comprehensive account of the current
version of ActiveMath.
1 Introduction
Because of the enormous development and increasing availability of the Internet, web-based learning systems are becoming more important for long-distance learning, for completing and complementing traditional teaching, and for supporting life-long learning. They serve as centrally available systems that allow a user to learn in her own environment and whenever it suits her.
Hence, several web-based learning systems have been developed such as the commer-
cial Learning Spaces1 or Blackboard2. These systems offer fixed multimedia web pages and facilities for user management and communication, but most of them lack support for truly interactive problem solving and real user-adaptivity. Moreover, they use proprietary
knowledge representation formats rather than a standardized knowledge representation
which is exchangeable between systems. Some user-adaptivity is offered by systems such as
ELM-ART (Weber & Brusilovsky, 2001) and Metalink (Murray, Condit, Shen, Piemonte,
& Khan, 1999).
Tools for mathematical problem solving have been developed before, e.g., the dynamic
geometry system CabriGeometre (Balacheff, 1993). The academic PACT tutors (An-
derson, Corbett, Koedinger, & Pelletier, 1995; Corbett, Koedinger, & Anderson, 1997)
and their descendants, the cognitive tutors from Carnegie Learning, focus on mathemati-
cal problem solving at school-level in a fixed curriculum and provide predefined feedback
The project which led to the results presented in this paper was funded by the German Bundesminis-
terium f¨ur Bildung und Forschung. The authors are responsible for the content of this publication.
1http://www.ibm.com/mindspan
2http://www.blackboard.com
and/or simple dialogs. For instance, Ms. Lindquist3 web-delivers a course for "algebra word problems" which employs human-like dialogs. However, these systems do not provide a textbook-like reference or a web-based system for exploring different parts of a curriculum, and they use specific knowledge representation formats. In particular, representation and
(presentation) functionalities are not separated.
In order to provide problem solving orientation and support but also all the other useful
features, in Saarbrücken – at the DFKI and the University of Saarland – we are develop-
ing the generic, web-based, adaptive, and interactive learning environment ActiveMath.
The first area it is used for is mathematics. In particular, the first on-line course we ex-
perimented with in ActiveMath was the college-level 'Algebra Interactive' (Cohen,
Cuypers, & Sterk, 1999).
ActiveMath is realized as a client-server web-architecture that can be accessed using
standard web-browsers. ActiveMath provides a generic framework that can be filled
with instruction content as well as pedagogical knowledge. Because of its modular design
it is easily configurable with respect to components. Because of the separate representa-
tion of pedagogical knowledge, the system is configurable with pedagogical strategies and
therefore a tool for experimentation on appropriate learning settings.
In a nutshell, notable features of the current version of ActiveMath are user-adapted
content selection, sequencing, and presentation, support of active and explorative learning
by external tools, use of (mathematical) problem solving methods, and re-usability of the
encoded content as well as inter-operability between systems.
This article describes these and other features in more detail. It briefly substanti-
ates why certain goals have been targeted and realized. It describes the architecture
and explains the components of the distributed system as well as their integration and
communication.
2 Principles of ActiveMath's Design
To begin with, we describe some general principles of the system and briefly discuss them from a pedagogical and from a technical point of view. In particular, we discuss the need for an explorative, problem-solving orientation, for personalization, and for scrutability, as well as for
architectural openness, the separation of knowledge representation and functionality, and
the use of ontologies and standards.
Pedagogical Goals
ActiveMath's design aims at supporting truly interactive, exploratory learning and assumes that the student is, to some extent, responsible for her own learning. Therefore, the user is given relative freedom in navigating through a course and in making learning choices, and by default the user model is scrutable (see Kay (2000)), i.e., inspectable and modifiable.
Adaptivity Most previous intelligent tutoring systems did not rely on an adaptive choice
of content. A reason might be that the envisioned use was mostly in schools, where
traditionally every student learns the same concepts for the same use. In colleges and
universities, however, the same subject is already taught differently for different groups
of users and in different contexts, e.g., statistics has to be taught differently for students
of mathematics, economics, and medicine. Therefore, the adaptive choice of content to be
presented as well as examples and exercises is pivotal.
Moreover, web-based systems can be used in several learning contexts, e.g., long-
distance learning, homework, and teacher-assisted learning. Personalization is required
3http://www.algebratutor.org/
in all of them because even within teacher-assisted learning in a computer-free classroom
with 30 students and one teacher truly individualized learning cannot be realized. Ac-
tiveMath’s current version provides adaptive content, adaptive presentation features,
and adaptive appearance. Each user can take public and private personalized notes as
well.
Exploration and Use of Mathematical Services During the last decades, the mathematics pedagogy community has recognized that students learn mathematics more effectively if the traditional rote learning of formulas and procedures is supplemented with the possibility to explore a broad range of problems and problem situations (Schoenfeld, 1990). In
particular, the international comparative study of mathematics teaching, TIMSS (Baumert
et al., 1997), has shown (1) that teaching with an orientation towards active problem solv-
ing yields better learning results in the sense that the acquired knowledge is more readily
available and applicable especially in new contexts and (2) that a reflection about the
problem solving activities and methods yields a deeper understanding and better perfor-
mance. Therefore, ActiveMath is designed to offer not only multiple choice questions
but also more interactive kinds of exercises.
In these exercises, ActiveMath does not guide the user strictly along a predefined
expert solution. It can, however, support the student: The integrated mathematical
services can provide feedback on the user’s activities. In particular, they can check the
correctness of her solution. Moreover, the integrated mathematical services, e.g. Computer
Algebra Systems (CAS) or calculators, can support the user by automated problem solving.
That is, they can take over certain parts in problem solving and thereby help the user to
focus on certain learning tasks and to delegate routine tasks. The use of CAS, function
plotters, geometry systems, etc. has long been demanded and has partly been realized
with stand-alone tools. Those tools make realistic and complex problems an achievable
topic for a lesson and allow the students to train fundamental capabilities and skills such
as mathematical modeling for realistic problems.
Conveying Proof Planning Knowledge We believe that mathematical methods as well as know-how and know-when have to be introduced into mathematics teaching, in addition to the traditional axioms, theorems, and procedures. Indeed, first experiments (Melis,
Glasmacher, Ullrich, & Gerjets, 2001) suggest that instruction materials based on descrip-
tions of mathematics methods yield a better subsequent problem solving performance than
traditional (textbook like) instruction material.
Technical Goals
In order to exploit the potential power of web-based systems, they need an open architecture to integrate and connect to new components, including student management, assessment, collaboration tools, problem solving tools, and dialog systems. ActiveMath
has an open client-server architecture whose client can be restricted to a browser. This
architecture serves not only the openness but also the platform independence which is de-
sirable and important. Moreover, the components of ActiveMath have been deliberately
designed in a modular way in order to guarantee exchangeability and robustness.
Building quality hypermedia content is a time-consuming process, hence the content
should be reusable in different contexts. However, most of today's interactive textbooks consist of a collection of predefined documents, typically canned html pages and multimedia animations. This makes reuse in other contexts and a re-combination of the encoded knowledge impossible and inhibits a radical adaptation of course presentation and
content to the user’s needs.
ActiveMath's knowledge representation contributes to re-usability and interoperability. In particular, it is compliant with emerging knowledge representation and communication standards such as Dublin Core, OpenMath, MathML, and LOM4. This helps to ensure that the encoded content can be employed with new technologies, e.g., in browsers, in the long term. The knowledge representation used by ActiveMath is detailed in Section 4. Its key ingredients are metadata, ontological XML for mathematics, and standardized content packaging.
3 How Does ActiveMath Work
Before we go into the details of the system, we describe how ActiveMath appears to a user and which architecture and components lie behind this appearance. Let's
briefly explain ActiveMath’s functionalities with an example.
A User’s View
Eva, a student of mathematics, wants to learn everything needed to understand group
morphisms. She logs on to ActiveMath. As this is the first time Eva is using ActiveMath, she has to fill in a registration form (see Figure 1), where she specifies personal
Figure 1: The registration page of ActiveMath
preferences (e.g., field, colorful/grey presentation, preferred language...). Moreover, she
can self-assess her mastery of concepts in the overall course in the hierarchically structured
content list shown at the bottom of Figure 1. Then the main menu is presented to Eva.
She chooses her learning scenario and learning goals for that session (in a school context, a teacher could have chosen scenario and goals). Say she decides to request a 'guided tour' scenario with the goal concept 'morphism'. The required course material ('book') is generated at once. This 'book' can be a full course or just the part of a bigger course that teaches today's topic. In another session, she may request another 'book', e.g., the part devoted to the next lesson in school. Eva can also request a predefined 'book'. Each 'book' is a hierarchical structure of pages whose table of contents is annotated by colors
4http://ltsc.ieee.org/wg12/
Figure 2: A screen shot of an ActiveMath session
indicating the user's mastery level. Figure 2 shows a screen shot of a 'book'. On the left-hand side of the screen a table of contents is displayed. The user can navigate via the linked table of contents or the next/previous buttons at the bottom of each page.
Eva can now browse the book, take notes, look up concepts in a dictionary, work on
multiple-choice-questions, and solve exercises by using mathematical systems, e.g., a CAS.
If Eva wants to investigate one of the concepts in the book, she clicks on its occurrence
and obtains in a dictionary window (see Figure 3) the course element defining the concept
as well as related concepts together with their relations. This way she may gain a larger, more holistic view of the content.
When the user Bert logs on, he may fill in the questionnaire differently and may ask for other goal concepts and another scenario. If he did not choose the same predefined book as Eva, he receives another personalized course (see Figure 4), with different examples and exercises.
Realization
Now we explain from a bird's eye view how the course generation is realized and which components carry out which tasks. In what follows, we refer to Figure 5, which gives an overview of the ActiveMath components and their communication.
When a user logs on to ActiveMath, the browser connects to the web server which
functions as the bridge between the client’s browser and the ActiveMath system. The
requests from the user and responses from the system pass through it. The web server
can fulfill some requests by itself, others are passed to the appropriate ActiveMath
components. The web server contacts the session manager that sends the questionnaire
via the web server to the browser. The information provided via the questionnaire is used
to initialize and create a user model.
When the user has chosen her goal concepts and scenario, the session manager sends
this request to the course generator. The course generator is responsible for choosing
and arranging the content to be learned. The course generator contacts the mathematical
knowledge base,MBase (Franke & Kohlhase, 2000), in order to calculate which mathe-
matical concepts are required for understanding the goal concepts, checks the user model
Figure 3: A screen shot of the dictionary
Figure 4: A screen shot of Bert’s document
[Figure 5 (architecture diagram) comprises: Browser, Webserver, Session Manager, Course Generator, Pedagogical Rules, Filter, User Model (Profile, History), MBase, Proxy, a CAS console, and the Omega GUI with the external systems CAS and Omega; connection types shown: XML-RPC, Java or other, and http; data flows shown: html and XSL+URL requests.]
Figure 5: Architecture of ActiveMath
in order to find out about the user’s prior knowledge and preferences, and uses pedagogical
rules to select, annotate, and arrange the content – including examples and exercises –
in a way that is suitable for the user. The resulting linearized instructional graph, a list
of IDs of MBase-items, is sent to the session manager. From MBase the session man-
ager retrieves the actual mathematical content corresponding to the IDs. This content is
represented in an xml-format for encoding mathematics. Finally, the session manager
sends the xml-content to a filter that transforms the xml-data to html-pages which are
then presented via the user’s browser.
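To make this flow concrete, the following Java sketch outlines the request pipeline under the assumption of hypothetical interfaces (CourseGenerator, MBaseClient, PresentationFilter); the names and signatures are illustrative only and do not reflect ActiveMath's actual classes.

// Hypothetical sketch of the request pipeline described above; all names are illustrative.
import java.util.List;

interface CourseGenerator {
    // returns a linearized list of MBase item IDs for the given goals and scenario
    List<String> generateCourse(String userId, List<String> goalConcepts, String scenario);
}

interface MBaseClient {
    // fetches the OMDoc (xml) content for one item ID
    String fetchOmdocItem(String itemId);
}

interface PresentationFilter {
    // transforms OMDoc xml into an html page adapted to the user
    String toHtml(String omdocXml, String userId);
}

class SessionManagerSketch {
    private final CourseGenerator generator;
    private final MBaseClient mbase;
    private final PresentationFilter filter;

    SessionManagerSketch(CourseGenerator g, MBaseClient m, PresentationFilter f) {
        this.generator = g; this.mbase = m; this.filter = f;
    }

    // Serves one user request: generate the course, fetch the content, render html.
    String serveBook(String userId, List<String> goals, String scenario) {
        List<String> itemIds = generator.generateCourse(userId, goals, scenario);
        StringBuilder omdoc = new StringBuilder("<omdoc>");
        for (String id : itemIds) {
            omdoc.append(mbase.fetchOmdocItem(id));      // replace IDs by the actual content
        }
        omdoc.append("</omdoc>");
        return filter.toHtml(omdoc.toString(), userId);  // xsl-based filtering to html
    }
}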
Besides the dynamic generation of a book, ActiveMath offers predefined courses, i.e.,
courses whose content is predetermined by some author or teacher. Both the predefined and the dynamically created courses are presented to the user as a book, which is a hierarchy of pages.
The user’s actions are analyzed by (currently very simple) evaluators that calculate
updates of the user model. When the user logs out, her modified user model is stored.
The dictionary facility is based on the semantic information of the knowledge repre-
sentation. The presentation filter introduces links between a concept in the book and its
semantic representation in the knowledge base. From this information, a dictionary page
is generated dynamically.
4 The Knowledge Representation
As opposed to other interactive textbooks, which use a collection of html-pages (Cohen et al., 1999) or LaTeX units (Dahn & Wolters, 2000), ActiveMath uses a semantic xml-based representation of mathematical knowledge, an extension of OMDoc (Kohlhase, 2001; Kohlhase, 2000).
In ActiveMath the knowledge representation is separated from the system’s (pre-
sentational) functionalities. This separation is a key for the multiple use of the same
knowledge representation in different contexts, for reusing and combining knowledge from
different sources, and for managing knowledge with different systems and for different
functionalities.
Abstractly, the knowledge representation of the course content is a standardized se-
mantic xml expressing objects and their relations as well as meta data. This is stored in a
data base. Moreover, pedagogical knowledge is formalized in so-called pedagogical rules.
In ActiveMath, the content representation consists of items. These objects are either
concepts or additional items. A course is organized around concepts, i.e., definitions, ax-
ioms, assertions (theorems, lemmas, conjectures), proof methods, algorithms. Additional
items related to concepts are, e.g., example, exercise, elaboration, motivation, introduc-
tion for a concept. ActiveMath represents several kinds of relations between concepts
(mathematical dependency, pedagogical prerequisites, references) and between concepts and related items, such as the for relation (example for a concept, exercise for a concept, motivation for a concept, proof for an assertion). The concepts, related items, and their relations pro-
vide a generic mathematical ontology that can be used and is actually used by different
systems managing mathematical knowledge.
More concretely, our knowledge representation OMDoc is an extension of the OpenMath
standard (Caprotti, 1998). OpenMath is a semantic xml-based markup language and a
general framework for encoding mathematical objects. OpenMath exclusively deals with the representation of mathematical objects rather than with mathematical documents, which may have a complex structure of their own and need additional information such as structure and copyright. Therefore, OpenMath needed several extensions that are realized
in OMDoc. Since educational systems need additional pedagogical information we extended
the core OMDoc by pedagogical and structural elements such as types of exercises and
difficulty of examples and exercises.
OMDoc encodes mathematical objects and facts such as definitions, theorems/assertions,
proofs, examples, exercises as well as omtext (remarks, motivations) in an OMDoc docu-
ment. We refer to all the entities of a document as OMDoc items. More specifically, axioms,
definitions, theorems/assertions, structures, and proof methods are concepts. Every item
has an ID. OMDoc items consist of elements characterizing this item, e.g., attributes, types,
etc.
Figure 6 shows an OMDoc representation of a definition of a monoid. The basic items
are symbols, which denote basic mathematical entities. For instance, the symbols for an ordered pair, a unit, etc. occur in the figure. A symbol points to an OpenMath object in a so-called content dictionary (cd), e.g., OMS cd="logic1"..., where logic1 denotes a standard OpenMath content dictionary. The symbols are the simplest items carrying a
mathematical meaning. A formula is built from symbols in an OMOBJ element.
Every item can include a natural language formulation as well as formal (OpenMath)
objects. In the figure, the CMP (commented mathematical property) element contains the
natural language formulation of the definition. It includes OpenMath representations for
the objects, e.g., ordered-triple, times, unit, etc. The FMP (formal mathematical property)
contains an OpenMath object (a formula), that formalizes the content of the CMP. In the
example, the FMP encodes the following logical formula:
monoid((M, ∗, e)) ⇔ (semigroup(M, ∗) ∧ unit(e, ∗))
This means that the triple (M, ∗, e) is a monoid if and only if the pair (M, ∗) is a semigroup and e is a unit w.r.t. the operation ∗. Formal objects in OMDoc are necessary
to provide a formal input for the external problem solving systems, e.g. for a CAS or for
a proof planner.
OMDoc allows the annotation of items and documents with meta data and – for some
items – with attributes such as type and for. The meta data scheme of OMDoc is compliant with the DC (Dublin Core) meta data scheme and includes contributor, creator, translator, subject,
<definition id="c6s1p4_Th2_def_monoid" for="c6s1p4_monoid" type="simple">
<metadata>
<Title xmlns="http://purl.org/DC" xml:lang="en">Definition of a monoid</Title>
<extradata><depends-on>
<ref theory="c6s1p1_Th3" name="structure" />
<ref theory="c6s1p3_Th1" name="semigroup" />
<ref theory="c6s1p3_Th2" name="unit" />
</depends-on></extradata>
</metadata>
<CMP xml:lang="en" format="omtext">
A <ref xref="c6s1p1_Th3_def_structure">structure</ref>
<OMOBJ xmlns="http://www.openmath.org/OpenMath">
<OMA>
<OMS cd="elementary" name="ordered-triple" />
<OMV name="M" />
<OMS cd="c6s1p4_Th2" name="times" />
<OMV name="e" />
</OMA>
</OMOBJ> in which
<OMOBJ xmlns="http://www.openmath.org/OpenMath">
<OMA>
<OMS cd="elementary" name="ordered-pair" />
<OMV name="M" />
<OMS cd="c6s1p4_Th2" name="times" />
</OMA>
</OMOBJ>
is a <ref xref="c6s1p3_Th1_def_semigroup">semigroup</ref> with
<ref xref="c6s1p3_Th2_def_unit">unit</ref>
<OMOBJ xmlns="http://www.openmath.org/OpenMath"><OMV name="e" /></OMOBJ>
is called a monoid.
</CMP>
<FMP><OMOBJ>
<OMA><OMS cd="logic1" name="equivalent" />
<OMA><OMS cd="c6s1p4_Th1" name="monoid" />
<OMA>
<OMS cd="elementary" name="ordered-triple" />
<OMV name="M" />
<OMS cd="c6s1p4_Th2" name="times" />
<OMV name="e" />
</OMA>
</OMA>
<OMA><OMS cd="logic1" name="and" />
<OMA><OMS cd="c6s1p3_Th1" name="semigroup" />
<OMA>
<OMS cd="elementary" name="ordered-pair" />
<OMV name="M" />
<OMS cd="c6s1p4_Th2" name="times" />
</OMA>
<OMA>
<OMS cd="c6s1p3_Th2" name="unit" />
<OMV name="e" />
<OMS cd="c6s1p4_Th2" name="times" />
</OMA>
</OMA>
</OMA>
</OMOBJ>
</FMP>
</definition>
Figure 6: A definition of a monoid in OMDoc representation
title, description, publisher, date, type, format, identifier, source, language, relation, coverage, and rights. These metadata elements are not mandatory. For the
specific needs in ActiveMath, we extended the OMDoc meta data scheme. Currently, the
additional metadata elements are depends-on, difficulty, abstractness, and field.
depends-on contains references to the symbols for all objects that are mentioned in
the exercise. field specifies from which field the content of the item comes. The meta
data difficulty and abstractness are still problematic because they have no unique
values and thus depend a priori on the author's view. Among others, Figure 6 illustrates the depends-on metadata element, which defines dependencies on other concepts. depends-on is a relation that is employed by the course generator, as we shall see later.
The relatively verbose, semantic representation buys several clear advantages over html or other purely syntactic representations such as LaTeX. Among others:
• It provides an ontology for the content of the course, which is indispensable for reusing teaching and learning material and for combining materials for different purposes.
• It allows for an adaptive presentation which does not have to be determined (in advance) in the knowledge representation.
• The presentation formats are flexible rather than restricted to html. The OMDoc representation and current technology allow not only for rendering html pages but also for other presentation formats such as DVI, SVG, or Flash that can be used, e.g., for high-quality printing.
• The meta data annotations needed for learning systems are compliant with emerging standards.
5 Adaptive Presentation
Every learner is different, and even the same learner will have different goals in different
sessions. Therefore, the ActiveMath system offers dynamically constructed courses that
suit the learner's goals, preferences, and knowledge.
The presentation tools of ActiveMath include a course generator and pedagogical
rules employed by the course generator as well as the session manager and presentation
filters.
Course Generator
The process of course construction takes different kinds of information into account:
• The goal concepts the user wants to learn. Currently, a learning goal is a concept contained in the knowledge base rather than an abstract or pedagogical goal such as improving dependency knowledge.
• The scenario the user chooses. Currently, ActiveMath offers presentation generation for six scenarios: exam, examPreparation, overview, detailedOverview, guidedTour, and detailedGuidedTour.
• The user's knowledge mastery and action history.
• The user's capabilities to work with one of the external systems integrated into ActiveMath.
• Pedagogical rules. The course generator employs pedagogical rules to determine when which items should be presented and in which order.
The course generator realizes the following steps.
1. Starting from the goal concepts chosen by the user (or by a teacher), all concepts
they depend upon are collected recursively. This process uses the depends-on meta
data information contained in the OMDoc representation. The result is a collection of
all concepts that need to be known by the learner in order to be able to understand
the goal concepts.
2. The second step collects all additional items for the concepts, such as examples, exercises, and elaboration texts.
3. In a third step, pedagogical information represented by pedagogical rules is applied
which take information from the user model into account. The rules are used to
select and structure the gathered collection of content into an instructional graph.
This process is detailed in the Pedagogical Rules subsection below. For example, in examPreparation only definitions, assertions, methods, and exercises will be presented, whereas in the guidedTour scenario ActiveMath additionally presents examples and motivations.
Moreover, since employing an external system when working on exercises and exam-
ples requires a certain minimal familiarity with the systems, ActiveMath presents those exercises only if this capability is confirmed. In addition, pedagogical information may restrict the available features of an external system. For instance, a student learning about mathematical integration and differentiation should not use a CAS to solve his exercises completely, whereas using the CAS as a calculator for
auxiliary calculation is acceptable.
4. Finally, the pages are ordered and put into a hierarchy.
The result of the generation is a linearized collection of IDs adapted to the user's needs, preferences, and knowledge that can be transformed to html-pages.
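The recursive collection of prerequisite concepts in step 1 can be sketched as follows; this is a minimal illustration of traversing the depends-on relation, with an in-memory map standing in for the MBase lookup (the class and method names are not ActiveMath's).

// Illustrative sketch of step 1 of the course generation: recursively collecting
// all concepts the goal concepts depend on via the depends-on relation.
import java.util.*;

class PrerequisiteCollector {
    private final Map<String, List<String>> dependsOn;  // concept ID -> prerequisite IDs

    PrerequisiteCollector(Map<String, List<String>> dependsOn) {
        this.dependsOn = dependsOn;
    }

    // Returns the goal concepts plus everything they (transitively) depend on.
    Set<String> collect(Collection<String> goalConcepts) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>(goalConcepts);
        while (!todo.isEmpty()) {
            String id = todo.pop();
            if (visited.add(id)) {                        // skip already-collected concepts
                for (String prereq : dependsOn.getOrDefault(id, List.of())) {
                    todo.push(prereq);
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "monoid", List.of("semigroup", "unit", "structure"),
            "semigroup", List.of("structure"));
        // prints all concepts needed to understand 'monoid', e.g. [monoid, structure, unit, semigroup]
        System.out.println(new PrerequisiteCollector(deps).collect(List.of("monoid")));
    }
}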
Pedagogical Rules
The pedagogical rules provide the backbone for the configurability of ActiveMath. Cur-
rently, they contain some know-how on which content to present, how to present the
content in a user-adapted way, and under which conditions which service systems should
be available. So far, this know-how is obvious and simple enough. It is, of course, a subject of future research, to which we invite other groups to contribute and to use ActiveMath as a tool for experimentation.
In the third step of the course generation, pedagogical rules are employed to decide
• which additional information should be presented along with a concept,
• which exercises and examples should be presented,
• whether or not to include exercises and examples that make use of a particular external system,
• in which order the information should appear on a page.
For the evaluation of the rules ActiveMath uses Jess (Friedman-Hill, 1997), an
expert system shell. In this process, information requested from the user model and the
collected IDs of OMDoc items (annotated with the user’s knowledge mastery levels) are
(defrule PatternForDetailedGuidedTour
(scenario DetailedGuidedTour)
=>
(assert (introductions))
(assert (definitions))
(assert (elaborations))
(assert (motivations))
(assert (assertions))
(assert (proofs))
(assert (examples))
(assert (generals))
(assert (methods))
(assert (exercises))
(assert (order motivations introductions definitions methods assertions proofs
examples exercises generals elaborations)))
Figure 7: A pedagogical rule for determining a pattern for the detailedGuidedTour
scenario
(defrule PatternForExamPrep
(scenario ExamPrep)
=>
(assert (definitions))
(assert (methods))
(assert (assertions))
(assert (exercises))
(assert (order definitions assertions methods exercises)))
Figure 8: A rule determining the pattern for the examPreparation scenario
entered as facts into the JESS knowledge base. Then the rules are evaluated and generate
a collection of items to be presented.
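As a rough illustration of how such facts could be asserted and evaluated from Java, the following sketch uses Jess's standard embedding API (jess.Rete); the rule file name is a placeholder, and the surrounding code is an assumption about one possible integration, not ActiveMath's actual implementation.

// Hypothetical sketch: driving the Jess rule engine from Java to evaluate pedagogical rules.
// Assumes the rules and the deftemplates they use (e.g. definition) are declared in the
// loaded file; "pedagogical-rules.clp" is a placeholder name.
import jess.Rete;
import jess.JessException;

public class PedagogicalRuleDemo {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();
        engine.executeCommand("(batch \"pedagogical-rules.clp\")");  // load the rule base
        engine.reset();

        // Facts derived from the user's request and from the user model
        engine.executeCommand("(assert (scenario ExamPrep))");
        engine.executeCommand("(assert (exercises))");
        engine.executeCommand("(assert (definition (name monoid) (userKnowledge 0.2)))");

        engine.run();  // fire matching rules; they assert the items to be presented

        // The resulting facts, e.g. (choose-exercise-for monoid 0.3), would then be read
        // back and mapped to OMDoc item IDs by the course generator.
        engine.executeCommand("(facts)");
    }
}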
The left-hand side of a rule specifies the conditions that have to be fulfilled for the rule to fire, and the right-hand side specifies the actions to be taken when the rule fires. In the
following, we provide examples of pedagogical rules for two different types of decisions,5
where a variable is distinguished by a leading question mark.
Figure 7 shows a rule that fires if the scenario detailedGuidedTour is chosen. In this case, the rule checks whether there exists a fact (scenario DetailedGuidedTour) in the knowledge base of Jess. On the right-hand side the actions of asserting the items (proofs) ... (exercises) are specified, i.e., these facts are added to JESS' knowledge base, which means that the corresponding OMDoc items will be presented, if available. The last action
(order motivations ... elaborations) specifies the order in which the items will
appear on each page.
Figure 8 shows a pedagogical rule selecting the content for the examPreparation sce-
nario. Compared with a guided tour, only a subset of the available items will be presented.
Figure 9 shows an example of a rule that chooses exercises with an appropriate difficulty level. If exercises should be presented at all (indicated by (exercises)), and if there exists a definition d in the knowledge base of Jess, then d's name is bound to the variable ?definition and the user's knowledge of d is bound to ?user-knowledge. Jess
5The rules are a bit simplified for better readability.
(defrule RequireAppropriateExercises
(exercises)
(definition
(name ?definition)
(userKnowledge ?user-knowledge))
(test (< ?user-knowledge 0.3))
=>
(assert (choose-exercise-for ?definition 0.3))
(assert (choose-exercise-for ?definition 0.3))
(assert (choose-exercise-for ?definition 0.5))
(assert (choose-exercise-for ?definition 0.7)))
Figure 9: A rule choosing number and difficulty of exercises
allows the specification of Boolean functions (indicated by test) whose value determines whether a rule fires or not. That is, the rule in the figure fires if the user's knowledge is less than 0.3. In this case, facts are inserted into JESS' knowledge base which in turn trigger the selection of exercises for d with difficulty levels 0.3, 0.3, 0.5, and 0.7 for the presentation.
The application of the pedagogical rules transforms the heap of IDs that is gathered
in the first phase of the course generation to a sorted and grouped selection of material
that can be passed to the session manager.
The examples illustrate how nicely the rules can be used to configure ActiveMath. For instance, for a German teaching style the definitions and theorems might be presented
before the examples, whereas for an American teaching style the examples come first.
Session Management and Presentation
Essentially, the session manager has two main functionalities. The first is the actual
realization of the presentation. The second is to store and reload shorthands of documents
for resuming a course.
The actual presentation preparation is realized by servlets contained in the session
manager, e.g., a servlet calling the course generator for scenario XY. The servlets use
URL parameters to react to a web server request and deliver part of a session. These
parameters include the name of the user and session in a human readable form.
Currently, ActiveMath offers no security, and anyone's pages can be read freely even though there is a login mechanism. The simplicity of the request syntax, however, enables tunneling of requests, i.e., serving all requests (including exercise proxy messages, see Section 7) through a single TCP/IP port, thereby allowing access even from behind the strongest firewall. The same tunneling will be used to secure the connections from the client to the server using the secure HTTP protocol.
For every user, each new ‘book’ she opens creates a new session. In order to allow for
a return to the same document after a logout, a session state has to be referred to. The
information stored about a session has to be sufficient for the return. Therefore, hooked
to a session, the following information is stored:
• references to mathematical systems, to allow the restart of the exercise in case of a connection failure,
• the linearized instruction graph with page separation, to allow the presentation of pages on request,
• the names of user and session (for URLs), the goals, and the scenario name.
Content Presentation
Let's go into more detail on the actual presentation preparation, which starts with a list
of OMDoc IDs. A servlet processes the list, i.e., it fetches OMDoc items from MBase and
replaces the IDs by the actual OMDoc content. In particular, the textual content, the CMP,
is extracted from MBase and inserted in place of the ID. The resulting big OMDoc is then
rendered for the browser through a series of transformations6 described in the following.
The transformations are performed by several xslt stylesheets7: (1) the official stylesheet converting OMDoc to html, (2) a stylesheet for transforming mathematical object references to mathematical symbols, and (3) the ActiveMath-specific stylesheets for adding
features and user-adaptation (for user-adaptive display, appropriate URLs for interactivity,
etc.).
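As an illustration of how such a stylesheet chain could be applied, the following sketch uses the standard Java XSLT API (javax.xml.transform); the stylesheet file names are placeholders and not ActiveMath's actual files.

// Minimal sketch: applying a chain of xslt stylesheets to an OMDoc document.
// The stylesheet names are placeholders.
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class OmdocRenderer {
    public static String render(String omdocXml) throws Exception {
        TransformerFactory tf = TransformerFactory.newInstance();
        String[] stylesheets = {
            "omdoc2html.xsl",           // (1) official OMDoc-to-html stylesheet
            "symbol-presentation.xsl",  // (2) object references -> mathematical symbols
            "activemath-features.xsl"   // (3) user-adaptation, URLs, widget-like features
        };
        String current = omdocXml;
        for (String sheet : stylesheets) {
            Transformer t = tf.newTransformer(new StreamSource(sheet));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(current)), new StreamResult(out));
            current = out.toString();   // the output of one stage feeds the next
        }
        return current;                 // html for the browser
    }
}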
The presentation of mathematical symbols is adapted to the user’s and possibly to a
teacher's preference. For instance, the monoid from Figure 6 can be presented symbolically as monoid(M, ∗, e) but also as monoid(M, ∗, unit).
An ActiveMath-specific transformation enriches the OMDoc representation with dy-
namic information leading to widget-like features in the final display of the pages. One
feature is the adaptive annotation of the table-of-content entries. The color of a bullet
indicates the current average mastery level of the entry which is computed from the mas-
tery values of the underlying/included concepts which are retrieved from the user model.
Other planned features are a drag-and-drop facility of mathematical expressions (currently
symbols only) and a note facility that allows the user to annotate the OMDoc items with private or
public notes. Furthermore, depending on the configuration of the presentation, a referenc-
ing mechanism can take an OpenMath object and produce a hyper-link from it. Similarly,
an explicit ref element can be presented as a hyper-link.
Then xslt stylesheets convert the enhanced OMDoc-pages to html. This includes
stylesheets used to adapt the html-appearance to the user’s preferences and to the cho-
sen scenario. Similarly, the presentation of slides needs stylesheets that differ from the
stylesheet for a typical book. The last transformation adjusts URLs so that they contain
all the session and user information.
Finally, css stylesheets (Bos, Lie, Lilley, & Jacobs, 1998) determine the general layout,
e.g., color, font size, and graphical icons that indicate the type of a paragraph, i.e., indicate
whether it is a definition, example, etc. Currently, we have designed two stylesheets, one
for a colorful style and the other for a grey style, whose differences can be observed in
Figure 10.
Symbol Rendering
The rendering of mathematical symbols and expressions is a general problem for browser
delivery in web-based systems. In the past, each mathematical expression had to be presented by a specific picture. The problem is not totally resolved yet, but the OMDoc
representation allows for an essential improvement, namely the separation of the presen-
tation from the elements themselves.
Presentation tags for rendering mathematical symbols are contained in OMDoc files.
They provide an xslt template that describes, in a parametric fashion, how to produce
html for the given symbol. The xslt engine is responsible for picking the proper presenta-
tion. The browser renders symbols from html tags. Currently, we use Unicode for symbol
rendering. The rendering and layout is still highly dependent on the browser installation
and font availability and thus the resulting presentations differ for different browsers.
6These transformations are also called filters.
7http://www.w3.org/Style/XSL/
Figure 10: Two different presentations of the same content
Presentation tags can also exist for any other output target. For example, LaTeX output is under development, together with a mechanism for serving the resulting layout in a vector graphics format. MathML is also considered.
It is important to note that the presentation tags are separated from the elements
themselves. Thus it is a relatively easy task to adapt the presentation to another setting or other tastes. In this way, ActiveMath realizes the principle of separation of content and presentation.
6 User Modeling
As ActiveMath's presentation is user-adaptive, it needs to incorporate persistent information about the user as well as a representation of the user's learning progress. That is, 'static' (w.r.t. the current session) properties such as field, scenario, goal concepts, and
preferences as well as the ‘dynamic’ properties such as the knowledge mastery values for
concepts and the user’s actual behavior, have to be stored in the user model. These dif-
ferent kinds of information are stored separately and hence, the user model consists of the
components history and static and dynamic profile.
The profile is initialized with the user's entries submitted to ActiveMath's registration
page which describe the preferences (static), scenario, goals (static for the current session),
and self-assessment values for knowledge, comprehension, and application of concepts
(dynamic).
How is this information used for adapting the course document? The 'field' property plays an important role for adapting the content and presentation to users who study a topic for application in different fields. For instance, statistics can be taught differently for mathematicians, computer scientists, biologists, economists, and psychologists, and certainly with different examples and exercises. Currently, such a diverse statistics course is being prepared. The 'goals' and the 'scenario' essentially determine the items included in the presentation, and the 'preferences' determine the appearance of the gener-
ated document. Currently, the choice of presentation preferences is still pretty simplistic
(colorful and grey) but future experiments will provide psychological and instructional
evidence for different presentation configurations and appearances that are supportive for
different groups of learners.
Currently, the history information is used for the course generation and in a suggestion
mechanism. However, this information will also be valuable for future enhancements of
ActiveMath such as dialog and feedback.
User Model Components
Since we did not want to commit once and forever to a particular user model technology
but rather be able to experiment with different technologies without having to modify a
major part of the system, we specified the abstract interfaces and can thereby integrate
different user model technologies and updating functionalities. ActiveMath's distributed
architecture supports this implementation strategy. For instance, the history storage can
consist of xml-format files (as in a previous version) or of a data base (as in the current
version). Similarly, we shall experiment with a table, a data base, and Bayesian Net
technology for the concept mastery values.
History Often, progress in learning manifests itself not just in tests but in other activities of the user, and certainly achievements such as improved collaboration, self-
regulation, self-monitoring, and other important skills cannot be judged from the success
of exercising only (Fosnot, 1996; Schifter, 1996). Hence, for a more constructivist account
of learning it is indispensable to monitor the user’s activities.
The history component stores information about the actions the user performed. Its
elements, called HistoryAtoms, contain information such as the IDs of the content of a
read page or the ID of an exercise, the reading time, and the success rate of the exercise. The
HistoryAtoms are organized in sessions. Presently, the granularity of a HistoryAtom
for reading is the page level because ActiveMath does not yet monitor the user’s more
detailed reading activities. However, a poor man's eye-tracker is already implemented, and this will allow tracing the user's attention and reading time at a more detailed level.
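To illustrate the kind of data a HistoryAtom may hold, here is a minimal Java sketch; the field names and types are assumptions derived from the description above, not ActiveMath's actual data model.

// Hypothetical sketch of a HistoryAtom; fields are illustrative only.
import java.util.List;

class HistoryAtom {
    final String sessionId;        // HistoryAtoms are organized in sessions
    final List<String> itemIds;    // IDs of the content of a read page, or one exercise ID
    final long readingTimeMillis;  // time spent on the page or exercise
    final Double successRate;      // in [0,1] for exercises, null for pure reading events

    HistoryAtom(String sessionId, List<String> itemIds,
                long readingTimeMillis, Double successRate) {
        this.sessionId = sessionId;
        this.itemIds = itemIds;
        this.readingTimeMillis = readingTimeMillis;
        this.successRate = successRate;
    }
}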
Profile In order to choose and present the content user-adaptively, the user’s preferences
have to be stored. And for adapting the content annotations and computing user-adaptive
suggestions, information about the user’s concept mastery level is required.
To represent the concept mastery assessment, the current (dynamic) profile contains
values for a subset of the competences of Bloom’s mastery taxonomy (Bloom, 1956):
• Knowledge
• Comprehension
• Application.
An entry of the user model is displayed in Figure 11. The user model also stores a
justification for each value, that is, a list of pointers to the HistoryAtoms which were
responsible for changes of the knowledge mastery assessment. This list indicates, among other things, when the changes occurred and why (e.g., whether the changes occurred because
the user modified the mastery value, because she delivered some solution for an exercise,
or because she read a text).
The example in Figure 11 represents the mastery of the concept with the ID c6s6p3_As2
which the user knows with a likelihood of 30% and which she can apply with a likelihood
of 10%. The justification for this assessment is stored in the HistorySteps 5 and 7.
Updating Mechanisms
Finishing an exercise or going to another page triggers an updating of the user model.
Since different types of user actions can reflect and uncover different competencies (of
Bloom’s classification) they serve as sources for primarily modifying the values of cor-
responding competencies. In particular, reading concepts corresponds to ’knowledge’,
following examples corresponds to ’comprehension’, and solving exercises corresponds to
<Concept>
<MBaseId>c6s6p3_As2</MBaseId>
<BloomProperties>
<Knowledge>0.3</Knowledge>
<Comprehension>0.1</Comprehension>
<Application>0.1</Application>
</BloomProperties>
<Justification>
<HistoryStep>5</HistoryStep>
<HistoryStep>7</HistoryStep>
</Justification>
</Concept>
Figure 11: An excerpt from the knowledge mastery storage
’application’. One more competency, transfer, will be tested in the future corresponding
to certain exercises.
When the user model receives the notification that a user has finished reading a page,
an evaluator fetches the list of its items and their types (concept, example, . . .) and delivers
an update of the values of those items which depends on the relative reading time. When
the user finishes an exercise, an appropriate evaluator delivers an update of the values of
the involved concepts that depends on the difficulty and on the rating of how successful
the solution was with respect to these concepts.
For user model technologies with a built-in propagation mechanism, the dependen-
cies between the different competencies as well as dependencies of concepts are used to
propagate the value changes amongst entries of the user model.
The evaluator itself is an easy-to-exchange component. Currently, ActiveMath can
be configured with the following evaluators:
• an incremental updater,
• a Bayesian updater (Pearl, 1988).
The incremental updater adds fixed values to the user’s mastery assessment. In addi-
tion, it applies a function to the incrementation values that takes into account how often
the student has already seen this concept. The Bayesian updater increases or decreases
mastery values and in addition it propagates among the competency values in a way that
simulates a simple Bayesian net. In a Bayesian net user model the propagation will be
based on the conditional dependencies of concepts and of competency values.
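To make the incremental updater concrete, the following sketch shows one plausible implementation in which a fixed increment is damped by the number of previous exposures to a concept; the increment values and the damping function are invented for illustration and are not taken from ActiveMath.

// Hypothetical sketch of an incremental user-model updater; values and damping are assumptions.
import java.util.HashMap;
import java.util.Map;

class IncrementalEvaluator {
    // mastery value in [0,1] per concept ID, e.g. for Bloom's 'knowledge' competency
    private final Map<String, Double> knowledge = new HashMap<>();
    // how often the student has already seen each concept
    private final Map<String, Integer> exposures = new HashMap<>();

    // Called when the user has finished reading a page containing the given concept.
    void conceptRead(String conceptId, double baseIncrement) {
        int seen = exposures.merge(conceptId, 1, Integer::sum);
        double increment = baseIncrement / seen;  // damp the fixed increment by prior exposures
        knowledge.merge(conceptId, increment, Double::sum);
        knowledge.computeIfPresent(conceptId, (k, v) -> Math.min(v, 1.0));  // clamp to [0,1]
    }

    double mastery(String conceptId) {
        return knowledge.getOrDefault(conceptId, 0.0);
    }

    public static void main(String[] args) {
        IncrementalEvaluator eval = new IncrementalEvaluator();
        eval.conceptRead("c6s1p4_monoid", 0.1);  // first reading: +0.1
        eval.conceptRead("c6s1p4_monoid", 0.1);  // second reading: +0.05
        System.out.println(eval.mastery("c6s1p4_monoid"));  // approximately 0.15
    }
}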
7 Use of External Systems
Since ActiveMath is designed to support exploratory learning, problem solving tools
are integrated. In addition to providing the basic facilities for exploring problems, these
systems can be useful because they allow the user to focus on a particular skill to solve a specific problem; they allow the user to explore a problem interactively; in the context of exercises they can provide feedback to the user on where it is promising to explore and where dead ends are reached; and they can check the correctness of the user's calculation or derivation and provide diagnostic input for the teacher or for an evaluating function. They can check mathematical truth even for a terribly awkward user input such as 0.5 + 1/(1+1) = 1.
Several (mathematical) service systems are available, e.g., CAS such as Maple (Char,
Fee, Geddes, Gonnet, & Monagan, 1986), Mathematica (Wolfram, 1999), MuPAD (Sor-
gatz & Hillebrand, 1995), and the freely available GAP (CAS for group theory) (Schönert,
1995) as well as statistics software (SPSS), and calculators. These can be used to ef-
ficiently solve computational mathematics problems. Our group has developed another
type of mathematical service system, the prototypical proof planner Ωmega.
The distributed web-architecture of ActiveMath is well-suited for integrating external systems. Moreover, the generic, semantic knowledge representation in OMDoc is a basis for integrating different systems because, via translation engines (phrasebooks), these systems can refer to the standard semantic knowledge representation.
In the following, we concentrate on an abstract description of how exercises with ex-
ternal systems are specified and realized in ActiveMath and provide concrete examples
for the use of a CAS (Maple) and of the proof planner Ωmega.
The abstract specification of exercises currently includes:
• the specification of a problem to be solved by the user,
• three instructions to the external system:
  – a startup containing all the instructions to load libraries, problem data, variable definitions, and other setup,
  – a shutdown instruction,
  – an eval instruction to detect success or failure. eval evaluates the user's interaction and returns a success rate (a number between 0 and 1) to the proxy and possibly remarks to the user.
What is technically going on when the user chooses to solve an exercise? A user interface is started on the client side. On the server side, the external system is started and a proxy is started on the servlet server. This proxy is the central server-side object for the exercise. It lives in the servlets' virtual machine and is created for each new exercise and each user. It serves as a bridge between the user interface and the external system.
It reports start and end to the user model and it can receive monitoring queries and
instructions.
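The proxy's mediating role can be sketched roughly as follows; the interfaces for the external system wrapper and for the startup/eval/shutdown instructions are hypothetical and only mirror the abstract description above.

// Hypothetical sketch of an exercise proxy between the user interface and an external system.
// All interface and method names are invented for illustration.
interface ExternalSystem {
    String execute(String instruction);   // send one instruction, return the tool's output
}

class ExerciseSpec {
    final String startup;    // load libraries, problem data, variable definitions
    final String shutdown;   // free resources
    final String eval;       // computes a success rate in [0,1] for the user's input

    ExerciseSpec(String startup, String shutdown, String eval) {
        this.startup = startup; this.shutdown = shutdown; this.eval = eval;
    }
}

class ExerciseProxy {
    private final ExternalSystem system;
    private final ExerciseSpec spec;

    ExerciseProxy(ExternalSystem system, ExerciseSpec spec) {
        this.system = system;
        this.spec = spec;
        system.execute(spec.startup);             // here one would also report 'start' to the user model
    }

    // Forwards one user input to the external system and evaluates it.
    double handleUserInput(String input) {
        String output = system.execute(input);    // output could be echoed on the client console
        String rate = system.execute(spec.eval);  // eval returns a success rate between 0 and 1
        return Double.parseDouble(rate.trim());
    }

    void close() {
        system.execute(spec.shutdown);            // here one would report 'end' to the user model
    }
}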
CAS Exercises
Rather than implementing new input editors for every CAS, the current version of Ac-
tiveMath offers a classical console, as shown in Figure 12, which requires an input in the
language of the given system. This approach relies on the popularity of the CAS and/or
on the objective to learn the CAS-language in the course.
We have implemented some CAS-exercises with the Maple and MuPAD systems.
They ask the user to perform a computation and return the result through a variable
whose name is fixed in advance. That is, the problem specified by an exercise is the
definition of a variable and the task is to compute and bind the computation result to the
variable.
The eval instruction is executed after each user input and returns advice. Since
the CAS functionalities could be abused to compute the correct solution, an author can
restrict their use in an exercise context.
For a CAS, the abstract features of exercises are specialized as follows.
• The startup for a CAS contains the input instructions for the problem, the variable definitions, library loading, user welcoming instructions, and maybe a restriction of the CAS methods to be used.
• The shutdown for a CAS exercise is sent by the proxy to free up any resources.
Figure 12: A simple Maple exercise
• The eval for a CAS exercise checks whether the user input is a correct solution, computes the success rate, determines whether the goal is achieved, and may print suggestions or comments to the console.
The proxy of the console applet exercise type for a CAS calculation offers the function-
ality for a remote teacher to view the exercise activity as well as to provide help messages
to the learner.
Proof Planning Exercises
A proof planner is a tool for supporting mathematical theorem proving. It applies so-
called methods to perform proofs. These problem solving methods represent typical proof
steps such as induction, the application of a theorem, or the simplification of a term.
The learning of methods is one of the key issues for Ωmega’s use within the learning
environment: a user can learn in which situation which method can be applied, what
happens when it is applied, and memorize the methods. Moreover, the automatic expert
solver in the background can provide feedback, when the user encounters a dead end in
her proof attempt or when an application condition of a method does not hold.
Originally, the proof planner Ωmega was conceived merely as a proof assistant for
logic experts. In an educational environment, however, the typical user has quite different
goals and needs, so the tool and in particular its GUI is being modified. A few new
features have been introduced into Ωmega’s graphical user interface (GUI). To avoid
overloading, we designed an interaction console, shown at the bottom of Figure 13. All
the interaction functionality needed to construct a proof is accessible through this window.
The user can choose which subgoal to prove next and which method to apply from a list
of automatically selected methods. Furthermore variables can be instantiated and proof
steps can be backtracked. Whenever the user gets stuck, the automatic proof planner can
be called to perform the next step.
The proof in Figure 13 is one of the exercise problems about homomorphisms in the
Algebra Interactive (Cohen et al., 1999) used as a first testbed for ActiveMath. On
the left side the proof tree is shown. The right side can contain either the formal proof,
Figure 13: The user interface for exercises with Ωmega.
consisting of proof lines with formulae, or its verbalization. In Figure 13 the verbalization
of a partial proof is shown too. This multi-modal presentation of a proof may be helpful
for students.
The instructions for exercises are concretely specified for a proof planner as follows.
• The startup contains the loading instructions for the goal and assumptions, the strategies and methods that are allowed in that exercise, and the theory libraries to be loaded.
• The current, still preliminary, shutdown for the proof planner reports to the proxy the success/failure in terms of proved and open lines and the time taken for the solution process.
• eval instructions for the proof planner are not yet implemented.
Communication
In order to use existing mathematical software, ActiveMath needs to communicate with
these systems. The central communication component for any exercise activity is a proxy.
It is responsible for redirecting messages from the client-side GUI to the wrapper of a
computational tool. It informs the user model of the results of the user’s activity and
allows for monitoring the learner’s activity by a teacher.
As a service-management and -addressing infrastructure ActiveMath uses the Math-
Web (Franke & Kohlhase, 1999) service broker that directs requests to the appropriate
services in a network/Web and provides wrappers for the execution of stand-alone compu-
tational tools. The xml-rpc8communication with the wrappers made the computational
tools available to ActiveMath’ servlets.
8http://www.xmlrpc.org
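As an illustration of what such an xml-rpc call to a service wrapper might look like from Java, here is a sketch assuming the older Apache XML-RPC 2.x client API; the endpoint URL, method name, and CAS instruction are placeholders.

// Hypothetical sketch of calling a MathWeb-style service wrapper via xml-rpc.
import java.util.Vector;
import org.apache.xmlrpc.XmlRpcClient;

public class ServiceCallDemo {
    public static void main(String[] args) throws Exception {
        XmlRpcClient client = new XmlRpcClient("http://localhost:8080/RPC2");  // placeholder URL

        Vector<Object> params = new Vector<>();
        params.add("int(x^2, x);");                  // placeholder instruction for the wrapped CAS

        // The wrapper executes the instruction and returns the tool's output.
        Object result = client.execute("cas.execute", params);  // placeholder method name
        System.out.println(result);
    }
}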
Uniform Exercise Architecture The generic exercise architecture is shown in Figure
5. There the arrows indicate the communications, mostly xml-rpc connections. The
connections to MBase and to the MathWeb broker are both referred by static URLs,
whereas for an exercise the URL of its server proxy is dynamically created.
An exercise can be offered on the client by a Java applet or by any kind of user
interface. Its launch is triggered by an http request. This request is handled by the
proxy servlet that creates a proxy instance of the proper type and delivers the requested
data including parameters to connect to the proxy. The proxy triggers the display of the
user interface, requests the service of the computational tool needed for the exercise, loads
the authoring content from MBase, and sends it to the tool. For more technical details
see Libbrecht, Melis, Pollet, and Ullrich (2001).
8 Related Work
Some related work in the intelligent tutoring systems community is mainly documented in
this volume. ELM-ART (Weber & Brusilovsky, 2001) and its descendants are web-based
tutoring systems. ELM-ART II was designed for learning programming in LISP and inte-
grates a LISP compiler. Similar to ActiveMath it provides adaptive navigation support
by annotating links in a traffic-light metaphor and adaptive sequencing. The courses in
ELM-ART are fixed and therefore it is not possible to construct courses according to the
user's goals and chosen scenarios as in ActiveMath. ELM-ART has been under development for a long time and has reached an impressive state of maturity and, as Weber and
Brusilovsky point out in this volume, versatility. ActiveMath does not yet offer the
same range of communication tools (e.g., discussion lists, chat rooms), but the integration
of existing tools is planned.
APHID-2 (Kettel, Thomson, & Greer, 2000) extends an existing hypermedia genera-
tion system with adaptivity by defining rules that map learner information (e.g., learning
style: example) to constraints (e.g. the number of examples per page) that influence the
course generation.
The Dynamic Courseware Generator (DCG) (Vassileva, 1997) is similar to the Ac-
tiveMath system in that it generates individual courses according to the user’s goals and
knowledge. In DCG, a planner searches for sub-graphs connecting the goal concept with
concepts known by the user. A linearized version of this plan is offered to the user to follow.
If a user fails to perform successfully on tests related to a certain concept, a new course
plan can be generated. Major differences to ActiveMath are the underlying knowledge
representation of the learning material, its usage, and ActiveMath's integration of tools for true interactivity, which is not intended for DCG. In DCG, concepts have links to fixed html-pages that present the actual content to be learned, whereas in ActiveMath the
content is generated dynamically from the generic representation.
Van Marcke (Marcke, 1998) presents with GTE (Generic Tutoring Environment) an
intriguing approach to realize a generic instructional knowledge base. He defines a wide range of instructional goals that correspond to teaching tasks (e.g., Clarify-Concept) and
instructional methods that achieve or decompose the instructional goals (e.g., Clarify-
with-Analogy). Currently we are thinking about redesigning our course generator to use
a similar planning mechanism (see also Vassileva (1998)).
The interactive mathematics textbooks (Cohen et al., 1999; Dahn & Wolters, 2000) use collections of predefined LaTeX documents or HTML pages and include a fixed set of examples and exercises; the second contains interactive explorations and exercises. In Dahn and Wolters (2000), courses are split into small units (slices) that can be combined in order to construct a textbook. The only adaptivity consists in selecting those pages connected to a certain goal.
Dynamic geometry systems such as CabriGeometre and geolog are neither web-based nor user-adaptive and do not generate presentations, but they have some other features
in common with ActiveMath. In a restricted way, the system geolog (Holland, 1996)
employs an approach similar to proof planning in ActiveMath. Holland investigated
empirically how students can learn geometrical constructions and proofs in a systematic
way similar to what we call (knowledge-based) proof planning (Melis & Siekmann, 1999).
For completeness, we also review some relevant commercial systems. It is hard to
find commercial solutions comparable to ActiveMath. Two categories of systems have
features in common with ActiveMath: (1) Web presentation systems for mathematics
and (2) learning environments.
(1) Presenting mathematics on the Web has always been a relatively delicate task. The fine typesetting quality required for mathematical documents imposes strong constraints on the programs that render the presentation. Essentially, such quality has been achieved by the TeX typesetting system, which, however, is not targeted at Web presentation and is strictly presentational.
Mathematics can be displayed in Web browsers with tools such as Latex2html (http://www.latex2html.org), MathType (http://www.mathtype.com), Maple, or Mathematica (http://www.mathematica.com), which provide formulas as pictures or applets. The emerging MathML standard is a better approach, but browser support for it is still insufficient. Moreover, presentational MathML is a purely presentational approach.
As opposed to these presentational approaches, the separation of representation from presentation in ActiveMath exploits the advantages of ontological XML and makes polishing, adaptation to various Web browsers, or the generation of high-quality presentations for print (e.g., in Macromedia Flash) a task separate from authoring.
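As a minimal illustration of such a split, the following Java sketch applies an XSLT stylesheet to a semantic source document with the standard JAXP API. The file names omdoc2html.xsl and content.omdoc are assumptions made only for illustration; they do not denote ActiveMath's actual presentation pipeline.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RenderContent {
    public static void main(String[] args) throws Exception {
        // Compile a stylesheet that maps the semantic markup to one output format.
        Transformer toHtml = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("omdoc2html.xsl"));

        // The same semantic source can be re-rendered for another browser or for
        // print simply by applying a different stylesheet; authoring is untouched.
        toHtml.transform(new StreamSource("content.omdoc"),
                         new StreamResult("content.html"));
    }
}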
(2) Commercial learning portals such as WebCT (http://www.webct.com), Blackboard (http://www.blackboard.com), Learning Spaces (http://www.ibm.com/mindspan), or TopClass (http://www.wbtsystems.com/) are mostly designed to provide portals for courses and facilities like chat rooms, live group teaching, or forums, and to administer content and users. As these tools are designed to manage fixed HTML files or their proprietary formats, they cannot adapt their content selection and presentation to the user.
ActiveMath is somewhat complementary to this commercial software in that it presents content on the Web that is represented purely semantically, and it already offers some of the advantages of such an encoding, for instance the user-adaptive choice of content. Moreover, it updates the user model and integrates truly interactive exercises – features that none of these systems offers.
9 Conclusion and Future Work
This paper describes the design principles and their realization in the first version of ActiveMath. The system dynamically generates interactive (mathematical) documents according to the user's content needs and presentational preferences, and it provides facilities for user interaction with (mathematical) service systems. A demonstration of ActiveMath is available at http://www.activemath.org/demo.
ActiveMath is a generic web-based learning system with a distributed open architecture. It is a shell system that can be filled with content and pedagogical knowledge. It allows one to easily configure pedagogical strategies for choosing, presenting, and sequencing
learning content, exercises, and examples, as well as for using scaffolding mechanisms. The
open architecture can integrate new components. Currently, a prototypical proof planner
and two Computer Algebra Systems are integrated into ActiveMath.
Its standardized XML representation of knowledge and the dynamic generation of the actual presentation are well suited not only for interactive mathematics documents but, more generally, for knowledge-intensive systems whose knowledge is tedious to acquire and represent and should therefore be reused.
Future Work The research and development of ActiveMath is still at an early stage. Aside from a series of experiments planned for the near future, we are currently preparing a dynamic suggestion mechanism, a drag-and-drop mechanism that relies on the semantic representation, improved rendering of formulas, the integration of statistics software and multimedia databases, and an intelligent authoring tool. We also plan to integrate some of the existing open-source facilities such as chats and user management into ActiveMath.
More and more diverse courses, multi-modal forms of feedback, and an integration of natural-language dialogues will follow, as well as research on a pedagogically valuable and constructivist use of proof planning.
10 Acknowledgment
We thank Andreas Franke, Armin Fiedler, Hakim Freihat, Andreas Meier, Volker Sorge,
and Jürgen Zimmer for their support in getting ActiveMath off the ground. We are
indebted to Michael Kohlhase, the principal developer of OMDoc, for his close cooperation.
References
Anderson, J., Corbett, A., Koedinger, K., & Pelletier, R. (1995). Cognitive tutors: Lessons
learned. The Journal of the Learning Sciences,4(2), 167-207.
Balacheff N, G. M. (1993). Diagnostic et raisonnement explicatif dans cabri-geometre.
In B. P. Karsenty L. (Ed.), Explication et cooperation homme-machine : vers la co-
construction d’explications (p. 1-6). CNAM. (http://www.cabri.net/index-e.html)
Baumert, J., Lehmann, R., Lehrke, M., Schmitz, B., Clausen, M., Hosenfeld, I., Köller, O., & Neubrand, J. (1997). Mathematisch-naturwissenschaftlicher Unterricht im internationalen Vergleich. Leske und Budrich.
Bloom, B. (Ed.). (1956). Taxonomy of educational objectives: The classification of educa-
tional goals: Handbook I, cognitive domain. New York, Toronto: Longmans, Green.
Bos, B., Lie, H., Lilley, C., & Jacobs, I. (1998). Cascading Style Sheets, level 2 CSS2
Specification. W3C Recommendation (Tech. Rep.).
Caprotti, O., & Cohen, A. M. (1998). Draft of the open math standard. (Open Math
Consortium, http://www.nag.co.uk/projects/OpenMath/omstd/)
Char, B., Fee, G., Geddes, K., Gonnet, G., & Monagan, M. (1986). A tutorial introduction
to MAPLE. Journal of Symbolic Computation,2(2), 179–200.
Cohen, A., Cuypers, H., & Sterk, H. (1999). Algebra interactive! Springer-Verlag.
Corbett, A., Koedinger, K., & Anderson, J. (1997). Intelligent tutoring systems. In M. G. Helander, T. K. Landauer, & P. V. Prabhu (Eds.), Handbook of human-computer interaction (p. 849-874). The Netherlands: Elsevier Science.
Dahn, B., & Wolters, H. (2000). Analysis individuell. Springer-Verlag.
Fosnot, C. T. (1996). Constructivism: A psychological theory of learning. In C. T.
Fosnot (Ed.), Constructivism: Theory, perspectives, and practice (p. 8-31). New
York, London: Teachers College Press.
Franke, A., & Kohlhase, M. (1999). System description: MathWeb, an agent-based com-
munication layer for distributed automated theorem proving [LNAI]. In H. Ganzinger
(Ed.), 16th international conference on automated deduction (Vol. 397, p. 217-221).
Springer-Verlag.
Franke, A., & Kohlhase, M. (2000). MBase: Representing mathematical knowledge in
a relational data base. In F. Pfenning (Ed.), Proc. 17th international conference on
automated deduction (cade). Springer-Verlag.
Friedman-Hill, E. (1997). Jess, the java expert system shell (Tech. Rep. No. SAND98-
8206). Sandia National Laboratories.
Holland, G. (1996). Geolog-Win. Dümmler.
Kay, J. (2000). Stereotypes, student models and scrutability. In G. Gauthier, C. Frasson, & K. VanLehn (Eds.), Intelligent Tutoring Systems, 5th International Conference, ITS 2000 (Vol. 1839, p. 19-29). Springer-Verlag.
Kettel, L., Thomson, J., & Greer, J. (2000). Generating individualized hypermedia applications. In P. Brusilovsky (Ed.), Proceedings of ITS-2000 workshop on adaptive and intelligent web-based education systems (pp. 28–36). Montreal.
Kohlhase, M. (2000). OMDoc: Towards an OpenMath representation of mathematical documents (Seki Report No. SR-00-02). Fachbereich Informatik, Universität des Saarlandes. (http://www.mathweb.org/ilo/omdoc)
Kohlhase, M. (2001). OMDoc: Towards an internet standard for mathematical knowledge.
In E. R. Lozano (Ed.), Proceedings of artificial intelligence and symbolic computa-
tion, aisc’2000. Springer Verlag. (forthcoming)
Libbrecht, P., Melis, E., Pollet, M., & Ullrich, C. (2001). Interactive exercises in the ActiveMath learning environment. In ISSAC-2001 workshop on Internet Accessible Mathematical Computation. (http://icm.mcs.kent.edu/research/iamc01proceedings.html)
Marcke, K. van. (1998). GTE: An epistemological approach to instructional modeling.
Instructional Science,26, 147-191.
Melis, E., Glasmacher, C., Ullrich, C., & Gerjets, P. (2001). Automated proof planning
for instructional design. In Annual conference of the cognitive science society (p.
633-638).
Melis, E., & Siekmann, J. (1999). Knowledge-based proof planning. Artificial Intelligence,
115 (1), 65-105.
Murray, T., Condit, C., Shen, T., Piemonte, J., & Khan, S. (1999). Metalinks - a
framework and authoring tool for adaptive hypermedia. In S. Lajoie & M. Vivet
(Eds.), Proceedings of aied-99 (p. 744-746). IOS Press.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible
inference. San Francisco, California: Morgan Kaufmann. (revised second printing)
Schifter, D. (1996). A constructivist perspective on teaching and learning mathematics.
In C. T. Fosnot (Ed.), Constructivism: Theory, perspectives, and practice (p. 73-80).
New York, London: Teachers College Press.
Schoenfeld, A. (Ed.). (1990). A source book for college mathematics teaching. Washington,
DC: Mathematical Association of America.
Schönert, M. (1995). GAP – Groups, Algorithms, and Programming. Aachen, Germany.
Sorgatz, A., & Hillebrand, R. (1995). MuPAD - Ein Computeralgebra System I. Linux Magazin (12/95).
Vassileva, J. (1997). Dynamic course generation on the WWW. In B. du Boulay & R. Mizoguchi (Eds.), Proceedings of AI-ED'97, 8th World Conference on Artificial Intelligence in Education (p. 498-505). Amsterdam.
Vassileva, J. (1998). DCG+GTE: Dynamic courseware generation with teaching expertise.
Instructional Science,26, 317-332.
Weber, G., & Brusilovsky, P. (2001). ELM-ART: An adaptive versatile system for web-based instruction. International Journal of Artificial Intelligence in Education.
Wolfram, S. (1999). The Mathematica Book (4th ed.). Cambridge University Press.