Domain-Knowledge Manipulation for
Dialogue-Adaptive Hinting
Armin Fiedler and Dimitra Tsovaltzi
Department of Computer Science, Saarland University,
P.O. Box 15 11 50, D-66041 Saarbrücken, Germany.
1. Introduction
Empirical evidence has shown that natural language (NL) dialogue capabilities are a
crucial factor in making human explanations effective [6]. Moreover, the use of teach-
ing strategies is an important ingredient for intelligent tutoring systems. Such strategies,
normally called dialectic or socratic, have been demonstrated to be superior to pure ex-
planations, especially regarding their long-term effects [8]. Consequently, an increas-
ing though still limited number of state-of-the-art tutoring systems use NL interaction
and automatic teaching strategies, including some notion of hints (e.g., [3,7,5]). On the
whole, these models of hints are somewhat limited in capturing their various underlying
functions explicitly and relating them to the domain knowledge dynamically.
Our approach is oriented towards integrating hinting in NL dialogue systems [11].
We investigate tutoring proofs in mathematics in a system where domain knowledge, di-
alogue capabilities, and tutorial phenomena can be clearly identified and intertwined for
the automation of tutoring [1]. We aim at modelling a socratic teaching strategy, which
allows us to manipulate aspects of learning within NL dialogue interaction, such as helping the student build a deeper understanding of the domain, reducing cognitive load, promoting schema acquisition, and managing motivation levels [13,4,12]. In contrast to most existing tutorial systems, we make use of a specialised domain reasoner [9]. This design enables detailed reasoning about the student's actions and elaborate system feedback [2].
Our aim is to dynamically produce hints that fit the needs of the student with regard to the particular proof. Thus, we cannot restrict ourselves to a repertoire of static hints that associate a student answer with a particular response by the system. We developed a multi-dimensional hint taxonomy where each dimension defines a decision point for the associated cognitive function [10]. The domain knowledge can be structured and manipulated for tutoring decisions and generation considerations within a tutorial manager. Hint categories abstract from the specific domain information and the way it is used in tutoring, so that the domain can be replaced. Thus, the teaching strategy and the pedagogical core of the tutorial manager can be retained for different domains. More importantly, the discourse management aspects of the dialogue manager can be manipulated independently.
2. Hint Dimensions
Our hint taxonomy [10] was derived with regard to the underlying function of a hint that
can be common for different NL realisations. This function is mainly responsible for the
educational effect of hints. To capture all the functions of a hint, which ultimately aim
at eliciting the relevant inference step in a given situation, we define four dimensions of
hints: The domain knowledge dimension captures the needs of the domain, distinguish-
ing different anchoring points for skill acquisition in problem solving. The inferential
role dimension captures whether the anchoring points are addressed from the inference
per se, or through some control on top of it for conceptual hints. The elicitation status di-
mension distinguishes between information being elicited and degrees to which informa-
tion is provided. The problem referential perspective dimension distinguishes between
views on discovering an inference (i.e., conceptual, functional and pragmatic).
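To make the shape of the taxonomy concrete, the following minimal Python sketch paraphrases the four dimensions as enumerations and a hint category as a choice along each of them. It is purely illustrative: the names and, in particular, the concrete values of the elicitation status dimension are our assumptions, not the system's actual representation.

    from dataclasses import dataclass
    from enum import Enum, auto

    class InferentialRole(Enum):
        """Whether the anchoring point is addressed from the inference per se or via control."""
        INFERENCE = auto()
        CONTROL = auto()

    class ElicitationStatus(Enum):
        """Whether information is elicited or provided, and to what degree (values assumed)."""
        ELICIT = auto()
        GIVE_AWAY_PARTIAL = auto()
        GIVE_AWAY = auto()

    class ProblemReferentialPerspective(Enum):
        """Views on discovering an inference."""
        CONCEPTUAL = auto()
        FUNCTIONAL = auto()
        PRAGMATIC = auto()

    @dataclass(frozen=True)
    class HintCategory:
        """A hint category is a choice along each of the four dimensions."""
        domain_knowledge: "AnchoringPoint"  # an anchoring point; see the sketch further below
        inferential_role: InferentialRole
        elicitation_status: ElicitationStatus
        perspective: ProblemReferentialPerspective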
In our domain, we defined the inter-relations between mathematical concepts as well
as between concepts and inference rules, which are used in proving [2]. These concepts
and relations can be used in tutoring by making the relation of the used concept to the
required concept obvious. The student benefits in two ways. First, she obtains a better grasp of the domain, to which she can refer (implicitly or explicitly) on her own in the future. Second, she is pointed to the correct answer, which she can then derive herself. This derivation process, which we do not track but reinforce, is a strength of implicit learning, whose main characteristic is that it is learner-specific by nature. We call
the central concepts which facilitate such learning and the building of schemata around
them anchoring points. The anchoring points aim at promoting the acquisition of some
basic structure, called schema, which can be applied to different problem situations [13].
We define the following anchoring points: a domain relation, that is, a relation between
mathematical concepts; a domain object, that is, a mathematical entity, which is in the
focus of the current proof step; the inference rule that justifies the current proof step;
the substitution needed to apply the inference rule; the proof step as a whole, that is, the
premises, the conclusion and the applied inference rule.
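Continuing the illustrative sketch above, these anchoring points can be rendered as the values of the domain knowledge dimension (again, the names are ours, not the system's):

    from enum import Enum, auto

    class AnchoringPoint(Enum):
        """The anchoring points around which schemata are built."""
        DOMAIN_RELATION = auto()   # a relation between mathematical concepts
        DOMAIN_OBJECT = auto()     # a mathematical entity in focus of the current proof step
        INFERENCE_RULE = auto()    # the rule that justifies the current proof step
        SUBSTITUTION = auto()      # the substitution needed to apply the inference rule
        PROOF_STEP = auto()        # premises, conclusion and applied rule as a whole

    # Example: a conceptual hint that elicits (rather than gives away) the inference rule.
    example_category = HintCategory(
        domain_knowledge=AnchoringPoint.INFERENCE_RULE,
        inferential_role=InferentialRole.INFERENCE,
        elicitation_status=ElicitationStatus.ELICIT,
        perspective=ProblemReferentialPerspective.CONCEPTUAL,
    )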
3. Structuring the Domain
Our general evaluation of the student input relevant to the task, the domain contribution,
is defined based on the concept of expected proof steps, that is, valid proof steps accord-
ing to some formal proof. In order to avoid imposing a particular solution and to allow
the student to follow her preferred line of reasoning, we use the theorem prover ΩMEGA [9] to test whether the student's contribution matches an expected proof step. In this way, we try to allow for ways of learning that would otherwise be intractable.
By comparing the domain contribution with the expected proof step we first obtain
an overall assessment of the student input in terms of generic evaluation categories, such
as correct, wrong, and partially correct answers. Second, for the partially correct answers,
we track abstractly defined domain knowledge that is useful for tutoring in general and
applied in this domain. To this end, we defined a domain ontology of concepts, which can serve as anchoring points for learning to prove, or which reinforce the defined anchoring
points. Example concepts are the most relevant concept for an inference step, that is, the
major concept being manipulated, and its subordinate concept, that is, the second most
relevant concept. Both the domain contribution category and the domain ontology con-
stitute a basis for the choice of the hint category that assists the student at the particular
state in the proof and in the tutoring session according to a socratic teaching model [10].
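The evaluation described in this section can be sketched as follows. The matcher below is a stand-in for brevity (the actual system queries the theorem prover ΩMEGA), and the ontology fragment, rule names and matching heuristics are purely illustrative:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class EvaluationCategory(Enum):
        CORRECT = auto()
        PARTIALLY_CORRECT = auto()
        WRONG = auto()

    @dataclass(frozen=True)
    class ProofStep:
        premises: frozenset[str]
        conclusion: str
        inference_rule: str

    @dataclass(frozen=True)
    class OntologyEntry:
        most_relevant_concept: str  # the major concept manipulated by the inference step
        subordinate_concept: str    # the second most relevant concept

    # Hypothetical ontology fragment keyed by inference rule name.
    ONTOLOGY: dict[str, OntologyEntry] = {
        "DeMorgan-1": OntologyEntry("complement", "union"),
    }

    def evaluate(contribution: ProofStep,
                 expected: list[ProofStep]) -> tuple[EvaluationCategory, Optional[OntologyEntry]]:
        """Assess the domain contribution against the expected proof steps."""
        for step in expected:
            if contribution == step:
                return EvaluationCategory.CORRECT, ONTOLOGY.get(step.inference_rule)
            # Partial credit: the right rule or the right conclusion, but not a complete step.
            if (contribution.inference_rule == step.inference_rule
                    or contribution.conclusion == step.conclusion):
                return EvaluationCategory.PARTIALLY_CORRECT, ONTOLOGY.get(step.inference_rule)
        return EvaluationCategory.WRONG, None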
4. Using the Domain Ontology
Structured domain knowledge is crucial for the adaptivity of hinting. The role it plays
is twofold. First, it influences the choice of the appropriate hint category by a socratic
tutoring strategy [2]. Second, it determines the content of the hint to be generated.
The input to the socratic algorithm, which chooses the appropriate hint category
to be produced, is given by the so-called hinting session status (HSS), a collection of
parameters that cover the student modelling necessary for our purposes. The HSS is only
concerned with the current hinting session but not with inter-session modelling, and thus does not represent whether the student retains any domain knowledge between sessions. Special fields are defined for representing the domain knowledge that is pedagogically useful for inferring what the domain-related feedback to the student should be. These fields
help specify hinting situations, which are used by the socratic algorithm for choosing the
appropriate hint category to be produced.
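Since the exact fields of the HSS are not spelled out here, the following fragment, which continues the earlier sketches, only caricatures the idea of mapping a hinting situation to a hint category; every field, value and condition in it is an assumption:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class HintingSessionStatus:
        """Per-session student-modelling parameters; no inter-session modelling."""
        hints_given: int = 0
        last_evaluation: Optional[EvaluationCategory] = None
        addressed_anchoring_points: set[AnchoringPoint] = field(default_factory=set)
        ontology_entry: Optional[OntologyEntry] = None  # pedagogically useful domain knowledge

    def choose_hint_category(hss: HintingSessionStatus) -> HintCategory:
        """Caricature of choosing a hint category from a hinting situation."""
        if (hss.last_evaluation is EvaluationCategory.PARTIALLY_CORRECT
                and hss.hints_given == 0):
            # First hint for a partially correct answer: elicit a domain relation conceptually.
            return HintCategory(AnchoringPoint.DOMAIN_RELATION, InferentialRole.CONTROL,
                                ElicitationStatus.ELICIT,
                                ProblemReferentialPerspective.CONCEPTUAL)
        # Otherwise give away part of the inference rule from a functional perspective.
        return HintCategory(AnchoringPoint.INFERENCE_RULE, InferentialRole.INFERENCE,
                            ElicitationStatus.GIVE_AWAY_PARTIAL,
                            ProblemReferentialPerspective.FUNCTIONAL)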
Once the hint category has been chosen, the domain knowledge is used again to in-
stantiate the category, yielding a hint specification. Each hint category is defined based
on generic descriptions of domain objects or relations, that is, the anchoring points. The
role of the ontology is to assist the domain knowledge module (where the proof is rep-
resented) with the mapping of the generic descriptions onto the actual objects or relations
that are used in the particular context, that is, in the particular proof and the proof step.
For example, to realise a hint that gives away the subordinate concept, the generator needs
to know what the subordinate concept for the proof step and the inference rule at hand
is. This mapping is the first step towards the necessary hint specifications. The second step is to specify for every hint category the exact domain information that it needs to mention.
This is done by the further inclusion of information that is not the central point of the
particular hint, but is needed for its realisation in NL. Such information may be, for in-
stance, the inference rule, its NL name and the formula which represents it, or a new
hypothesis needed for the proof step. These are not themselves anchoring points, but
specify the anchoring point for the particular domain and the hint category. They thus make a well-rounded hint realisation possible, together with information on the other aspects of a hint, which are captured in the other dimensions of the hint taxonomy. The final addition of the pedagogically motivated feedback chosen by the tutorial manager via
discourse structure and dialogue modelling aspects completes the information needed by
the generator.
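As a final illustration, the sketch below (continuing the earlier fragments, with all names hypothetical) maps the chosen category's generic anchoring point onto the concrete objects of the current proof step and adds the surface-level information the generator needs:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HintSpecification:
        """Input to the NL generator: the category plus its concrete domain content."""
        category: HintCategory
        anchoring_content: str                        # e.g. the concrete subordinate concept
        inference_rule_name: Optional[str] = None     # NL name of the inference rule
        inference_rule_formula: Optional[str] = None  # formula representing the rule
        new_hypothesis: Optional[str] = None          # a new hypothesis needed for the step

    def instantiate(category: HintCategory, step: ProofStep,
                    ontology_entry: OntologyEntry) -> HintSpecification:
        """Map the category's generic description onto the current proof step."""
        if category.domain_knowledge is AnchoringPoint.DOMAIN_RELATION:
            content = ontology_entry.subordinate_concept
        elif category.domain_knowledge is AnchoringPoint.INFERENCE_RULE:
            content = step.inference_rule
        else:
            content = step.conclusion
        return HintSpecification(
            category=category,
            anchoring_content=content,
            inference_rule_name=step.inference_rule,
            inference_rule_formula=None,  # would come from the domain knowledge module
        )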
References
[1] C. Benzmüller et al. Tutorial dialogs on mathematical proofs. In Proceedings IJCAI
Workshop on Knowledge Representation and Automated Reasoning for E-Learning Systems,
pp. 12–22, Acapulco, 2003.
[2] A. Fiedler and D. Tsovaltzi. Automating hinting in an intelligent tutorial system. In Pro-
ceedings IJCAI Workshop on Knowledge Representation and Automated Reasoning for E-
Learning Systems, pp. 23–35, Acapulco, 2003.
[3] G. Hume et al. Student responses and follow up tutorial tactics in an ITS. In Proceedings 9th
Florida Artificial Intelligence Research Symposium, pp. 168–172, Key West, FL, 1996.
[4] E. Lim and D. Moore. Problem solving in geometry: Comparing the effects of non-goal
specific instruction and conventional worked examples. Journal of Educational Psychology,
22(5):591–612, 2002.
[5] N. Matsuda and K. VanLehn. Modelling hinting strategies for geometry theorem proving. In
Proceedings 9th International Conference on User Modeling, Pittsburgh, PA, 2003.
[6] J. Moore. What makes human explanations effective? In Proceedings 15th Annual Meeting
of the Cognitive Science Society, Hillsdale, NJ, 1993.
[7] N. Person et al. Dialog move generation and conversation management in AutoTutor. In
C. Rosé and R. Freedman, eds., Building Dialog Systems for Tutorial Applications—Papers
from the AAAI Fall Symposium, pp. 45–51, North Falmouth, MA, 2000. AAAI press.
[8] C. Rosé et al. A comparative evaluation of socratic versus didactic tutoring. In J. Moore
and K. Stenning, eds., Proceedings 23rd Annual Conference of the Cognitive Science Society,
University of Edinburgh, Scotland, UK, 2001.
[9] J. Siekmann et al. Proof development with ΩMEGA. In A. Voronkov, ed., Automated Deduc-
tion — CADE-18, number 2392 in LNAI, pp. 144–149. Springer, 2002.
[10] D. Tsovaltzi et al. A multi-dimensional taxonomy for automating hinting. In Intelligent
Tutoring Systems — 6th International Conference, ITS 2004, LNCS. Springer, 2004.
[11] D. Tsovaltzi and E. Karagjosova. A dialogue move taxonomy for tutorial dialogues. In
Proceedings 5th SIGdial Workshop on Discourse and Dialogue, Boston, USA, 2004.
[12] B. Weiner. Human Motivation: Metaphors, Theories, and Research. Sage Publications, 1992.
[13] B. Wilson and P. Cole. Cognitive teaching models. In D. Jonassen, ed., Handbook of Research
for Educational Communications and Technology. Macmillan, 1996.