Domain-Knowledge Manipulation for
Dialogue-Adaptive Hinting
Armin Fiedler and Dimitra Tsovaltzi
Department of Computer Science, Saarland University,
P.O. Box 15 11 50, D-66041 Saarbrücken, Germany.
1. Introduction
Empirical evidence has shown that natural language (NL) dialogue capabilities are a crucial factor in making human explanations effective [6]. Moreover, the use of teaching strategies is an important ingredient of intelligent tutoring systems. Such strategies, normally called dialectic or socratic, have been demonstrated to be superior to pure explanations, especially with regard to their long-term effects [8]. Consequently, an increasing though still limited number of state-of-the-art tutoring systems use NL interaction and automatic teaching strategies, including some notion of hints (e.g., [3,7,5]). On the whole, these models of hints are somewhat limited in capturing their various underlying functions explicitly and in relating them to the domain knowledge dynamically.
Our approach is oriented towards integrating hinting in NL dialogue systems [11].
We investigate tutoring proofs in mathematics in a system where domain knowledge, di-
alogue capabilities, and tutorial phenomena can be clearly identified and intertwined for
the automation of tutoring [1]. We aim at modelling a socratic teaching strategy that allows us to manipulate aspects of learning within NL dialogue interaction, such as helping the student build a deeper understanding of the domain, reducing cognitive load, promoting schema acquisition, and managing motivation levels [13,4,12]. In contrast to most existing tutorial systems, we make use of a specialised domain reasoner [9]. This design enables detailed reasoning about the student's action and elaborate system feedback [2].
Our aim is to dynamically produce hints that fit the needs of the student with regard to the particular proof. Thus, we cannot restrict ourselves to a repertoire of static hints that associates a student answer with a particular response by the system. We developed a multi-dimensional hint taxonomy where each dimension defines a decision point for the associated cognitive function [10]. The domain knowledge can be structured and manipulated for tutoring decision purposes and generation considerations within a tutorial manager. Hint categories abstract from the specific domain information and the way it is used in the tutoring, so that the domain can be replaced by other domains. Thus, the core of the tutorial manager, namely the teaching strategy and the pedagogical considerations, can be retained for different domains. More importantly, the discourse management aspects of the dialogue manager can be manipulated independently.
2. Hint Dimensions
Our hint taxonomy [10] was derived with regard to the underlying function of a hint, which can be common to different NL realisations. This function is mainly responsible for the educational effect of hints. To capture all the functions of a hint, which ultimately aim at eliciting the relevant inference step in a given situation, we define four dimensions of hints: The domain knowledge dimension captures the needs of the domain, distinguishing different anchoring points for skill acquisition in problem solving. The inferential role dimension captures whether the anchoring points are addressed from the inference per se, or through some control on top of it for conceptual hints. The elicitation status dimension distinguishes between information being elicited and degrees to which information is provided. The problem referential perspective dimension distinguishes between views on discovering an inference (i.e., conceptual, functional and pragmatic).
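To make the taxonomy more concrete, the following sketch encodes a hint category as a choice along these dimensions. The identifiers and value sets are ours and merely illustrative; they do not reproduce the exact labels of the taxonomy in [10]. The domain knowledge dimension ranges over the anchoring points introduced in the next paragraph, so it is kept as a plain string here.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encodings of three of the four dimensions (labels are ours).
class InferentialRole(Enum):
    INFERENTIAL = "addresses the inference per se"
    CONCEPTUAL = "control knowledge on top of the inference"

class ElicitationStatus(Enum):
    ELICIT = "elicit the information from the student"
    GIVE_AWAY = "provide (some degree of) the information"

class ProblemPerspective(Enum):
    CONCEPTUAL = "conceptual view on discovering the inference"
    FUNCTIONAL = "functional view"
    PRAGMATIC = "pragmatic view"

@dataclass(frozen=True)
class HintCategory:
    """A hint category is a point in the multi-dimensional taxonomy."""
    domain_knowledge: str          # an anchoring point; see the next paragraph
    inferential_role: InferentialRole
    elicitation_status: ElicitationStatus
    perspective: ProblemPerspective

# Example: a hint that gives away the inference rule from a conceptual perspective.
give_away_rule = HintCategory("inference-rule",
                              InferentialRole.INFERENTIAL,
                              ElicitationStatus.GIVE_AWAY,
                              ProblemPerspective.CONCEPTUAL)
```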
In our domain, we defined the inter-relations between mathematical concepts as well as between concepts and inference rules, which are used in proving [2]. These concepts and relations can be used in tutoring by making the relation of the concept the student used to the required concept obvious. The student benefits in two ways. First, she obtains a better grasp of the domain, which she can later refer to (implicitly or explicitly) on her own. Second, she is pointed to the correct answer, which she can then derive herself. This derivation process, which we do not track but reinforce, is a strong point of implicit learning, whose main characteristic is that it is learner-specific by nature. We call the central concepts that facilitate such learning and the building of schemata around them anchoring points. The anchoring points aim at promoting the acquisition of a basic structure, called a schema, which can be applied to different problem situations [13]. We define the following anchoring points: a domain relation, that is, a relation between mathematical concepts; a domain object, that is, a mathematical entity in the focus of the current proof step; the inference rule that justifies the current proof step; the substitution needed to apply the inference rule; and the proof step as a whole, that is, the premises, the conclusion and the applied inference rule.
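As a rough illustration of how these anchoring points and the concept relations mentioned above might be represented, consider the following sketch; the concept names, relations, and helper function are hypothetical and not taken from the system's actual ontology.

```python
from enum import Enum

class AnchoringPoint(Enum):
    """The anchoring points defined above (labels are ours)."""
    DOMAIN_RELATION = "relation between mathematical concepts"
    DOMAIN_OBJECT = "mathematical entity in the focus of the current proof step"
    INFERENCE_RULE = "inference rule justifying the current proof step"
    SUBSTITUTION = "substitution needed to apply the inference rule"
    PROOF_STEP = "premises, conclusion and applied inference rule as a whole"

# Hypothetical fragment of the inter-relations between concepts and inference
# rules used in proving; the entries are illustrative only.
CONCEPT_RELATIONS = {
    ("subset", "element"): "defined-in-terms-of",
    ("subset", "Definition of Subset"): "justified-by",
    ("powerset", "subset"): "defined-in-terms-of",
}

def related_concepts(concept: str) -> list[tuple[str, str]]:
    """Return all (related concept or rule, relation) pairs for a concept."""
    return [(b, rel) for (a, b), rel in CONCEPT_RELATIONS.items() if a == concept]

print(related_concepts("subset"))
# [('element', 'defined-in-terms-of'), ('Definition of Subset', 'justified-by')]
```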
3. Structuring the Domain
Our general evaluation of the student input relevant to the task, the domain contribution, is defined based on the concept of expected proof steps, that is, valid proof steps according to some formal proof. In order to avoid imposing a particular solution and to allow the student to follow her preferred line of reasoning, we use the theorem prover ΩMEGA [9] to test whether the student's contribution matches an expected proof step. Thus, we try to allow for otherwise intractable ways of learning.
By comparing the domain contribution with the expected proof step we first obtain an overall assessment of the student input in terms of generic evaluation categories, such as correct, wrong, and partially correct answers. Second, for the partially correct answers, we track abstractly defined domain knowledge that is useful for tutoring in general and applied in this domain. To this end, we defined a domain ontology of concepts, which can serve as anchoring points for learning proving, or which reinforce the defined anchoring points. Example concepts are the most relevant concept for an inference step, that is, the major concept being manipulated, and its subordinate concept, that is, the second most relevant concept. Both the domain contribution category and the domain ontology constitute a basis for the choice of the hint category that assists the student at the particular state in the proof and in the tutoring session, according to a socratic teaching model [10].
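A minimal sketch of this evaluation step is given below, assuming the expected proof steps have already been obtained from the prover; the matching logic and the lookup table standing in for the domain ontology are deliberately simplified, and all names are ours.

```python
from dataclasses import dataclass

# Hypothetical lookup table standing in for the domain ontology: it maps an
# inference rule to its most relevant and its subordinate concept.
RULE_CONCEPTS = {
    "Definition of Subset": ("subset", "element"),
}

@dataclass(frozen=True)
class ProofStep:
    premises: frozenset       # formulas, kept as strings here
    conclusion: str
    rule: str                 # justifying inference rule

def evaluate_contribution(contribution: ProofStep,
                          expected: list[ProofStep]) -> dict:
    """Classify the student's domain contribution against the expected proof
    steps and, for partially correct answers, record the ontology concepts
    that later inform the choice of hint (matching is deliberately simplified)."""
    if contribution in expected:
        return {"category": "correct", "matched": contribution}
    for step in expected:
        # Partially correct: right conclusion, but wrong or incomplete justification.
        if contribution.conclusion == step.conclusion:
            major, subordinate = RULE_CONCEPTS.get(step.rule, (None, None))
            return {"category": "partially-correct", "matched": step,
                    "most_relevant_concept": major,
                    "subordinate_concept": subordinate}
    return {"category": "wrong", "matched": None}
```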
4. Using the Domain Ontology
Structured domain knowledge is crucial for the adaptivity of hinting. The role it plays
is twofold. First, it influences the choice of the appropriate hint category by a socratic
tutoring strategy [2]. Second, it determines the content of the hint to be generated.
The input to the socratic algorithm, which chooses the appropriate hint category to be produced, is given by the so-called hinting session status (HSS), a collection of parameters that covers the student modelling necessary for our purposes. The HSS is only concerned with the current hinting session and not with inter-session modelling, and thus does not represent whether the student recalls any domain knowledge between sessions. Special fields are defined for representing the domain knowledge which is pedagogically useful for drawing inferences about what the domain-related feedback to the student should be. These fields help specify hinting situations, which the socratic algorithm uses to choose the appropriate hint category.
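The following sketch illustrates one possible shape of the HSS and of a mapping from hinting situations to hint categories; the fields and decision rules are invented for illustration and are far simpler than the socratic algorithm of [2,10].

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HintingSessionStatus:
    """Per-session student modelling used to choose the next hint category.
    The fields are illustrative; the real HSS covers more parameters."""
    hints_given: int = 0
    last_category: str = ""
    evaluation: str = "wrong"              # correct / partially-correct / wrong
    most_relevant_concept: Optional[str] = None
    subordinate_concept: Optional[str] = None

def choose_hint_category(hss: HintingSessionStatus) -> str:
    """Toy stand-in for the socratic algorithm: map the hinting situation,
    as described by the HSS, to a hint category name."""
    if hss.evaluation == "partially-correct" and hss.subordinate_concept:
        # Address the missing piece of domain knowledge first.
        return ("elicit-subordinate-concept" if hss.hints_given < 2
                else "give-away-subordinate-concept")
    if hss.evaluation == "wrong":
        return ("elicit-domain-relation" if hss.hints_given < 3
                else "give-away-inference-rule")
    return "no-hint"
```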
Once the hint category has been chosen, the domain knowledge is used again to instantiate the category, yielding a hint specification. Each hint category is defined based on generic descriptions of domain objects or relations, that is, the anchoring points. The role of the ontology is to assist the domain knowledge module (where the proof is represented) in mapping these generic descriptions onto the actual objects or relations used in the particular context, that is, in the particular proof and proof step. For example, to realise a hint that gives away the subordinate concept, the generator needs to know what the subordinate concept for the proof step and the inference rule at hand is. This mapping is the first step towards the necessary hint specification. The second step is to specify, for every hint category, the exact domain information that it needs to mention. This is done by including further information that is not the central point of the particular hint, but is needed for its realisation in NL. Such information may be, for instance, the inference rule, its NL name and the formula which represents it, or a new hypothesis needed for the proof step. These are not themselves anchoring points, but specify the anchoring point for the particular domain and hint category. They thus make possible a well-rounded hint realisation, together with the information on the other aspects of a hint captured in the other dimensions of the hint taxonomy. Finally, the pedagogically motivated feedback chosen by the tutorial manager, via discourse structure and dialogue modelling aspects, completes the information needed by the generator.
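A rough sketch of this instantiation step is given below, assuming the mapping from generic descriptions to the concrete proof context is provided by the domain knowledge module; the function, field names, and example content are hypothetical.

```python
def instantiate_hint(category: str, proof_context: dict) -> dict:
    """Fill a chosen hint category with the concrete domain content needed
    by the generator: the anchoring point itself plus auxiliary information
    (rule name, formula, new hypothesis) required for its NL realisation."""
    spec = {"category": category}
    if "subordinate-concept" in category:
        spec["anchor"] = proof_context["subordinate_concept"]
    elif "inference-rule" in category:
        spec["anchor"] = proof_context["rule"]
    # Auxiliary, non-anchoring information needed for the NL realisation.
    spec["rule_name"] = proof_context.get("rule_nl_name")
    spec["rule_formula"] = proof_context.get("rule_formula")
    spec["hypothesis"] = proof_context.get("new_hypothesis")
    return spec

# Hypothetical usage: a hint giving away the subordinate concept of a step
# justified by the definition of subset.
context = {"subordinate_concept": "element",
           "rule": "DefSubset",
           "rule_nl_name": "the definition of subset",
           "rule_formula": "A ⊆ B iff ∀x. x ∈ A → x ∈ B",
           "new_hypothesis": None}
print(instantiate_hint("give-away-subordinate-concept", context))
```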
References
[1] C. Benzmüller et al. Tutorial dialogs on mathematical proofs. In Proceedings IJCAI
Workshop on Knowledge Representation and Automated Reasoning for E-Learning Systems,
pp. 12–22, Acapulco, 2003.
[2] A. Fiedler and D. Tsovaltzi. Automating hinting in an intelligent tutorial system. In Pro-
ceedings IJCAI Workshop on Knowledge Representation and Automated Reasoning for E-
Learning Systems, pp. 23–35, Acapulco, 2003.
[3] G. Hume et al. Student responses and follow up tutorial tactics in an ITS. In Proceedings 9th
Florida Artificial Intelligence Research Symposium, pp. 168–172, Key West, FL, 1996.
[4] E. Lim and D. Moore. Problem solving in geometry: Comparing the effects of non-goal
specific instruction and conventional worked examples. Journal of Educational Psychology,
22(5):591–612, 2002.
[5] N. Matsuda and K. VanLehn. Modelling hinting strategies for geometry theorem proving. In
Proceedings 9th International Conference on User Modeling, Pittsburgh, PA, 2003.
[6] J. Moore. What makes human explanations effective? In Proceedings 15th Annual Meeting
of the Cognitive Science Society, Hillsdale, NJ, 1993.
[7] N. Person et al. Dialog move generation and conversation management in AutoTutor. In
C. Rosé and R. Freedman, eds., Building Dialog Systems for Tutorial Applications—Papers
from the AAAI Fall Symposium, pp. 45–51, North Falmouth, MA, 2000. AAAI press.
[8] C. Rosé et al. A comparative evaluation of socratic versus didactic tutoring. In J. Moore
and K. Stenning, eds., Proceedings 23rd Annual Conference of the Cognitive Science Society,
University of Edinburgh, Scotland, UK, 2001.
[9] J. Siekmann et al. Proof development with ΩMEGA. In A. Voronkov, ed., Automated Deduction — CADE-18, number 2392 in LNAI, pp. 144–149. Springer, 2002.
[10] D. Tsovaltzi et al. A Multi-Dimensional Taxonomy for Automating Hinting. In Intelligent
Tutoring Systems — 6th International Conference, ITS 2004, LNCS. Springer, 2004.
[11] D. Tsovaltzi and E. Karagjosova. A dialogue move taxonomy for tutorial dialogues. In
Proceedings 5th SIGdial Workshop on Discourse and Dialogue, Boston, USA, 2004.
[12] B. Weiner. Human Motivation: Metaphors, Theories, and Research. Sage Publications, 1992.
[13] B. Wilson and P. Cole. Cognitive teaching models. In D. Jonassen, ed., Handbook of Research for Educational Communications and Technology. Macmillan, 1996.