Formalising Hinting in Tutorial Dialogues
Dimitra Tsovaltzi
Computational Linguistics
University of Saarland
Germany
Colin Matheson
Language Technology Group
University of Edinburgh
Scotland
Abstract
The formalisation of the hinting process in
Tutorial Dialogues is undertaken in order to
simulate the Socratic teaching method. An
adaptation of the BE&E annotation scheme for
Dialogue Moves, based on the theory of so-
cial obligations, is sketched, and a taxon-
omy of hints and a selection algorithm is sug-
gested based on data from the BE&E corpus.
Both the algorithm and the tutor’s reasoning
are formalised in the context of the informa-
tion state theory of dialogue management de-
veloped on the trindi project. The algorithm is
characterised using update rules which take
into account the student model, and the tutor’s
reasoning process is described in terms of con-
text accommodation.
1 Theoretical Background
1.1 Information State Update
The formalisation of the hinting process uses
the Information State (IS) approach, in which
dialogue is seen in terms of the information
that each participant has at every point in the
interaction. The formal account thus accords
with the proposals put forward on the trindi
project (the Telematics Applications Programme, Language Engineering Project LE4-8314; see Bohlin et al. (1999) and Larsson et al. (1999)).
The IS representation consists of fields con-
taining different kinds of information about the
dialogue. This information is updated after
each new utterance using update rules of var-
ious types. The latter include details of how
each field, or information attribute, in the IS
should be updated, if at all. (Figure 2 contains an IS with relevant fields, and a commented instance of an update rule can be found in example (4) below.) The background
theories that inform the IS update formalisa-
tion assumed here are the obligation-driven ap-
proaches discussed in Matheson et al. (2000)
and Traum and Allen (1994), and Context Ac-
commodation as described in Cooper et al.
(2000), Kreutel and Matheson (2000), and Lars-
son et al. (2000).
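To make this concrete, the following minimal Python sketch (ours, not the trindi or TrindiKit implementation) shows an information state with named fields and a rule engine that applies condition/effect update rules after each utterance; the field names anticipate those used in Figure 2, and the acknowledgement rule is purely illustrative.

# Minimal illustration of information-state update (not the TrindiKit API).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class InfoState:
    agenda: List[str] = field(default_factory=list)  # steps to be realised next
    obl: List[str] = field(default_factory=list)     # pending discourse obligations
    int_: List[str] = field(default_factory=list)    # intentions for the next turn
    lm: List[str] = field(default_factory=list)      # latest move
    dh: List[str] = field(default_factory=list)      # dialogue history

@dataclass
class UpdateRule:
    name: str
    cond: Callable[[InfoState], bool]     # preconditions on the information state
    effect: Callable[[InfoState], None]   # how the state is updated when they hold

def update(state: InfoState, rules: List[UpdateRule]) -> None:
    """Apply every rule whose conditions hold after a new utterance."""
    for rule in rules:
        if rule.cond(state):
            rule.effect(state)

# Illustrative rule: whatever the latest student move is, the tutor
# acquires the obligation to acknowledge it.
ack_rule = UpdateRule(
    name="acknowledge latest move",
    cond=lambda s: bool(s.lm),
    effect=lambda s: s.obl.append("ack(T)"),
)

state = InfoState(lm=["info_req(S)"])
update(state, [ack_rule])
print(state.obl)   # ['ack(T)']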
1.2 Discourse Obligations
The theory of obligations introduces the notion
of discourse and social obligation as a way of
analysing some of the social aspects of interac-
tions and provides an explanation for behaviour
that other theories do not predict. It is an aug-
mentation to the representation of the inten-
tions of dialogue participants that attempts to
capture the natural flow of conversation. There
are different genres of obligation-based research,
as evidenced by the above references.
Treating tutorial dialogues in terms of obli-
gations is an intuitive way of analysing and
predicting some specific kinds of dialogue be-
haviour that do not seem to follow the rules
of everyday discourse. For example, applying
the Shared Plans theory (see Rich and Sid-
ner (1998)) to tutorial dialogues would soon
prove problematic since the tutor does not fol-
low the principle of co-operativity which is cen-
tral to the theory. A prerequisite for achieving
goals, according to Shared Plans, is that the di-
alogue participants should be as clear as they
can about their beliefs and plans, when the one
thing that the tutor avoids doing in the hinting
process is exactly that. For example, Shared-
Plans do not explain why the tutor in utterance
T[2] in example 1 below does not just tell the
student what a sine-wave is, which she could
easily do.
According to the obligation-driven frame-
work, on the other hand, intentions are neces-
sary but not the only driving force behind an
utterance. For example, only if one considers it
the student’s obligation to address the tutor’s
questions and follow her directives can one in-
terpret the total lack of overt signals from the
student that he intends to cooperate, such sig-
nals being normally central to collaboration. In
the context of the overall obligations the tutor
knows what the student’s intentions are, and
will be able to interpret his actions correctly
because the tutorial dialogue genre will not per-
mit any other behaviour, since it is the student’s
obligation to follow the tutor’s directives.
2 Obligations and Dialogue Moves
2.1 Dialogue Moves in the BE&E
Corpus
An informal analysis of the dialogue moves
that are observed in the BE&E corpus has
been sketched taking into account considera-
tions based on obligation theory as outlined
above. Moves that do not contain any par-
ticular interest for the representation of obli-
gations have generally been adopted from the
BE&E annotation scheme. The BE&E (Basic
Electricity and Electronics) corpus consists of
recorded human-to-human dialogues that were
carried out via a computer keyboard. The stu-
dent performs actions in a lab, represented by a
GUI, towards a specific target such as measur-
ing current. The tutor observes the actions and
intervenes when necessary, or when the student
asks her to (see also http://www.hcrc.ed.ac.uk/jmoore/tutoring/BEE corpus.html). The move hint is given here as an example for obvious reasons.
2.2 The Hint Move
It is clear from the corpus that the student
is obliged to address the tutor’s utterances,
whereas the tutor hardly ever answers questions
directly, contrary to the norm outside the genre.
Instead, she gives hints which withhold the an-
swer. A hint is a tutorial strategy that aims
to encourage active learning. It can take the
form of eliciting information that the student
is unable to access without the aid of prompts,
or information which he can access but whose
relevance he is unaware of with respect to the
problem at hand. Alternatively, a hint can point
to an inference that the student is expected
to make based on knowledge available to him,
which helps the general reasoning needed to deal
with a problem (Hume et al., 1996).
Partial answers from the tutor discharge the
obligation to address the student’s questions.
There are examples in the BE&E corpus where
the tutor explicitly states her method or the stu-
dent shows he is aware of it, such as “Very good.
You answered your own question” or “I’ll give you another hint”, from the tutor, and “I need another hint”, from the student. The initiation
of hints can be due to various reasons:
(i) the tutor observes that the student is not
making any progress in the task, or that he
is taking steps in the wrong direction
(ii) the student asks a question and the tutor
does not want to answer it directly
(iii) the student gives the wrong answer or asks
the wrong question in response to a tutor’s
question
Figure 1 illustrates some of the points briefly
discussed above. The tutor gives a few hints
to try to make the student follow her reason-
ing (in T[2], T[4], T[6], and T[7]). She realises,
however, that the student does not remember
the lesson very well; he is so bad at interpreting
her hints that she is forced to give explanations
about basic concepts (T[9]). She therefore asks
him to read the lesson again (T[11]), not want-
ing to just give the answers away.
3 The Hinting Strategies
Hint is one of the moves that appear in tu-
torial dialogues. Although the surface struc-
ture of hints is heterogeneous, there appears to
be an underlying structure common to differ-
ent categories of hints, and we undertook the
formalisation based on these perceived regular-
ities. The taxonomy of strategies formalising
the hinting process in our model entails the fol-
lowing hints: Pragmatic Hint, Relevant Con-
cept, Following Step, Speak to Answer, Encour-
age, Rephrase Question, Logical Relation, Move
Backwards, More General Concept, Spell Out
the Task, Explain Fundamental Principles of
Domain, Narrow Down Choices, Explain, and
Point to Lesson. The names are intended to
be as descriptive of the content as possible, and
should in some cases be self-explanatory. Some
S[1]:  I have no idea what a sinewave is. Was this covered in the tutorial?
T[2]:  Yes, remember the wave that represented alternating current in the lesson?
S[3]:  I think i remember it being represented as a on the ammeter control panel
T[4]:  OK, that’s true about the multimeter’s function dial. But do you remember
S[5]:  Nope
T[6]:  a graph of a wave in the lesson that represented alternating current?
       (20 sec later)
T[7]:  Do you remember reading about frequency and amplitude and all that?
S[8]:  I’m not sure. Is this a trick question to see if you can get me to invent a memory?
T[9]:  No, this is not a psychology experiment. :) I’m just trying to see how much you remember. A sinewave starts out at 0 and increases to the maximum amplitude then decreases past 0 in the negative direction and then returns to 0 again. Does any of this ring a bell?
S[10]: It really doesn’t. But I think I’m following your explanation.
T[11]: Go ahead and reread the lesson.

Figure 1: Example tutorial dialogue
of the strategies are not real hints (for exam-
ple Point to the Lesson), but they have been
included in the taxonomy because they are part
of the general hinting process. Although the
strategies were derived by looking at data from
a specific domain, namely Basic Electricity and
Electronics (BE&E), the aim was to abstract
from the particular characteristics of that do-
main and produce a domain-independent tax-
onomy, to the extent that this is possible.
The hinting strategy can be realised either by
giving a small clue which the tutor deems nec-
essary to initiate the desired reasoning process,
or by eliciting the clue itself, that is by prompt-
ing the student to produce it. These bottom-
level moves are called informs and t-elicits
respectively.
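As an illustration only (this data structure is ours, not part of the corpus annotation), a strategy such as relevant concept could be stored with its two possible bottom-level realisations; the example content is taken from example (1) below.

# Illustrative encoding of one hinting strategy and its two realisations:
# an inform gives the clue away, a t-elicit prompts the student to produce it.
from dataclasses import dataclass

@dataclass
class HintRealisation:
    strategy: str   # e.g. "relevant concept"
    move: str       # "inform" or "t_elicit"
    content: str    # the clue itself, or the prompt that elicits it

inform = HintRealisation("relevant concept", "inform",
                         "A miliampmeter is a special case of an ammeter.")
t_elicit = HintRealisation("relevant concept", "t_elicit",
                           "Do you remember what an ammeter measures?")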
3.1 An Example of a Hinting Strategy
Example (1) below contains an instance of the
hint relevant concept. In employing this
strategy the tutor points to concepts that are
relevant to the current problem in order to trig-
ger the information in memory which is re-
quired, or in order to correct the student’s er-
roneous reasoning. This can be done by asking
a question the answer to which is the relevant
concept, or the answer to which is only part of
the concept, by asking about the relevant con-
cept or by simply mentioning it. The relation
of the concept pointed to and the one at hand,
whether they are opposite, similar, related to
the problem in a similar way, and so on, is made
explicit:
(1)
T[1]: ...What are the instructions asking you to do?
S[2]: Make a circuit between the source(battery) and the resistance (rheostat) and then attach the miliampmeter to measure the resistance in ohms.
T[3]: OK, you’re close. But keep in mind that a miliampmeter is a special case of an ammeter. Do you remember what an ammeter measures?
S[4]: Amps?
T[5]: Right, very good...
In example (1) the tutor is trying to make
the student see that the meter does not mea-
sure Ohms but Amperes. In utterance T[3]
she brings up the notion of ammeter, which
also measures Amperes. This strategy is anal-
ysed here as an example of a relevant concept
style of hint (formalised in (2) below). She
hopes that the student will remember this and
will infer that the meter measures Amperes and,
thus, current. The ultimate aim of course is
that the student will realise that he must use
the ohmmeter in order to measure resistance in
Ohms, which is the problem at hand. The stu-
dent follows the hint, as indicated by S[4].
4 Modelling the Hinting Process
In order to model the hinting process an algo-
rithm was derived based on examples from the BE&E corpus. It takes into account the cur-
rent student answer and the number of wrong
answers encountered so far in the dialogue as
a means of deciding on the student model and
on which hint to generate. This gives empha-
sis to the local student model which has been
found to be more important than the global
one (Freedman et al., 1998). Six categories of student answers, classified according to their performance, were judged necessary based on the data: cor-
rect, near miss, partially correct (only part of
the correct answer was given), grain of truth
(there is an indication of some understanding of
the problem but the answer is wrong), wrong,
misconception (typically confused concepts and
suchlike).
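For illustration, the six categories might be encoded as follows (a sketch under our own naming; the comment on the wrong category follows the classification noted just before rule (2) below).

# Illustrative encoding of the six answer categories used by the algorithm.
from enum import Enum

class AnswerCategory(Enum):
    CORRECT = "correct"
    NEAR_MISS = "near miss"
    PARTIALLY_CORRECT = "partially correct"  # only part of the correct answer given
    GRAIN_OF_TRUTH = "grain of truth"        # some understanding shown, but the answer is wrong
    WRONG = "wrong"                          # also covers wrong task steps and no answer at all
    MISCONCEPTION = "misconception"          # typically confused concepts and suchlike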
The initiation of the hinting process, for ex-
ample when the student performs a wrong ac-
tion in the lab, and the conclusion, perhaps
when he does not need more help, were also
modelled by the algorithm. The algorithm itself
was modelled using update rules in the trindi
format. Section 4.1 includes an example of
an update rule whose preconditions and effects
were derived based on the algorithm.
The algorithm is a comprehensive guide and
we do not claim that it accurately predicts all
the dialogues in the corpus. This inaccuracy
is largely due to the inconsistency observed in
the human tutor behaviour, which is not ped-
agogically justifiable. We have tried to over-
come these inconsistencies by normalising the
behaviour based on the theory behind the So-
cratic method, and hence sacrificing flexibility.
Nevertheless, we have no evidence as to whether the consistency of an effective method or flexibility is preferable.
The algorithm takes into account the fact
that every time there is a new answer from
the student the local student model, the perfor-
mance in one hinting session, changes. The tu-
tor’s hint reveals as much information as needed
based on the understanding demonstrated by
the current answer. So, when the hinting ses-
sion has just started it is easier to terminate
it, if the student seems to follow. However, the hinting strategies used at this stage are not very revealing.
The tutor will start the hinting session when the
student makes a mistake. If she is not certain of the reason behind the mistake, she will perform a check for the origin of the mistake. At this
level, if the answer is correct, the hinting pro-
cess will end. If not, the tutor will choose the
appropriate hint according to the type of an-
swer the student gives. All hints at this second
level are more informative, since the student has
already been given a less informative hint and
couldn’t follow it. After the second hint, the
hinting process will continue even if the student
gives a correct answer as the tutor is reluctant
to let the student carry on by himself. Perfor-
mance so far has not been good, so she wants
to guide the student and make sure he under-
stands the whole task. The hints are yet more
helpful.
After three hints in a row, the tutor will re-
quire only correct answers in order to continue
hinting. In any other case the student model is
too bad, since by now the hints are quite reveal-
ing and are still not being followed. In order
both to avoid frustration on the student’s part,
and to preserve the effectiveness of the Socratic
method, the student is given a brief explanation.
After the explanation, the tutor will check for
understanding again. If a correct answer is still
not forthcoming, the student will be referred to
the study material, which he obviously has not
read properly. The aim is not to teach the ma-
terial, but to help the student to assimilate it,
once read, and to be able to apply it.
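The control flow just described can be summarised in the following sketch (illustrative Python rather than the implemented update rules); the helpers ask, choose_hint, explain and point_to_lesson are hypothetical stand-ins for tutor moves, and completion of the task, which also ends a session, is not modelled.

# A sketch of the hinting control flow; hints become more revealing with
# each level, and the thresholds follow the description in the text.
def hinting_session(ask, choose_hint, explain, point_to_lesson):
    hints_given = 0
    answer = ask("check for the origin of the mistake")
    while True:
        if answer == "correct" and hints_given < 2:
            # Early in the session a correct answer ends the hinting.
            return
        if answer != "correct" and hints_given >= 3:
            # The hints are already very revealing: explain the substep,
            # re-check, and refer the student to the lesson if necessary.
            explain()
            if ask("check for understanding") != "correct":
                point_to_lesson()
            return
        # Otherwise keep hinting; the hint chosen depends on the category
        # of the current answer and on how many hints were already given.
        hint = choose_hint(answer, level=hints_given + 1)
        hints_given += 1
        answer = ask(hint)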
4.1 The IS Update Rules: The Basics
of the Formalisation
The formalisation of the hinting process fol-
lows the approach outlined in Matheson et al.
(2000) in using detailed representations of in-
formation states and in handling the partici-
pants’ deliberations using update rules some of
which are specific to the particular genre of di-
alogue. Thus some of the rules contain condi-
tions and effects specific to the hinting process
in tutorial dialogues. The conditions must be
satisfied in order for the IS to be updated, and
the kind of update that will take place is de-
fined by the effects. Example (2) contains an
update rule which models the circumstances in
which the hinting strategy relevant concept
should be generated (the notation used in the
rule is described in sections 5 and 6 below).
This strategy is exemplified in utterance T[3]
in (1) above. The student is responding to
a check for the origin of the mistake in
utterance T[1] in (1), but the answer given in
S[2] is wrong (three kinds of answers are classified as “wrong”: genuinely wrong answers, wrong steps in the task, and no answer at all). Therefore, the tutor points to a relevant concept.
(2)
name:   relevant concept
cond:   in(IS.DH, or_mistake(T))
        in(IS.LM, wrong_answer(S,DA))
effect: push(IS.INT, ack(T))
        push(IS.INT, rel_concept(T))
The conditions for the hinting formalisa-
tion presented here are the ones that must
hold according to the algorithm. Briefly,
the conditions state that the dialogue history contains a check for the origin of the mistake (or_mistake) and that the latest move is a wrong_answer. The effects are that the tutor has intentions to acknowledge the student’s utterance and to perform a relevant concept hint. The rule thus both implements the student model, determining whether a hint should be generated, and determines how informative the hint should be, via the effects. The type
of hint to be generated depends on the type of
answer elicited, and this is also represented in
the effects, as shown.
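For illustration, rule (2) could be rendered as a condition/effect pair over a dictionary-shaped information state (a sketch; the move labels are strings standing in for the dialogue acts).

# Rule (2) as an illustrative condition/effect pair over a simple dict IS.
def relevant_concept_cond(IS):
    return ("or_mistake(T)" in IS["DH"] and
            any(m.startswith("wrong_answer(S") for m in IS["LM"]))

def relevant_concept_effect(IS):
    IS["INT"].append("ack(T)")          # push(IS.INT, ack(T))
    IS["INT"].append("rel_concept(T)")  # push(IS.INT, rel_concept(T))

IS = {"DH": ["or_mistake(T)"], "LM": ["wrong_answer(S,DA)"], "INT": []}
if relevant_concept_cond(IS):
    relevant_concept_effect(IS)
print(IS["INT"])   # ['ack(T)', 'rel_concept(T)']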
5 The Tutor Model
The notion of context accommodation has
been used in the past to deal with issues such
as over-answering (see Cooper et al. (1999)),
Question Under Discussion (qud) (Ginzburg, 1996), and for incorporating different plans
(Cooper et al. (2000), Kreutel and Matheson
(2000), Larsson et al. (2000)). Here we sug-
gest the application of context accommodation
to the hinting process. In tutorial dialogues the
intelligent system can use context accommoda-
tion to decide whether the student is on the
right track, or if the hinting process needs to
start, by accommodating steps in a predefined
order from a given plan.
Three levels of planning are suggested here.
At the top level domain holds a set of plans for
all the lab tasks, each consisting of a set of sub-
plans, called preconditions, that the total plan is
broken down to (some of these plans already exist for the BE&E corpus using the same notation). These have to be realised for the goal of the total plan to be fulfilled. Every precondition in turn is broken down into decompositions which represent steps in the possible reasoning process towards achieving the precondition. Therefore, if there is more than one way
of reasoning, all the options should be included
in the database. The same goes for the pre-
conditions and the plans. The agenda holds
the subplan at hand, with its preconditions and
decompositions, as well as realisations (fixed ut-
terances) for the particular content of every de-
composition. In a system that provides a dy-
namic language generation module, the relevant
hint, the decomposition, and all discourse plan-
ning information can be passed on, allowing the
module to generate an appropriate phrase. A
plan of the kind described for the BE&E cor-
pus can be seen in (3). The decompositions (dec)
refer only to the last precondition (prec) here.
(3)
header: Voltage Lab: Measure Voltage(VDC)
prec:   Polarity observed
        Meter on while making measurement
        Meter set to VDC
        Both leads attached
        Meter attached
        Circuit not powered down
        Difference in charge between the two ends of the meter when attached to the circuit
dec1:   vol.difpot.meas
dec2:   vol.meas.so
dec4:   vol.meas.lo
dec5:   vol.difpot.so
dec6:   vol.difpot.lo
dec7:   sou.dif
dec8:   lo.dif
dec9:   sou.ex
dec10:  lo.ex
sub-effects:  Difference in charge between the two ends of the meter when attached to the circuit
total-effect: Voltage Lab: Measure Voltage(VDC)
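One way such a plan might be stored is sketched below (illustrative Python; the keys mirror the fields of (3), and the decomposition labels are copied verbatim from the example).

# Illustrative encoding of the plan in (3); dec labels copied from the example.
voltage_plan = {
    "header": "Voltage Lab: Measure Voltage(VDC)",
    "prec": [
        "Polarity observed",
        "Meter on while making measurement",
        "Meter set to VDC",
        "Both leads attached",
        "Meter attached",
        "Circuit not powered down",
        "Difference in charge between the two ends of the meter"
        " when attached to the circuit",
    ],
    # Decompositions: steps in the reasoning towards the last precondition.
    "dec": ["vol.difpot.meas", "vol.meas.so", "vol.meas.lo", "vol.difpot.so",
            "vol.difpot.lo", "sou.dif", "lo.dif", "sou.ex", "lo.ex"],
    "sub-effects": "Difference in charge between the two ends of the meter"
                   " when attached to the circuit",
    "total-effect": "Voltage Lab: Measure Voltage(VDC)",
}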
Finally, the Dialogue History (dh), which
holds all the dialogue acts performed in one
hinting session, and the system variable Latest
Move (lm) communicate with the agenda via
the update rules (update rules are domain-independent but genre-specific). Based on the latter, either the agenda will be modified, in which case the generation module will be activated, or not, in which case the agenda is only used for passively keeping track of the student’s steps towards the realisation of the goal of the lab task at hand by popping preconditions off agenda and popping plans off domain.
Update rules are used to model everything
described here as well as the accommodation
of any preconditions performed and any de-
compositions which occur in a random order,
whenever this is allowed by the task. This is
the case only when certain steps, which consti-
tute preconditions for the one currently being
performed, have already been performed them-
selves. For instance, resetting the meter is a pre-
condition of moving any wires when making a
measurement. Context accommodation accom-
modates steps in the task, or the relevant rea-
soning, which are encountered before they are
expected. It captures the fact that the student
is following specific points in the tutor’s plan
and there is no reason to go through the steps
that cover them again explicitly.
Example 4 shows the update rule for accom-
modating random steps, that is, correct steps
(or preconditions) that are taken by the student
in an order which differs from the order in the
tutor’s plan:
(4)
name:   ACCOMMODATION OF RANDOM STEP
cond:   in(IS.LM, cor_answer(S, Substep))
        match_agenda(Substep, Agenda, Domain)
        not(last_member(AGENDA, cor_answer(S, Substep)))
effect: push(IS.AGENDA, cor_answer(Substep))
The conditions specified are:
(i) that there is a correct answer that was just
performed by the student (that would be
the step to be accommodated)
(ii) that the correct answer matches a subplan
in agenda which is currently the one ac-
commodated by domain
(iii) that the substep at hand is not the final one in agenda (if it were, another update would fire because it would mean that the reasoning, or task, has been correctly completed).

The effect will be that the substep will be pushed on top of the agenda, and the task can continue from there.
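As with rule (2), the accommodation rule can be sketched as a condition/effect function (illustrative; match_agenda is a hypothetical stand-in for the check against agenda and domain, and the substep name in the usage lines is ours).

# Rule (4) as an illustrative condition/effect function over a dict IS.
def accommodate_random_step(IS, substep, match_agenda):
    latest_is_correct = f"cor_answer(S,{substep})" in IS["LM"]
    matches_plan = match_agenda(substep, IS["AGENDA"], IS["DOMAIN"])
    not_final = (not IS["AGENDA"]
                 or IS["AGENDA"][-1] != f"cor_answer(S,{substep})")
    if latest_is_correct and matches_plan and not_final:
        # push(IS.AGENDA, cor_answer(Substep)); the top of the agenda is
        # modelled here as the end of the list.
        IS["AGENDA"].append(f"cor_answer({substep})")
        return True
    return False

IS = {"LM": ["cor_answer(S,meter_set_to_VDC)"], "AGENDA": [], "DOMAIN": {}}
accommodate_random_step(IS, "meter_set_to_VDC", lambda s, a, d: True)
print(IS["AGENDA"])   # ['cor_answer(meter_set_to_VDC)']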
6 An AVM representation
An attribute-value matrix (avm) representation
of the dialogue in example (1) is shown in Figure
2. This represents the course of actions that
would produce the last dialogue act by the tutor
according to the update rules that formalise the
algorithm. For current purposes the fields that
are being used are agenda, generating the steps
to be followed; obl, holding any actions that
have been classified as obligations; int, which
holds the actions to be realised in one turn here,
and lm and dh, which stand for Latest Move
and Dialogue History respectively, and which as
mentioned above are used for capturing some
aspects of the student model. For this reason
dh includes all the previous dialogue acts, until
the system comes across an update rule that
specifically tells it to empty the dh.
The Intelligent Tutor, and with it the hint-
ing session, is activated by an information re-
quest from the student (S[1]). This is ac-
knowledged explicitly and the hint relevant
concept points the student in the correct direc-
tion (T[2]). With this prompt the student re-
members something remotely relevant and gives
a grain of truth answer (S[3]). That makes the
tutor generate an acknowledgement again and
a speak to answer hint as a follow-up (T[4]).
This action is interrupted (S[5]) and continued
again in T[6]. (There is no formal account of in-
terruptions here). What should be a set amount
of time goes by and the student does not re-
spond at all, so the tutor’s interpretation is that
the student does not follow, which for the algo-
rithm is classified as a wrong answer. Hence,
a logical relation is produced to give more
profound guidance (T[7]). The student gives an-
other wrong answer, (S[8]), and that is enough
for the tutor to start explaining the current sub-
step (T[9]). After this she checks for the origin
of the problem (T[9]) in order to assess the stu-
dent anew and perhaps generate an appropri-
ate hint. In this case the student gives another
wrong answer (S[10]).
All these moves are held in dh, as shown, and
as mentioned above lm is the latest move, here
the student’s wrong answer. Together with dh
they model the student performance, and based
on the number of wrong answers the tutor will
now direct the student to read the lesson again
(T[11]).

IS:
AGENDA: <dact15:ack(T,dact14), dact16:action_dir(T), dact17:point_lesson(T)>
INT:    <ack(T,dact12), action_dir(T), point_lesson(T)>
LM:     <wrong_answer(S,dact11)>
DH:     <dact1:info_req(S), dact2:ack(T,dact1), dact3:rel_conc(T), dact4:grain_truth(S,dact3),
         dact5:ack(T,dact4), dact6:sp_answer(T,dact4,interrupted),
         dact6:sp_answer(T,dact4,continued), dact7:wrong_answer(S,dact6), dact8:ack(T,dact7),
         dact9:log_rel(T,dact7), dact10:wrong_answer(S,dact9), dact11:ack(T,dact10),
         dact12:explain(T), dact13:check_or(T), dact14:wrong_answer(S,dact11)>
OBL:    <ack(T,dact14)>

Figure 2: AVM representation of an Information State

The student is obliged to do this without negotiation, and the session will stop in the state following the one represented by this particular avm (after the point lesson intention
has been performed). obl holds the obligation
for the tutor to acknowledge the student’s an-
swer; here this is done implicitly, but this is
enough due to the special obligations. agenda
holds the acts that must be produced next,
based on the update rules, which become the
tutor’s intentions as represented in int. These
are the moves to be realised in the current turn.
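The role of dh and lm as a local student model can be illustrated with a small sketch (ours; the threshold of three wrong answers and the helper name are assumptions chosen to reproduce the int field of Figure 2).

# Illustrative reading of the student model from DH and LM: after enough
# wrong answers the tutor acknowledges, issues a directive and points to
# the lesson, as in T[11] and the INT field of Figure 2.
def next_tutor_moves(dh, lm, threshold=3):
    wrong_so_far = sum(1 for act in dh + lm if act.startswith("wrong_answer"))
    if wrong_so_far >= threshold:
        return ["ack(T)", "action_dir(T)", "point_lesson(T)"]
    return ["ack(T)"]

dh = ["info_req(S)", "grain_truth(S)", "wrong_answer(S)", "wrong_answer(S)"]
lm = ["wrong_answer(S)"]
print(next_tutor_moves(dh, lm))   # ['ack(T)', 'action_dir(T)', 'point_lesson(T)']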
7 Related Work
The BE&E project (Core et al., 2000) also as-
sumes the trindi framework and a plan-based approach, but does not allow for partial order in
the student steps, modelled here by Context Ac-
commodation. The project employs multiturn
tutorial strategies, some of which are motivated
by similar theoretical interests to the ones pre-
sented here. However, the number of strategies
is small and no emphasis is given to the way
information is made salient, which is the aim of
our taxonomy. The criteria for using one strat-
egy over another are also not clear. Note that
Core et al. (2000) contains descriptions of other
Intelligent Tutoring Systems that cannot be in-
cluded here due to lack of space.
Miss Lindquist (Heffernan and Koedinger, 2000) also has some domain-specific types of
questions that resemble the BE&E strategies in
form. Although there is mention of hints, and
the notion of gradually revealing information by
rephrasing the question is prominent, there is
no taxonomy of hints or any suggestions for dy-
namically producing them.
A detailed analysis of hints can also be found
in the CIRCSIM project, and in particular in
the work of Hume et al. (1996). This paper
has largely been inspired by the CIRCSIM work
both for the general planning and for the taxon-
omy of hints, although the strategies recognised
in it are domain specific.
8 Conclusion
An analysis of moves based on obligations and
a taxonomy of hints is proposed. An algo-
rithm formalising the hinting process based on
the obligations and the taxonomy, and update
rules which model the algorithm in accordance
with the trindi project proposals, have been
presented briefly. A suggestion has been put
forward for applying context accommodation to
hinting and thus modelling some aspects of the
Intelligent Tutor. Future considerations include
the integration of the move analysis into the
hinting process, a reasoner for evaluating and
categorising the student answers, a database as
described above and, of course, the full imple-
mentation of the system, which we assume will
provide a useful basis for evaluating the ap-
proach described here.
References
Peter Bohlin, Robin Cooper, Elisabeth Engdahl, and Staffan Larsson. 1999. Information states and dialogue move engines. In Jan Alexandersson, editor, IJCAI-99 Workshop on Knowledge and Reasoning in Practical Dialogue Systems.
Robin Cooper, Staffan Larsson, Colin Matheson, Massimo Poesio, and David Traum. 1999. Coding instructional dialogue for information states. Technical report, University of Gothenburg.
Robin Cooper, Staffan Larsson, Elisabeth Engdahl, and Stina Ericsson. 2000. Accommodating questions and the nature of QUD. In Proceedings of Götalog 2000.
Mark G. Core, Johanna Moore, and Claus Zinn. 2000. Supporting constructive learning with a feedback planner. Technical report, Human Communication Research Center, University of Edinburgh, 445 Burgess Drive, Menlo Park CA 94025.
Reva Freedman, Yujian Zhou, Michael Glass, Jung Hee Kim, and Martha W. Evens. 1998. Using rule induction to assist in rule construction for a natural-language based intelligent tutoring system. In Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, pages 362–367, Madison.
Jonathan Ginzburg. 1996. Dynamics and the semantics of dialogue. Language and Computation, 1.
Neil T. Heffernan and Kenneth R. Koedinger. 2000. Building a 3rd generation ITS for symbolization: Adding a tutorial model with multiple tutorial strategies. In Proceedings of the ITS 2000 Workshop on Algebra Learning, Montreal, Canada.
Gregory D. Hume, Joel A. Michael, Allen A. Rovick, and Martha W. Evens. 1996. Hinting as a tactic in one-on-one tutoring. Journal of the Learning Sciences, 5(1):23–47.
Joern Kreutel and Colin Matheson. 2000. Incremental information state updates in an obligation-driven dialogue model. Language and Computation, 0(0):1–32. To appear.
Staffan Larsson, Peter Bohlin, Johan Bos, and David Traum. 1999. TrindiKit 1.0 manual.
Staffan Larsson, Robin Cooper, and Elisabeth Engdahl. 2000. Question accommodation and information states in dialogue. In Third Workshop in Human-Computer Conversation, Bellagio.
Colin Matheson, Massimo Poesio, and David Traum. 2000. Modelling grounding and discourse obligations using update rules. In Proceedings of NAACL 2000, Seattle.
Charles Rich and Candace L. Sidner. 1998. Collagen: A collaboration manager for software interface agents. User Modeling and User-Adapted Interaction, 8(3/4):315–350.
David R. Traum and James F. Allen. 1994. Discourse obligations in dialogue processing. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 1–8.