Problems with Intent Recognition for Elder Care
Christopher W. Geib
Honeywell Labs
3660 Technology Drive
Minneapolis, MN 55418 USA
{geib,goldman}@htc.honeywell.com
Abstract
Providing contextually appropriate help for elders re-
quires the ability to identify the activities they are
engaged in. However, inferring agent intent in these
kinds of very general domains requires answers to a
number of problems that existing AI research on intent
recognition/task tracking has not addressed. This pa-
per identifies some of these limitations, sketches some
of the solutions and points at some promising direc-
tions for future work.
Introduction
To provide contextually relevant aid for elders, as-
sistant systems must be able to observe the actions
of the elder, infer their goals, and make predictions
about their future actions. In the artificial intelligence
(AI) literature this process of deducing an agent’s goals
from observed actions is called plan recognition or task
tracking. We argue that plan recognition must be a
central component in assistant systems for elders. However, most existing AI literature on intent recognition makes a number of assumptions preventing its application to the elder care domain. In this paper we will discuss the requirements of this domain. In our previous work (Geib & Goldman 2001b; 2001a; Goldman, Geib, & Miller 1999) we have described an approach to plan recognition that does not make many of the restrictive assumptions of other intent recognition systems. Here we briefly discuss its application to the recognition of elders' plans in relatively
unconstrained environments.
Requirements on Plan Recognition
The domain of plan recognition for elder care requires
confronting a number of issues that have not been ex-
amined in more theoretical or academic domains. The
following is a list of issues, properties or assumptions made by some or all previous AI plan recognition research that are critical in the elder care domain.¹

¹This work was performed under the support of the U.S. Department of Commerce, National Institute of Standards and Technology, Advanced Technology Program Cooperative Agreement Number 70NANBOH3020.
Abandoning plans: All people abandon plans at
one time or another. They forget what they are doing,
they get distracted by other events, or due to changing
circumstances they explicitly decide to abandon a goal.
In the case of providing care to elders, failing memory
and mental faculties may cause the elder to uninten-
tionally abandon goals that we would like the system
to remind them of. Consider an elder that begins to
take their medicine but then gets distracted by a phone
call and does not return to the medication. To remind
the elder that they need to finish taking their medi-
cation, a system will first have to recognize that they
have abandoned that goal. We will discuss this in more
detail in the second half of the paper.
Hostile agents: Most previous work in plan recog-
nition has assumed cooperative agents, that is, agents that do not mind, or are even helpful in, having their plans recognized. Unfortunately it is unrealistic to as-
sume that all elders will be willing to have their actions
observed by an assistant system. Like all adults, elders
highly value their independence and may be willing to
take extensive measures to prevent an assistant system
from observing them.
To correctly handle agents that are hostile to hav-
ing their goals recognized, it is critical that we remove
the assumption of a completely observable stream of
actions. The assistant system must be able to infer the
execution of unobserved actions on the basis of other
observed actions and observations of unexplained state
changes.
Observations of failed actions: Observations of
failed actions can tell us at least as much about the
intent of the elder as successful actions; however, most
current work in plan recognition has not considered
observations of failed actions in inferring intent.
Partially ordered plans: The plans for the daily
activities of an elder are often very flexible in the or-
dering of their plan steps. Consider making a peanut-
butter and jelly sandwich, do we put the jelly on bread
first or do we put peanut-butter on bread first? In fact,
it doesn’t matter, but it does result in a large number
of acceptable orderings for the plan steps.
Multiple concurrent goals: People often have
multiple goals. For example, taking medication while
eating lunch, or watching a TV show while doing laun-
dry. Much previous work in plan recognition has looked
for the single goal that best explains all the observa-
tions. In this domain that approach is simply unacceptable. It
is critical that the system consider when the elder is
engaged in multiple tasks at the same time.
Actions used for multiple effects: Often in real
life a single action can be used for multiple effects. Con-
sider opening the refrigerator. This can be part of a plan to get a drink as well as part of a plan to make a meal. While this action can be done once for each of the plans, it can be overloaded for use by both plans if the elder
were performing both. Again a critical requirement for
our plan recognition system is that it be able to handle
these kinds of actions.
Failure to observe: Suppose we observe the front
door being opened, and suppose this could be part of
a plan to leave the house or to get the mail. If imme-
diately after the open-door event we don’t see motion
in the house we should be more likely to believe that
the elder has left the house. It is this kind of reasoning
based on the absence of observed actions that is critical
to our domain.
Our commitment to considering agents with multi-
ple concurrent goals makes it even more critical that
an assistant system be able to engage in this kind of
reasoning. It is rare that we will be provided with
definitive evidence that an elder is not pursuing a spe-
cific goal. Far more likely is that a lack of evidence for
the goal will lower its probability. As a result, reason-
ing on the basis of the "failure to observe" is critical
for these kinds of systems to prefer those explanations
where the agent is pursuing a single goal over those
where the elder has multiple goals but has not per-
formed any of the actions for one of them.
Impact of world state on adopted plans: World
state can have a significant impact on the goals that are adopted by an elder. An elder may be more likely to bathe in the evening than in the morning; thus we should be more likely to suspect that activity in the bathroom at night is part of a bathing goal than if we saw the same activity in the morning.
Ranking multiple possible hypotheses: Provid-
ing a single explanation for the observed actions in gen-
eral is not going to be as helpful as ranking the possi-
bilities. Rather than giving just one of possibly many
explanations for a set of observations it is much more
helpful to report the relative likelihood of each of the
possibilities.
[Figure 1: A very simple example plan library. An and-node S decomposes into the ordered primitive actions a, b, and c; a second goal R decomposes into the single action g.]
Background
To meet the previous set of requirements, Geib and Goldman have developed the plan/intent recognition framework PHATT (Probabilistic Hostile Agent Task Tracker) (Geib & Goldman 2001b; 2001a), based on a model of the execution of simple hierarchical task network (HTN) plans (Erol, Hendler, & Nau 1994). Figure 1 shows a very simple example of the kind of HTN plans that we assume are initially given to the system.
The plans in the library may be represented as and/or
trees. In the figure, "and nodes" are represented by
an undirected arc across the lines connecting the par-
ent node to its children; "or nodes" do not have this
arc. Ordering constraints in the plans are represented
by directed arcs between the actions that are ordered.
For example, in Figure 1 action a must be executed before b, and b before c.
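For concreteness, a plan library of this kind might be encoded as follows (a minimal sketch in Python, not PHATT's actual data structures; all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                      # "and", "or", or "primitive"
    children: list = field(default_factory=list)
    # Ordering constraints among an "and" node's children:
    # (i, j) means child i must complete before child j may start.
    orderings: list = field(default_factory=list)

# Figure 1 as such a tree: S is an "and" node over a, b, c,
# totally ordered a -> b -> c; R decomposes to the single step g.
S = Node("S", "and",
         children=[Node("a", "primitive"),
                   Node("b", "primitive"),
                   Node("c", "primitive")],
         orderings=[(0, 1), (1, 2)])
R = Node("R", "and", children=[Node("g", "primitive")])
plan_library = [S, R]
```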
The central realization of the PHATT approach is
that plans are executed dynamically and that at any
given moment the agent is able to execute any one
of the actions in its plans that have been enabled by
its previous actions. To formalize this, initially the
executing agent has a set of goals and chooses a set
of plans to execute to achieve these goals. The set
of plans chosen determines a set of pending primitive
actions. The agent executes one of the pending actions,
generating a new set of pending actions from which the
next action will be chosen, and so forth.
For each possible agent state (explanatory hypothe-
sis), then, there is a unique corresponding pending set.
Each pending set is generated from the previous set by
removing the action just executed and adding newly
enabled actions. Actions become enabled when their
required predecessors are completed. This process is
illustrated in Figure 2. To provide some intuition for
the probabilistically-inclined, the sequence of pending
sets can be seen as a Markov chain, and adding action executions that may go unobserved makes it a hidden Markov model.
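Written as a recurrence (a sketch of the idea just described, with $D_t$ the set of actions completed by time $t$, $a_t$ the action executed at time $t$, and $\mathrm{pred}(a)$ the required predecessors of $a$):

$$D_{t+1} = D_t \cup \{a_t\}, \qquad PS_{t+1} = \bigl(PS_t \setminus \{a_t\}\bigr) \cup \{\, a \notin D_{t+1} : \mathrm{pred}(a) \subseteq D_{t+1} \,\}$$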
This view of plan execution provides a simple con-
ceptual model for the generation of execution traces.
To use this model to perform probabilistic plan recog-
nition, we use the observations of the agent’s actions as
an execution trace. By stepping forward through the
trace, and hypothesizing goals the agent may have, we
can generate the agent’s resulting pending sets.

[Figure 2: A simple model of plan execution.]

Once
we have reached the end of the execution trace we will
have the complete set of pending sets that are consis-
tent with the observed actions and the sets of hypoth-
esized goals that go with each of these sets. Once we
have this set, we establish a probability distribution
over it and can determine which of the possible goals
the agent is most likely pursuing.
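The consistency side of this process can be sketched in a few lines (a toy illustration, not the PHATT implementation; PHATT additionally attaches probabilities to each surviving hypothesis, handles multiple instances of a goal, and infers unobserved actions):

```python
# Toy goal-hypothesis filtering over the Figure 1 library: walk the
# observation trace forward and keep only the hypotheses whose evolving
# pending sets can explain every observation.
PLANS = {  # goal -> {action: set of required predecessor actions}
    "S": {"a": set(), "b": {"a"}, "c": {"b"}},
    "R": {"g": set()},
}

def consistent(goals, trace):
    preds = {a: p for g in goals for a, p in PLANS[g].items()}
    done = set()
    for obs in trace:
        # The pending set: actions not yet done whose predecessors are done.
        pending = {a for a, p in preds.items() if a not in done and p <= done}
        if obs not in pending:        # observation unexplained by hypothesis
            return False
        done.add(obs)
    return True

hypotheses = [("S",), ("R",), ("S", "R")]
print([h for h in hypotheses if consistent(h, ["a", "b", "g"])])  # [('S', 'R')]
```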
PHATT builds this probability distribution over the
set of explanations by considering three kinds of prob-
abilistic features for each of the goals in each explana-
tion. First, the prior probability of each of the root
goals being adopted by the actor. In PHATT, this is
assumed to be given with the plan library. Second,
for each goal (or node) for which there are multiple
methods (children), PHATT must have a probability of each method choice given the goal. Typically, we have assumed
that each method is equally likely given its parent goal,
but this simplifying assumption is not required by the
framework. Third and most relevant to this work, for
each pending set, PHATT computes the probability
that the observed action is the one chosen next for execution. We will discuss this third computation in greater detail in the next section.
On the basis of these three probabilities, PHATT
computes the prior probability of a given explanation
by multiplying together the priors for each goal, the probability of the expansion choices, and the probability of the observed actions being chosen. The set of explanations is, by construction, an exclusive and ex-
haustive covering of the set of observations, so the pri-
ors can be normalized to yield conditional probabilities
given the observations. For large problems, some prun-
ing of the hypothesis set and approximation of poste-
riors will obviously be necessary.
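Numerically, the construction amounts to a product of the three factors followed by a normalization; a minimal sketch under the stated assumptions (the function and the example values below are illustrative, not PHATT's API):

```python
import math

def explanation_prior(goal_priors, method_choices, action_choices):
    """goal_priors: prior of each adopted root goal; method_choices:
    P(method | goal) for each expanded or-node; action_choices: for each
    observation, the probability that the observed action was the one
    selected from the pending set (m_i / |PS_i|)."""
    return (math.prod(goal_priors) * math.prod(method_choices)
            * math.prod(action_choices))

# Two toy explanations of the same observations:
priors = [
    explanation_prior([0.3], [1.0], [1.0, 0.5]),       # e.g. goal S alone
    explanation_prior([0.3, 0.2], [1.0], [0.5, 0.5]),  # e.g. goals S and R
]
total = sum(priors)
print([p / total for p in priors])   # normalized: P(explanation | obs)
```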
By looking at plan recognition in terms of plan exe-
cution, PHATT is able to handle a number of problems that are critical to application to real-world
domains. In our previous work we have discussed
PHATT’s ability to handle the issues mentioned ear-
lier. (Goldman, Geib, & Miller 1999) discusses the
problems of multiple interleaved goals, partially ordered plans, the effects of context on the goals adopted, and the effect of negative evidence or failure to observe ("the dog didn't bark"). (Geib & Goldman 2001b; 2001a) discusses PHATT's ability to handle hostile agents in the form of missing observations and obser-
vations of "failed actions." We refer interested readers
to these papers for a more complete discussion of the
system, its operation, and these issues. In the follow-
ing we will sketch a solution to the goal abandonment
problem.
Model Revision
Rather than explicitly computing the probability that a
given goal is abandoned, we have approached this prob-
lem as one of model fitting or model revision. If we are
using a model of plan execution that does not consider
plan abandonment to recognize observation streams in
which the agent is abandoning plans, we expect that
the computed probabilities for the observation streams
will be quite low. Laskey (1991), Jensen (Jensen et al. 1990), Habbema (1976), and others (Rubin 1984) have suggested that cases of an unexpectedly small $P(observations \mid model)$ should be used as evidence of a model mismatch.
In this case, however, we are interested in recognizing a specific kind of model mismatch/failure (namely, that the agent is no longer executing a plan that it has begun). As a result, the statistic $P(observations \mid model)$ alone is not sufficient. While this
statistic will drop rather rapidly as we fail to see ev-
idence of the agent carrying out the steps of their
plan, it does not provide us with sufficient informa-
tion to determine which of the agent’s goals has been
abandoned, the critical information that we need in
order to repair the model. Instead of the general
$P(observations \mid model)$ statistic, we propose the use of
a more specific statistic. Our intuitions about how we
recognize plans will help us to define this statistic.
Suppose I am observing someone that I initially believe has two high-level goals, call them α and β. At the outset they mostly alternate the steps of the plans, but about half way through the agent stops working on β and instead only executes actions that are part of α. As the string of actions that contribute only to α gets longer, and we see no more actions that contribute to β, we should start to suspect that the agent has abandoned β.
We can formalize this idea as the probability that none of the observed actions in a subsequence (from, say, s to t) contribute to one of the goals (call it G), and we denote it $P(noneContrib(G, s, t) \mid model, observations)$. If this
probability gets unexpectedly small we consider this
as evidence of a mismatch between the model and the
real world; namely, the model predicts that the agent
is still working on the goal, while the agent may have
abandoned it. The following section will detail how to
compute this probability.
For a Single Explanation
Consider the plan library shown in Figure 1. The first
plan is a very simple plan for achieving S by executing
a, b, and c, and the second plan for R has only the single
step of g. Given this plan library, assume the following
sequence of observations:
happen(a, 0), happen(b, 1), happen(g, 2), happen(g, 3).
In this case we know that at times 0 and 1 the agent has achieving S as a goal. Further, we know
that the probability of seeing c at time 2 is given by:
$m / |PS_2|$

where m is the number of elements in the pending set that have c as the next action. The probability that we don't see c (that is, the probability that any other element of the pending set is chosen at time 2) is just:

$1 - (m / |PS_2|)$

or, more generally, the probability that we have seen b at time (s-1) and not seen c by time t is:

$$\prod_{i=s}^{t} \left(1 - \frac{m_i}{|PS_i|}\right)$$
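For instance, under the explanation in which the agent is pursuing S and each observed g is the step of an instance of R, the pending set at times 2 and 3 contains c and a pending g, so $m_i = 1$ and $|PS_i| = 2$ at each step; the probability of having seen b at time 1 but no c through time 3 is then $(1 - 1/2)(1 - 1/2) = 0.25$, assuming as above that each pending-set element is equally likely to be chosen next.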
To handle partially ordered plans, this formula must be generalized slightly. With partially ordered plans it is possible for more than a single next action to contribute to the specified root goal. Thus, if $m_{q,i}$ represents the number of elements (with any next action) in the pending set at time i that contribute to goal q, (s-1) is the last time we saw an action contribute to q, and t is the current time:

$$P(noneContrib(q, s, t) \mid model, obs) = \prod_{i=s}^{t} \left(1 - \frac{m_{q,i}}{|PS_i|}\right)$$

Thus, under the assumptions that we have made, we can compute the probability of the subsequence of actions from s to t not contributing to a given plan or goal.
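As a sketch, the statistic is a one-line product (illustrative code, using the worked numbers from the example above):

```python
def p_none_contrib(contrib_counts, ps_sizes):
    """contrib_counts[i] is m_{q,i} and ps_sizes[i] is |PS_i| for i = s..t."""
    p = 1.0
    for m, size in zip(contrib_counts, ps_sizes):
        p *= 1.0 - m / size   # chance the chosen action does not serve q
    return p

# m = 1 and |PS_i| = 2 at times 2 and 3 while g is observed twice:
print(p_none_contrib([1, 1], [2, 2]))   # 0.25
```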
Given the large number of possible sets of abandoned
goals, simply computing this probability is not enough.
We will need a way to prune the search space of these
possible alternatives, that is, trigger model revision and
add the assumption that the plan has been abandoned.
To do this we require the system designer to specify a
threshold value we will call the Probability of Aban-
donment Threshold (PAT). The PAT represents how
confident the system must be that no actions have con-
tributed to a given goal, before it can assume that the
goal actually has been abandoned.
By computing the probability of the action sequence
not contributing to the goal and comparing it to the
user-set PAT, we can consider any explanation in which
this probability drops below the threshold as sufficient
evidence of a model mismatch and revise the model to
reflect the goal’s abandonment. This requires remov-
ing all the elements from the current pending set that
contribute to the abandoned goal. Modeling of the ex-
ecution of the rest of the goals can continue as before.
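A sketch of this revision step, with illustrative data structures (PHATT's internal representation differs; here each pending-set element is an (action, goal) pair recording the goal it serves):

```python
from dataclasses import dataclass

PAT = 0.24   # designer-set threshold; 0.24 is the value used below

@dataclass
class Explanation:
    goals: set      # currently active root goals
    pending: set    # pending (action, goal) pairs

def revise(exp, none_contrib):
    """Mark goals abandoned when P(noneContrib) drops below the PAT and
    prune their contributions from the pending set."""
    abandoned = {g for g in exp.goals if none_contrib[g] < PAT}
    exp.goals -= abandoned
    exp.pending = {(a, g) for (a, g) in exp.pending if g not in abandoned}
    return exp

exp = Explanation(goals={"S", "R"}, pending={("c", "S"), ("g", "R")})
print(revise(exp, {"S": 0.0625, "R": 0.5}))
# S falls below the PAT and is assumed abandoned; only R remains active.
```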
Estimating P(abandoned(g) | Obs)

If we keep a rolling computation of $P(noneContrib(g, s, t) \mid model, obs)$ for each g and threshold our models as described in the previous section, the G&G algorithm will now provide us with explanations that include abandoned goals and probabilities for each of them. On the basis of this information we can estimate the probability of a goal's abandonment by:

$$P(abandoned(g) \mid Obs) = \frac{\sum_{e \in Exp_{A(g)}} P(e \mid Obs)}{\sum_{e \in Exp} P(e \mid Obs)}$$

where $Exp$ represents the set of all explanations for the observations, and $Exp_{A(g)}$ represents the set of explanations in which goal g is marked as abandoned.
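This estimate is a direct sum over the explanation set; a short sketch with illustrative inputs:

```python
def p_abandoned(goal, explanations):
    """explanations: (abandoned_goals, P(e | Obs)) pairs over all
    explanations for the observations."""
    num = sum(p for abandoned, p in explanations if goal in abandoned)
    den = sum(p for _, p in explanations)
    return num / den

explanations = [({"S"}, 0.5), (set(), 0.3), ({"S", "R"}, 0.2)]
print(p_abandoned("S", explanations))   # 0.7
```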
Implementation
We have implemented this algorithm for estimating
model mismatch and plan abandonment in the PHATT
system. We are in the process of evaluating the algorithm's accuracy in estimating plan abandonment.
However, we have done some simple work on the algorithm's runtime. Figure 3 shows the runtimes for seventeen hundred randomly generated two-goal observation streams. The plans were taken from an extensive plan library that we have for a different domain. The x-axis graphs the length of the observation stream (the number of actions observed) and the y-axis gives the runtime in milliseconds on a log scale. For these runs the PAT was set at the relatively low level of 0.24 to encourage a reasonable number of goals to be abandoned by the system. As a result, 1138 of the runs had at least one abandoned goal, and the majority of the examples still had a runtime under one second.
It is worth noting that, in general, running the same set of examples with a higher PAT can result in higher runtimes. Since hypothesized goals and plans are kept active longer, the system takes more time to determine the correct set of explanations and probabilities for the observations. Likewise, lowering the PAT and abandoning more goals often shortens the runtime for the system to generate each explanation. This results from the reduced size of the pending set and the resulting reduction in the number of possible plans for the goals considered.

[Figure 3: 1700 runs with 2 goals each. The x-axis shows the number of observations; the y-axis shows runtime in milliseconds on a log scale.]
Preliminary analysis of this data shows that the worst cases for the algorithm result when there is a long common prefix for more than one goal in the plan library. If two plans with this common prefix are interleaved, alternating one action from each plan, the number of possible explanations is quite large. Further, since the actions are alternating, neither of the plans is believed to be abandoned. In these cases, the full algorithm is being run with the added overhead of computing $P(noneContrib(g, s, t) \mid model, obs)$ and attempting to find abandoned plans.
Conclusions
In this paper we have argued for using a probabilistic
model of plan recognition in assistant systems for el-
der care. We have identified many requirements that are placed on the process of plan recognition by this domain and have shown how our model of plan recognition based on plan execution meets these requirements. These extensions remove a major assumption of previous research in plan recognition and significantly broaden the domains where plan recognition can be applied.
References
Erol, K.; Hendler, J.; and Nau, D. S. 1994. UMCP:
A sound and complete procedure for hierarchical task
network planning. In Hammond, K. J., ed., Artifi-
cial Intelligence Planning Systems: Proceedings of the
Second International Conference, 249-254. Los Altos,
CA: Morgan Kaufmann Publishers, Inc.
Geib, C. W., and Goldman, R. P. 2001a. Plan recog-
nition in intrusion detection systems. In Proceedings
of DISCEX II, 2001.
Geib, C. W., and Goldman, R. P. 2001b. Probabilistic
plan recognition for hostile agents. In Proceedings of
the FLAIRS 2001 Conference.
Goldman, R. P.; Geib, C. W.; and Miller, C. A. 1999.
A new model of plan recognition. In Proceedings of
the 1999 Conference on Uncertainty in Artificial In-
telligence.
Habbema, J. 1976. Models for diagnosis and detec-
tions of combinations of diseases. In Decision Making
and Medical Care. North Holland.
Jensen, F. V.; Chamberlain, B.; Nordahl, T.; and
Jensen, F. 1990. Analysis in HUGIN of data conflict. In Proceedings of the 6th Conference on Uncertainty
in Artificial Intelligence.
Laskey, K. B. 1991. Conflict and surprise: Heuristics
for model revision. In Proceedings of the 7th Confer-
ence on Uncertainty in Artificial Intelligence.
Rubin, D. 1984. Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals of Statistics 12(4):1151-1172.