Chapter 9
Intention-Based Decision Making via Intention Recognition and its Applications

The Anh Han
Universidade Nova de Lisboa, Portugal

Luis Moniz Pereira
Universidade Nova de Lisboa, Portugal

ABSTRACT

In this chapter, the authors present an intention-based decision-making system. They exhibit a coherent combination of two Logic Programming-based implemented systems, Evolution Prospection and Intention Recognition. The Evolution Prospection system has proven to be a powerful system for decision-making, designing, and implementing several kinds of preferences and useful environment-triggering constructs. It is here enhanced with an ability to recognize intentions of other agents—an important aspect not well explored so far. The usage and usefulness of the combined system are illustrated with several extended examples in different application domains, including Moral Reasoning, Ambient Intelligence, Elder Care, and Game Theory.

DOI: 10.4018/978-1-4666-3682-8.ch009

INTRODUCTION

Given the crucial role and ubiquity of intentions in our everyday decision making (Bratman, 1987; Meltzoff, 2007; Roy, 2009b; Searle, 2010; Woodward, Sommerville, Gerson, Henderson, & Buresh, 2009), one would expect intentions to occupy a substantial place in any theory of action. However, in what concerns perhaps the most prominent theory of action—rational choice theory (Binmore, 2009; Russell & Norvig, 2003)—which includes the theory of decision making—the attention is mainly, if not exclusively, given to actions, strategies, information, outcomes and preferences, but not to intentions (Roy, 2009a; van Hees & Roy, 2008).

This is not to say that no attention has been paid to the relationship between rational choice and intentions. Quite the contrary, a rich philosophical and Artificial Intelligence (AI) literature has
developed on the relation between rationality and
intentions (Bratman, 1987; Cohen & Levesque,
1990; Malle, Moses, & Baldwin, 2003; Singh,
1991; van Hees & Roy, 2008). Some philosophers,
for example in (Bratman, 1987; Roy, 2009b), have
been concerned with the role that intention plays
in directing rational decision making and guiding
future actions. In addition, many agent researchers
have recognized the importance of intentions in
developing useful agent theories, architectures,
and languages, such as Rao and Georgeff with
their BDI model (Rao & Georgeff, 1991, 1995),
which has led to the commercialization of several
high-level agent languages, e.g. in (Burmeister,
Arnold, Copaciu, & Rimassa, 2008; Wooldridge,
2000, 2002). However, to the best of our knowl-
edge, there has been no real attempt to model and
implement the role of intentions in decision mak-
ing, within a rational choice framework. Intentions
of other relevant agents are always assumed to be
given as the input of a decision making process; no
system that integrates a real intention recognition
system into a decision making system has been
implemented so far.
In this chapter, we present a coherent Logic
Programming (LP) based framework for deci-
sion making—which extends our previous work
on Evolution Prospection for decision making
(Pereira & Han, 2009a, 2009b)—but taking into
consideration now the intentions of other agents.
Obviously, when being immersed in a multi-
agent environment, knowing the intentions of
other agents can benefit the recognizing agents
in a number of ways. It enables the recognizing
agents to predict what other agents will do next
or might have done before—thereby being able
to plan in advance to take the best advantage
from the prediction, or to act so as to take re-
medial action. In addition, an important role of
recognizing intentions is to enable coordination
of your own actions and in collaborating with
others (Bratman, 1987, 1999; Kaminka, Tambe,
Pynadath, & Tambe, 2002; Roy, 2009b; Searle,
1995, 2010). We have also recently shown the role
of intention recognition in promoting improved
cooperative behavior in populations or societies
of self-interested agents (Han, 2012; Han, Pereira,
& Santos, 2011b, 2012a, 2012b). A large body
of literature has exhibited experimental evidence
of the ability to recognize/understand intentions
of others in many kinds of interactions and com-
munications, not only in humans but also in many
other species (Cheney & Seyfarth, 2007; Meltzoff,
2005, 2007; Tomasello, 1999, 2008; Woodward,
et al., 2009). Furthermore, the important role of
intention-based decision making modeling has
been recognized in a diversity of experimental
studies, including behavioral economics (Falk,
Fehr, & Fischbacher, 2008; Frank, Gilovich, &
Regan, 1993; Radke, Guroglu, & de Bruijn, 2012)
and morality (Hauser, 2007; Young & Saxe, 2011).
In AI application domains wherein an ability to
recognize users’ intentions is crucial for the suc-
cess of a technology, such as the ones of Ambient
Intelligence (Friedewald, Vildjiounaite, Punie,
& Wright, 2007; Sadri, 2011a) and Elder Care
(Giuliani, Scopelliti, & Fornara, 2005; Pereira &
Han, 2011a; Sadri, 2010, 2011b), intention-based
decision making is also becoming of increasing
interest.
The Evolution Prospection (EP) system is an
implemented LP-based system for decision making
(Pereira & Han, 2009a, 2009b). An EP agent can
prospectively look ahead a number of steps into
the future to choose the best course of evolution
that satisfies a goal. This is achieved by design-
ing and implementing several kinds of prior and
post preferences (Pereira, Dell’Acqua, & Lopes,
2012; Pereira & Lopes, 2009) and several useful
environment-triggering constructs for decision
making. In order to take into account intentions
of other agents in decision making processes,
we employ our previously implemented, also
LP-based, intention recognition system, as an
external module of the EP system. For an easy
integration, the Bayesian network inference of
the intention recognition system is performed by
P-log (Baral, Gelfond, & Rushton, 2009; Han,
Carroline, & Damasio, 2008), a probabilistic logic
system1. In general, intention recognition can be
defined as the process of inferring the intention
or goal of another agent (called individual intention recognition) or a group of other agents (called collective intention recognition) through
their observable actions or their actions’ observ-
able effects on the environment (Han & Pereira,
2010a; Heinze, 2003; Sadri, 2010; Sukthankar &
Sycara, 2008).
The remainder of this chapter is structured as
follows. In Section 2 we describe our two LP-based
previously implemented systems, the Evolution
Prospection system and Intention Recognition.
On top of these two systems, in Section 3 we
describe our intention-based decision making
system, the main contribution of this chapter.
Section 4 describes how our framework can be
utilized to address several issues in the Ambient
Intelligence and Elder Care application domains.
Next, Section 5 points out how intentionality is
important in moral reasoning, and how our
intention-based decision making system can be
used therein. This section also demonstrates how
our system can be useful to model different issues
in Game Theory, when strategies are characterized
as modifiable intentions. The chapter ends with
concluding remarks and future work directions.
BACKGROUND
Evolution Prospection
The implemented EP system2 has proven useful
for decision making (Han, 2009; Han & Pereira,
2011b; Han, Saptawijaya, & Pereira, 2012; Pereira
& Han, 2009a, 2009b). It is implemented on top
of ABDUAL3, a preliminary implementation of
(Alferes, Pereira, & Swift, 2004), using XSB
Prolog (XSB, 2009). We next describe the con-
structs of EP, to the extent we use them here. A
full account can be found in (Han, 2009; Pereira
& Han, 2009b).
Language: Let L be a first order language.
A domain literal in L is a domain atom A or
its default negation not A. The latter is used to
express that the atom is false by default (Closed
World Assumption). A domain rule in L is a rule
of the form:
A ← L1, …, Lt (t ≥ 0)
where A is a domain atom and L1,…, Lt are domain
literals. An integrity constraint in L is a rule with
an empty head. A (logic) program P over L is
a set of domain rules and integrity constraints,
standing for all their ground instances.
Here we consider solely Normal Logic Pro-
grams (NLPs), those whose heads of rules are
positive literals, or empty (Baral, 2003). We focus
furthermore on abductive logic programs (Alferes,
et al., 2004; Kakas, Kowalski, & Toni, 1993), i.e.
NLPs allowing for abducibles – user-specified
positive literals without rules, whose truth-value
is not fixed. Abducible instances or their default
negations may appear in bodies of rules, like any
other literal. They stand for hypotheses, each of
which may independently be assumed true, in
positive literal or default negation form, as the
case may be, in order to produce an abductive
solution to a query.
Definition 1 (Abductive Solution): An abductive
solution is a consistent collection of abduc-
ible instances or their negations that, when
replaced by true everywhere in P, affords a
model of P (for the specific semantics used
on P) which satisfies the query and the ICs
– a so-called abductive model.
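As a minimal illustration of Definition 1 (a sketch of ours, with hypothetical literals), take broken_pipe and rain as abducibles, and the program:

wet_floor ← broken_pipe.
wet_floor ← rain.
← rain, window_closed.
window_closed.

For the query wet_floor there are, in principle, two hypotheses; the integrity constraint rules out assuming rain (the window is closed), so the only abductive solution is [broken_pipe].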
Active Goals: In each cycle of its evolution
the agent has a set of active goals or desires. We
introduce the on_observe/1 predicate, which we
consider as representing active goals or desires
that, once triggered by the observations figuring
in its rule bodies, cause the agent to attempt their
satisfaction by launching all the queries standing
for them, or using preferences to select them. The
rule for an active goal AG is of the form:
on_observe(AG) ← L1, …, Lt (t ≥ 0)
where L1,…, Lt are domain literals. During evo-
lution, an active goal may be triggered by some
events, previous commitments or some history-
related information. When starting a cycle, the
agent collects its active goals by finding all the
on_observe(AG) that hold under the initial theory
without performing any abduction, then finds
abductive solutions for their conjunction.
Preferring Abducibles: An abducible A can
be assumed only if it is a considered one, i.e. if it
is expected in the given situation, and, moreover,
there is no expectation to the contrary
consider(A) ← expect(A), not expect_not(A), A.
The rules about expectations are domain-
specific knowledge contained in the theory of the
program, and effectively constrain the hypotheses
available in a situation. Note that for each abduc-
ible a consider-rule is added automatically into
the EP program.
Handling preferences over abductive logic
programs has several advantages, and allows for
easier and more concise translation into NLPs than
those prescribed by more general and complex
rule preference frameworks. The advantages of so
proceeding stem largely from avoiding combina-
tory explosions of abductive solutions, by filtering
irrelevant as well as less preferred abducibles
(Pereira, et al., 2012).
To express preference criteria among abduc-
ibles, we envisage an extended language L*. A
preference atom in L* is of the form a <| b, where
a and b are abducibles. It means that if b can be
assumed (i.e. considered), then a <| b forces a to
be considered too if it can. A preference rule in
L* is of the form:
a <| b ← L1, …, Lt (t ≥ 0)
where L1,…, Lt are domain literals over L*.
A priori preferences are used to produce the
most interesting or relevant conjectures about
possible future states. They are taken into account
when generating possible scenarios (abductive
solutions), which will subsequently be preferred
amongst each other a posteriori.
Example 1: (Choose Tea or Coffee): Consider
a situation where I need to choose to drink
either tea or coffee (but not both). I prefer
coffee to tea when sleepy, and do not drink
coffee when I have high blood pressure. This
situation can be described with the follow-
ing EP program, including two abducibles
coffee and tea:
abds [tea/0, coffee/0].
on_observe(drink).
drink ← tea.
drink ← coffee.
← tea, coffee.
expect(tea). expect(coffee).
expect_not(coffee) ← blood_high_pressure.
coffee <| tea ← sleepy.
This program has two abductive solutions, one
with tea and the other with coffee. Adding literal
sleepy triggers the only a priori preference in the
program, which defeats the solution where only
tea is present (due to the impossibility of simul-
taneously abducing coffee). If later we add blood_high_pressure, coffee is no longer expected, and the
transformed preference rule no longer defeats the
abduction of tea, which then becomes the single
abductive solution, despite the presence of sleepy.
A Posteriori Preferences: Having computed
possible scenarios, represented by abductive solu-
tions, more favorable scenarios can be preferred
a posteriori. Typically, a posteriori preferences
are performed by evaluating consequences of
abducibles in abductive solutions. An a posteriori
preference has the form:
Ai << Aj ← holds_given(Li, Ai), holds_given(Lj, Aj)
where Ai, Aj are abductive solutions and Li, Lj are
domain literals. This means that Ai is preferred
to Aj a posteriori if Li and Lj are true as the side
effects of abductive solutions Ai and Aj, respec-
tively, without any further abduction when testing
for the side effects. Optionally, in the body of the
preference rule there can be any Prolog predicate
used to quantitatively compare the consequences
of the two abductive solutions.
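For instance (a hypothetical variant of Example 1, ours), if staying alert is what matters when comparing scenarios, the abductive solution whose side effect is alert can be preferred a posteriori to the one whose side effect is drowsy:

alert ← coffee.
drowsy ← tea.
Ai << Aj ← holds_given(alert, Ai), holds_given(drowsy, Aj).

Here alert and drowsy are consequences of the abducibles coffee and tea, evaluated in each abductive solution without further abduction.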
Evolution Result A Posteriori Preference:
While looking ahead a number of steps into the
future, the agent is confronted with the problem
of having several different possible courses of
evolution. It needs to be able to prefer amongst
them to determine the best courses from its present
state (and any state in general). The a posteriori
preferences are no longer appropriate, since they
can be used to evaluate only the consequences one step ahead of a commitment. The agent should
be able to also declaratively specify preference
amongst evolutions through quantitatively or
qualitatively evaluating the consequences or side
effects of each evolution choice.
A posteriori preference is generalized to prefer
between two evolutions. An evolution result a
posteriori preference is performed by evaluat-
ing consequences of following some evolutions.
The agent must use the imagination (look-ahead
capability) and present knowledge to evaluate the
consequences of evolving according to a particular
course of evolution. An evolution result a posteriori
preference rule has the form:
Ei <<< Ej ← holds_in_evol(Li, Ei),
holds_in_evol(Lj, Ej)
where Ei , Ej are possible evolutions and Li, Lj are
domain literals. This preference implies that Ei is
preferred to Ej if Li and Lj are true as evolution
history side effects when evolving according to
Ei or Ej, respectively, without making further ab-
ductions when just checking for the side effects.
Optionally, in the body of the preference rule there
can be recourse to any Prolog predicate, used to
quantitatively compare the consequences of the
two evolutions for decision making.
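As a sketch of such a quantitative comparison (ours; total_cost/1 is a hypothetical consequence literal assumed to be derivable in each evolution), an agent may prefer the course of evolution whose accumulated cost is lower:

Ei <<< Ej ← holds_in_evol(total_cost(Ci), Ei),
holds_in_evol(total_cost(Cj), Ej), Ci < Cj.

The arithmetic comparison Ci < Cj plays the role of the Prolog predicate mentioned above, quantitatively comparing the side effects of the two evolutions.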
Intention Recognition
We describe our previously implemented intention
recognition system, which operates upon Bayesian
Network (BN) inference (Pereira & Han, 2009c,
2011b). To begin with, we provide some basic
definitions regarding BNs needed for further
understanding of the system.
Bayesian Networks
Definition 2 (Bayesian Network): A Bayesian
Network (BN) is a pair consisting of a di-
rected acyclic graph (DAG) whose nodes
represent variables and missing edges en-
code conditional independencies between
the variables, and an associated probability
distribution satisfying the Markov assump-
tion of conditional independence, saying
that variables are independent of non-
descendants given their parents in the graph
(Pearl, 1988, 2000).
In a BN, associated with each node of its
DAG is a specification of the distribution of its
variable, say A, conditioned on its parents in
the graph (denoted by pa(A))—i.e., P(A|pa(A))
is specified. If pa(A) is empty (A is called root
node), its unconditional probability distribution,
P(A), is specified. These distributions are called
the Conditional Probability Distribution (CPD)
of the BN.
The joint distribution of all node values can be determined as the product of the conditional probabilities of each node's value given its parents:

$P(X_1, \ldots, X_N) = \prod_{i=1}^{N} P\big(X_i \mid pa(X_i)\big)$ (1)
where V = {X1, …, XN} is the set of nodes of the DAG. Suppose there is a set of evidence nodes (i.e., nodes whose values are observed) in the DAG, say O = {O1, …, Om} ⊆ V. We can determine the conditional probability distribution of a variable X given the observed values of the evidence nodes by using the conditional probability formula

$P(X \mid O) = \frac{P(X, O)}{P(O)} = \frac{P(X, O_1, \ldots, O_m)}{P(O_1, \ldots, O_m)}$ (2)

where the numerator and denominator are computed by summing the joint probabilities over all absent variables with respect to V, as follows:

$P(X = x, O = o) = \sum_{av_1 \in ASG(AV_1)} P(X = x, O = o, AV_1 = av_1)$

$P(O = o) = \sum_{av_2 \in ASG(AV_2)} P(O = o, AV_2 = av_2)$

where o = {o1, …, om}, with o1, …, om being the observed values of O1, …, Om, respectively; ASG(Vt) denotes the set of all assignments of the vector Vt (whose components are variables in V); and AV1, AV2 are the vectors whose components are the corresponding absent variables, i.e. the variables in V \ (O ∪ {X}) and V \ O, respectively.
Bayesian Networks for
Intention Recognition
In (Pereira & Han, 2009c, 2011b), a general BN
model for intention recognition is presented and
justified based on Heinze’s causal intentional
model (Heinze, 2003; Tahboub, 2006). Basically,
the BN consists of three layers: cause/reason nodes
in the first layer (called pre-intentional), connect-
ing to intention nodes in the second one (called
intentional), in turn connecting to action nodes
in the third (called activity) (Figure 1).
In general, intention recognition consists in
computing the probabilities of each conceivable
intention conditional on the current observations,
including the observed actions in the third layer,
and some of the causes/reasons in the first layer.
The prediction of what is the intention of the
observed agent can simply be the intention with
the greatest conditional probability, possibly above
some minimum threshold. Sometimes it is also
useful to predict what are the N (N ≥ 2) most
likely intentions given the current observations
(Armentano & Amandi, 2009; Blaylock & Allen,
2003; Han & Pereira, 2011a).
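To make this concrete, instantiate Equation (1) on the three-layer structure of Figure 1, assuming for simplicity a single cause node C, intention nodes I_1, …, I_k and action nodes A_1, …, A_m (a simplification of ours; actual networks may have several cause nodes):

$P(C, I_1, \ldots, I_k, A_1, \ldots, A_m) = P(C)\,\prod_{j=1}^{k} P(I_j \mid C)\,\prod_{l=1}^{m} P(A_l \mid I_1, \ldots, I_k)$

The conditional probability of each intention given the observed actions (and possibly some observed causes), P(I_j | A_1, …, A_m), is then obtained from this joint distribution via Equation (2).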
Example 2 (Fox-Crow): Consider the Fox-Crow
story, adapted from Aesop’s fable (Aesop).
There is a crow, holding a cheese. A fox, being
hungry, approaches the crow and praises her,
hoping that the crow will sing and the cheese
will fall down near him. Unfortunately for
the fox, the crow is very intelligent, having
the ability of intention recognition.
Figure 1. General structure of a Bayesian network
for intention recognition. The Bayesian network
consists of three layers. The pre-intentional layer
consists of cause/reason nodes, connecting to
intention nodes in the intentional layer, which in
turn connect to action nodes in the activity layer.
The BN for recognizing Fox's intention is depicted in Figure 2. The initial possible intentions of Fox that Crow comes up with are: Food (i(F)), Please (i(P)) and Territory (i(T)). The
facts that might give rise to those intentions are
how friendly the Fox is (Friendly_fox) and how
hungry he is (Hungry_fox). These figure in the
first layer of the BN as the causes/reasons of the
intention nodes. Currently, there is only one ob-
servation, which is that Fox praised Crow (Praised).

Figure 2. Bayesian network for Fox's intention recognition.
In this work, Bayesian Network inference will
be performed using P-log, a probabilistic logic
system, described in the next section. This not only allows us to effectively represent the causal relations present in a BN for intention recognition; the logic-based implementation of P-log also allows an easy integration with the EP system.
P-Log
The P-log system in its original form (Baral, et
al., 2009) uses Answer Set Programming (ASP)
as a tool for computing all stable models (Baral,
2003; Gelfond & Lifschitz, 1993) of the logical
part of P-log. Although ASP has proven a useful
paradigm for solving a variety of combinatorial
problems, its non-relevance property (Castro,
Swift, & Warren, 2007) makes the P-log system
sometimes computationally redundant. A new
implementation of P-log (Han, et al., 2008; Han,
Carroline, & Damasio, 2009), which we deploy
in this work, uses the XASP package of XSB
Prolog (XSB, 2009) for interfacing with Smodels
(Niemela & Simons, 1997), an answer set solver.
The power of ASP allows the representation of
both classical and default negation, to produce
2-valued models. Moreover, using XSB as the
underlying processing platform enables collect-
ing the relevant abducibles for a query, obtained
by need with top-down search. Furthermore,
XSB permits embedding arbitrary Prolog code
for recursive definitions. Consequently, it allows
more expressive queries not supported in the
original version, such as meta-queries (proba-
bilistic built-in predicates can be used as usual
XSB predicates, thus allowing the full power of
probabilistic reasoning in XSB) and queries in
the form of any XSB predicate expression (Han,
et al., 2008). In addition, the tabling mechanism
of XSB (Swift, 1999) significantly improves the
performance of the system.
In general, a P-log program Π consists of a
sorted signature, declarations, a regular part, a set
of random selection rules, a probabilistic informa-
tion part, and a set of observations and actions.
Sorted Signature and Declaration: The sorted
signature Σ of Π contains a set of constant symbols
and term-building function symbols, which are
used to form terms in the usual way. Addition-
ally, the signature contains a collection of special
function symbols called attributes. Attribute terms
are expressions of the form a(t), where a is an
attribute and t is a vector of terms of the sorts
required by a. A literal is an atomic expression,
p, or its explicit negation, neg_p.
The declaration part of a P-log program can be
defined as a collection of sorts and sort declarations
of attributes. A sort c can be defined by listing all
the elements c= {x1,…,xm} or by specifying the
range of values c= {L..U}, where L and U are the
integer lower bound and upper bound of the sort
c. Attribute a with domain c1×... ×cn and range
c0 is represented as follows:
a:c1 ×... × cn --> c0
If attribute a has no domain parameter, we
simply write a: c0. The range of attribute a is
denoted by range(a).
Regular Part: This part of a P-log program
consists of a collection of XSB Prolog rules, facts
and integrity constraints (IC) formed using literals
of Σ. An IC is encoded as an XSB rule with the
false literal in the head.
Random Selection Rule: This is a rule for at-
tribute a having the form:
random(RandomName,a(t),DynamicRange):-
Body.
This means that the attribute instance a(t) is
random if the conditions in Body are satisfied. The
DynamicRange allows us to restrict the default
range for random attributes. The RandomName
is a syntactic mechanism used to link random
attributes to the corresponding probabilities. A
constant full can be used in DynamicRange to
signal that the dynamic range is equal to range(a).
Probabilistic Information: Information about
probabilities of random attribute instances a(t)
taking a particular value y is given by probability
atoms (or simply pa-atoms) which have the fol-
lowing form:
pa(RandomName, a(t,y), d_(A,B)) :- Body
meaning that if the Body were true, and the value of
a(t) were selected by a rule named RandomName,
then Body would cause a(t) = y with probability
A/B. Note that the probability of an atom a(t,y)
will be directly assigned if the corresponding pa/3
atom is the head of some pa-rule with a true body.
To define probabilities of the remaining atoms
we assume that, by default, all values of a given
attribute, which are not assigned a probability,
are equally likely.
Observations and Actions: These are, respec-
tively, statements of the forms obs(l) and do(l),
where l is a literal. Observations obs(a(t,y)) are
used to record the outcomes y of random events
a(t), i.e. random attributes and attributes dependent
on them. Statement do(a(t,y)) indicates a(t) = y
is enforced as the result of a deliberate action.
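Putting the above constructs together, a small self-contained P-log sketch (our own illustration, in the same style as Box 1 below; the attributes are hypothetical) could be:

weather = {rain, dry}. bool = {t, f}.
forecast : weather. sprinkler_on : bool.
random(rw, forecast, full). random(rs, sprinkler_on, full).
pa(rw, forecast(rain), d_(3,10)).
pa(rs, sprinkler_on(t), d_(1,10)) :- forecast(rain).
pa(rs, sprinkler_on(t), d_(6,10)) :- forecast(dry).
obs(forecast(rain)).

It declares two sorts and two random attributes, gives their probabilistic information through pa-rules, and records the observation that the forecast is rain.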
In an EP program, P-log code is embedded
by putting it between two reserved keywords
beginPlog and endPlog. In P-log, probabilistic
information can be obtained using the XSB Prolog
built-in predicate pr/2 (Han, et al., 2008). Its first
argument is the query whose probability is to be computed. The second argument captures
the result. Thus, probabilistic information can be
easily embedded by using pr/2 like a usual Pro-
log predicate, in any constructs of EP programs,
including active goals, preferences, and integrity
constraints. What is more, since P-log (Han, et al.,
2008) allows us to code Prolog probabilistic meta-
predicates (Prolog predicates that depend on pr/2
predicates), we also can directly use probabilistic
meta-information in EP programs. We will illus-
trate those features with several examples below.
Example 3 (Fox-Crow): The BN for Fox’s inten-
tion recognition (Figure 2) can be coded with
the P-log program in Box 1.
Two sorts bool and fox_intentions, in order to
represent Boolean values and the current set of
Fox’s conceivable intentions, are declared in part
1. Part 2 is the declaration of four attributes hun-
gry_fox, friendly_fox, praised and i: the first three attributes have no domain parameter and take Boolean values, and the last one maps each of Fox's conceivable intentions to a Boolean value. The
random selection rules in part 3 declare that these
four attributes are randomly distributed in their
ranges. The distributions of the top nodes (hungry_fox, friendly_fox) and the CPD corresponding to the BN in Figure 2 are given in part 4 and in parts 5-8, respectively, using the probabilistic information pa-rules.
Box 1.
1. bool = {t,f}. fox_intentions = {food,please,territory}.
2. hungry_fox : bool. friendly_fox : bool.
i : fox_intentions --> bool. praised : bool.
3. random(rh, hungry_fox, full). random(rf, friendly_fox, full).
random(ri, i(I), full). random(rp, praised, full).
4. pa(rh,hungry_fox(t),d_(1,2)).
pa(rf,friendly_fox(t),d_(1,100)).
5. pa(ri(food),i(food,t),d_(8,10)) :- friendly_fox(t),hungry_fox(t).
pa(ri(food),i(food,t),d_(9,10)) :- friendly_fox(f),hungry_fox(t).
pa(ri(food),i(food,t),d_(0.1,10)) :- friendly_fox(t),hungry_fox(f).
pa(ri(food),i(food,t),d_(2,10)) :- friendly_fox(f),hungry_fox(f).
6. pa(ri(please),i(please,t),d_(7,10)) :- friendly_fox(t),hungry_fox(t).
pa(ri(please),i(please,t),d_(1,100)) :- friendly_fox(f),hungry_fox(t).
pa(ri(please),i(please,t),d_(95,100)) :- friendly_fox(t),hungry_fox(f).
pa(ri(please),i(please,t),d_(5,100)) :- friendly_fox(f),hungry_fox(f).
7. pa(ri(territory),i(territory,t),d_(1,10)) :- friendly_fox(t).
pa(ri(territory),i(territory,t),d_(9,10)) :- friendly_fox(f).
8. pa(rp, praised(t),d_(95,100)) :- i(food, t), i(please, t).
pa(rp, praised(t),d_(6,10)) :- i(food, t), i(please, f).
pa(rp, praised(t),d_(8,10)) :- i(food, f), i(please, t).
pa(rp, praised(t),d_(1,100)) :- i(food, f), i(please,f), i(territory,t).
pa(rp, praised(t),d_(1,1000)) :- i(food,f), i(please,f), i(territory,f).
For example, in part 4 the first rule
says that fox is hungry with probability 1/2 and
the second rule says he is friendly with probabil-
ity 1/100. The first rule in part 5 states that if Fox
is friendly and hungry, the probability of him
having intention Food is 8/10.
Note that the probability of an atom a(t,y) will
be directly assigned if the corresponding pa/3 atom
is in the head of some pa-rule with a true body.
To define probabilities of the remaining atoms
we assume that by default, all values of a given
attribute which are not assigned a probability are
equally likely. For example, the first rule in part 4
implies that fox is not hungry with probability 1/2.
And, actually, we can remove that rule without
changing the probabilistic information since, in
that case, the probability of fox being hungry and
of not being hungry are both defined by default,
thus, equal to 1/2.
The probabilities of Fox having intention Food,
Territory and Please given the observation that
Fox praised Crow can be found in P-log with the
queries in Box 2, respectively.

Box 2.
?− pr(i(food,t) ′|′ obs(praised(t)),V1). The answer is: V1 = 0.9317.
?− pr(i(territory,t) ′|′ obs(praised(t)),V2). The answer is: V2 = 0.8836.
?− pr(i(please,t) ′|′ obs(praised(t)),V3). The answer is: V3 = 0.0900.
From the result of Box 2, we can say that Fox
is most likely to have the intention of deceiving
the Crow for food, i(food).
INTENTION-BASED
DECISION MAKING
There are several ways an EP agent can benefit
from the ability to recognize intentions of other
agents, both in friendly and hostile settings. Know-
ing the intention of an agent is a means to predict
what he will do next or might have done before.
The recognizing agent can then plan in advance
to take the best advantage of the prediction, or
act to take remedial action. Technically, in the
EP system, this new kind of knowledge may im-
pinge on the body of any EP constructs, such as
active goals, expectation and counter-expectation
rules, preference rules, integrity constraints, etc.,
providing a new kind of trigger.
In order to account for intentions of other agents
in decision making with EP, we provide a built-
in predicate, has_intention(Ag,I), stating that an
agent Ag has the intention I. The truth-value of this
predicate is evaluated by the intention recognition
system. Whenever this predicate is called in an
EP program, the intention recognition system is
employed to check if Ag has intention I, i.e. I is the
most likely conceivable intention at that moment.
We also provide predicate has_intention(Ag,I,Pr),
stating that agent Ag has intention I with prob-
ability Pr. Hence, one can express, for example,
the situation where one needs to be more, or less,
cautious.
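For illustration (these rules are ours; elder, assist, warn_caregiver and dangerous/1 are hypothetical), the probability argument lets the degree of caution be tuned, e.g. by triggering an active goal only above a confidence threshold, or with a much lower threshold when the recognized intention is a dangerous one:

on_observe(assist(G)) ← has_intention(elder, G, Pr), Pr > 0.8.
on_observe(warn_caregiver(G)) ← dangerous(G), has_intention(elder, G, Pr), Pr > 0.3.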
One can also generalize to consider the N-best
intention recognition approach, that is, to assess
whether the intention of the agent is amongst the
N most likely intentions. It has been shown that
by increasing N, the recognition accuracy is sig-
nificantly improved (Armentano & Amandi, 2009;
Blaylock & Allen, 2003; Han & Pereira, 2011a).
In the sequel we draw closer attention to some
EP constructs, illustrating with several examples
how to take into account intentions of other agents
for enhancement of decision making.
Intentions Triggering Active Goals
Recall that an active goal has the form
on_observe(AG) ← L1, …, Lt (t ≥ 0)
where L1,…, Lt are domain literals. At the begin-
ning of each cycle of evolution, those literals
are checked with respect to the current evolving
knowledge base and trigger the active goal if they
all hold. For intention triggering active goals, the
domain literals in the body can be in the form
of has_intention predicates, taking into account
intentions of other agents.
This way, any intention recognition system can
be used as the goal producer for decision making
systems, the inputs of which are (active) goals to
be solved (see for instance (Han & Pereira, 2011b);
Pereira & Han, (2011a, 2011b)).
It is easily seen that intention triggering ac-
tive goals are ubiquitous. New goals often appear
when one recognizes some intentions in others.
In a friendly setting, one might want to help oth-
ers to achieve their intention, which is generally
represented as follows
on_observe(help_achieve_goal(G)) ← friend(P),
has_intention(P,G)
while in a hostile setting, we probably want to
prevent the opponents from achieving their goals
on_observe(prevent_achieve_goal(G)) ←
opponent(P), has_intention(P,G)
Or, perhaps we simply want to plan in advance
to take advantage of the hypothetical future ob-
tained when the intending agent employs the plan
that achieves his intention
on_observe(take_advantage(F)) ← agent(P),
has_intention(P,G), future(employ(G),F).
Let us look a little closer at each setting, provid-
ing some ideas how they can be enacted. When
helping someone to achieve an intention, what
we need to do is to help him/her with executing a
plan achieving that intention successfully, i.e., all
the actions involved in that plan can be executed.
This usually occurs in multi-agent collaborative
tasks (see for example (Kaminka, et al., 2002)),
wherein the agents need to be able to recognize
their partners’ intention to secure an efficient
collaboration.
In contrast, in order to prevent an intention
from being achieved, we need to guarantee that
any conceivable plans achieving that intention
cannot be executed successfully. To that effect,
at least one action in each plan must be prevented
if the plan is conformant (i.e., a sequence of ac-
tions (Phan Huy Tu, Son, Gelfond, & Morales,
2011)). If the plan is conditional (see (Pereira
& Han, 2009c; P. H. Tu, Son, & Baral, 2007)),
each branch is considered a conformant plan and
must be prevented.
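As a schematic illustration (ours; steal_cheese, approach, grab and block/1 are hypothetical), suppose the opponent's recognized intention steal_cheese has a single conformant plan consisting of the actions approach and grab; then blocking either action defeats the plan:

abds([block/1]).
expect(block(approach)). expect(block(grab)).
on_observe(prevent_achieve_goal(steal_cheese)) ← opponent(P), has_intention(P, steal_cheese).
prevent_achieve_goal(steal_cheese) ← block(approach).
prevent_achieve_goal(steal_cheese) ← block(grab).

A conditional plan would require an analogous set of rules for each of its branches.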
We shall exhibit a diversity of examples in the
following sections.
Intention Triggering Preferences
Having recognized an intention of another agent,
the recognizing agent may either favor or disfavor
an abducible (a priori preferences), an abductive
solution (a posteriori preferences) or an evolution
(evolution result a posteriori preferences) with
respect to another, respectively; depending on the
setting they are in. If they are in a friendly setting,
the one that provides more support to achieve
the intention is more favored; in contrast, in a
hostile setting, the one providing more support
is disfavored. The recognizing agent may also
favor the one that takes better advantage of the
recognized intention.
To illustrate the usage of intention triggering
a priori preferences, we revise here Example 1.
Example 4 (Choose tea or coffee taking into
account a friend’s intentions): Being
thirsty, I consider making tea or coffee. I
realize that my roommate, John, also wants
to have a drink. To be friendly, I want to take
into account his intention when making my
choice. This scenario is represented with the
EP program in Box 3.

Box 3.
1. abds([coffee/0, tea/0]).
2. expect(coffee). expect(tea).
3. on_observe(drink) ← thirsty.
drink ← tea.
drink ← coffee.
← tea, coffee.
4. expect_not(coffee) ← blood_high_pressure.
5. tea <| coffee ← has_intention(john,tea).
coffee <| tea ← has_intention(john,coffee).
It is enacted by the preference rules in part 5.
The first rule says that tea is preferable, a priori,
to coffee if John intends to drink tea; and vice
versa, the second rule says that if John intends to
drink coffee, coffee is preferable. Note that the
recognition of what John intends is performed by
the intention recognition system—which is trig-
gered when a reserved predicate has_intention/2
is called.
This scenario also can be encoded using inten-
tion triggering a posteriori preferences. As a good
friend of John, I prefer an abductive solution with
a side effect of John being happy to the one with
a side effect of John being unhappy. This can be
coded as in Box 4.

Box 4.
unhappy ← coffee, has_intention(john, tea).
happy ← coffee, has_intention(john, coffee).
unhappy ← tea, has_intention(john, coffee).
happy ← tea, has_intention(john, tea).
Ai << Aj ← holds_given(happy, Ai), holds_given(unhappy, Aj).
Despite its simplicity, the example demon-
strates how to solve a class of collaborative situ-
ations, where one would like to take into account
the intentions and the needs of others when deriv-
ing relevant hypothetical solutions of our current
goals.
Next, to illustrate other kinds of preferences,
we consider the following revised extended version
of the saving city example, presented in (Pereira
& Han, 2009b).
Example 5 (Saving cities by means of inten-
tion recognition): During war time, agent
David, a general, needs to decide whether to save a city from his enemy's attack or to leave it in order to keep military resources, which might be important for some future purpose. David has recognized that a third party is intending to attack the enemy on the next day. David will have a good chance to
defeat the enemy if he has enough military
resource to coordinate with the third party.
The described scenario is coded with the EP
program in Box 5.

Box 5.
1. abds([save/0, leave/0]).
2. expect(save). expect(leave).
3. on_observe(choose) ← has_intention(enemy,attack_my_city).
choose ← save.
choose ← leave.
4. save_men(5000) ← save. save_men(0) ← leave.
lose_resource ← save. save_resource ← leave.
5. Ai << Aj ← holds_given(save_men(Ni), Ai),
holds_given(save_men(Nj), Aj), Ni > Nj.
6. on_observe(decide) ← decide_strategy.
decide ← stay_still.
decide ← counter_attack.
7. good_opportunity ← has_intention(third_party,attack).
expect(counter_attack) ← good_opportunity, save_resource.
expect(stay_still).
8. pr(win,0.9) ← counter_attack.
pr(win,0.01) ← stay_still.
9. Ei <<< Ej ← holds_in_evol(pr(win,Pi), Ei),
holds_in_evol(pr(win,Pj), Ej), Pi > Pj.
In the first cycle of evolution, there are two
abducibles, save and leave, declared in part 1, to
solve the active goal choose. The active goal is
triggered when David recognizes the intention of
the enemy to attack his city (part 3).
Similar to the original version in (Pereira &
Han, 2009b), were he a poor general who sees only the situation at hand, David would choose to save the city since it would save more people (5000 vs. 0, part 4), i.e. the a posteriori preference in part 5 is taken into account immediately, ruling out the choice of leaving the city since it would save fewer people. Then, the next day,
he would not be able to attack since the military
resource is not saved (part 7), and that leads to the
outcome with very small probability of winning
the whole war (part 8).
But, fortunately, being able to look ahead and being capable of intention recognition, David can
see that on the next day, if he has enough mili-
tary resources, he will have a good opportunity
to make a counter-attack on his enemy (part 7),
by coordinating with a third party who exhibits
the intention to attack the enemy on that day as
well; and a successful counter-attack would lead
to a very much higher probability of winning the
conflict as a whole (part 8). The evolution result
a posteriori preference is employed in part 9 to
prefer the evolution with higher probability of
winning the whole conflict.
In this example we can see, in part 7, how a
detected intention of another agent can be used
to enhance the decision making process. It is
achieved by providing an (indirect) trigger for an
abducible expectation—thereby enabling a new
opportunistic solution by means of coordinating
with others —which affects the final outcome
of the evolution result a posteriori preference
in part 9.
Hostile Setting
In this hostile setting, having confirmed the inten-
tion (and possibly also the plans achieving that
intention being carried out by the intending agent),
the recognizing agent might act to prevent the
intention from being achieved, that is, prevent at
least one action of each intention achieving plan
from being successfully executed; and, if that is impossible, act to minimize losses
as much as possible.
Example 6 (Fox-Crow, cont’d): Suppose in Ex-
ample 2, the final confirmed Fox’s intention
is that of getting food (additional details can
be found in (Pereira & Han, 2009c)). That
is, the predicate has_intention(fox,food)
holds. Having recognized Fox’s intention,
what should Crow do to prevent Fox from
achieving it? The EP program in Box 6 helps
Crow with that.

Box 6.
1. abds([decline/0, sing/0, hide/2, eat/2, has_food/0, find_new_food/0]).
2. expect(decline). expect(sing).
expect(hide(_,_)). expect(eat(_,_)).
3. on_observe(not_losing_cheese) ← has_intention(fox, food).
not_losing_cheese ← decline.
not_losing_cheese ← hide(crow,cheese), sing.
not_losing_cheese ← eat(crow,cheese), sing.
4. expect_not(eat(A,cheese)) ← animal(A), full(A).
animal(crow).
5. ← decline, sing.
← hide(crow,cheese), eat(crow,cheese).
6. eat(crow,cheese) <| hide(crow,cheese).
7. no_pleasure ← decline.
has_pleasure ← sing.
8. Ai << Aj ← holds_given(has_pleasure,Ai),
holds_given(no_pleasure,Aj).
9. on_observe(feed_children) ← hungry(children).
feed_children ← has_food.
feed_children ← find_new_food.
← has_food, find_new_food.
10. expect(has_food) ← decline, not eat(crow,cheese).
expect(has_food) ← hide(crow,cheese), not stolen(cheese).
expect(find_new_food).
11. Ei <<< Ej ← hungry(children), holds_in_evol(has_food,Ei),
holds_in_evol(find_new_food,Ej).
12. Ei <<< Ej ← holds_in_evol(has_pleasure,Ei),
holds_in_evol(no_pleasure,Ej).
There are two possible ways not to lose the food to Fox: either simply decline to sing (but
thereby missing the pleasure of singing) or hide
or eat the cheese before singing.
Part 1 is the declaration of program abducibles
(the last two abducibles are for the usage in the
second phase, starting from part 9). All of them are
always expected (part 2). The counter-expectation
rule in part 4 states that an animal is not expected
to eat if he is full. The integrity constraints in part
5 say that Crow cannot decline to sing and sing,
hide and eat the cheese, at the same time. The a
priori preference in part 6 states that eating the
cheese is always preferred to hiding it (since it
may be stolen), of course, just in case eating is a
possible solution.
Suppose Crow is not full. Then, the counter-
expectation in part 4 does not hold. Thus, there
are two possible abductive solutions: [decline]
and [eat(crow,cheese), sing] (since the a priori
preference prevents the choice containing hiding).
Next, the a posteriori preference in part 8 is
taken into account and rules out the abductive
solution containing decline since it leads to hav-
ing no_pleasure, which is less preferred than has_pleasure—the consequence of the second solu-
tion that contains sing (part 7). In short, the final
solution is that Crow eats the cheese then sings,
without losing the cheese to Fox and having the
pleasure of singing.
Now, let us consider a smarter Crow who is
capable of looking further ahead into the future
in order to solve longer-term goals. Suppose that
Crow knows that her children will be hungry later
on, in the next stage of evolution (part 9); eating
the cheese right now would make her have to find
new food for the hungry children. Finding new
food may take long, and is always less favorable
than having food ready to feed them right away
(cf. the evolution result a posteriori preference in
part 11). Crow can see three possible evolutions:
[[decline], [has_food]]; [[hide(crow, cheese),
sing], [has_food]] and [[eat(crow, cheese), sing],
[find_new_food]]. Note that in looking ahead
at least two steps into the future, a posteriori
preferences are taken into account only after all
evolution-level ones have been applied (Pereira
& Han, 2009b).
Now the two evolution result a posteriori
preferences in parts 11-12 are taken into account.
The first one rules out the evolution including
finding new food since it is less preferred than the
other two, which include has_food. The second
one rules out the one including decline. In short,
Crow will hide the food to keep it for her hungry
children, and still take pleasure from singing.
In short, we have seen several extended
examples illustrating diverse ways in which ac-
counting for intentions of others might, in a simple
manner, significantly enhance the final outcome
of a decision situation. In the next sections we
pay attention to concrete application domains,
wherein we address issues on which intention-
based decision making may enable improvement,
and show how to tackle them using our described
logic-based framework. Namely, the more technologically oriented application domains, those of Ambient Intelligence in the home environment and of Elder Care, will be studied in the next section. Then, in Section 5, the more experimentally oriented domains of moral reasoning and
game theory, are given attention.
AMBIENT INTELLIGENCE IN
THE HOME ENVIRONMENT
AND ELDER CARE
Ambient Intelligence (AmI) is the vision of a
future in which environments support people
inhabiting them. The envisaged environment is
unobtrusive, interconnected, adaptable, dynamic,
embedded and intelligent. It should be sensitive to
the needs of inhabitants, and capable of anticipat-
ing their needs and behavior. It should be aware
of their personal requirements and preferences,
and interact with people in a user-friendly way
(see a comprehensive survey in (Sadri, 2011a)).
One of the key issues of Ambient Intelligence,
which has not been well studied yet and is reported
as an ongoing challenge (Cook, Augusto, &
Jakkula, 2009), is that AmI systems need to be
aware of users’ preferences, intentions and needs.
Undoubtedly too, respecting users’ preferences
and needs in decision making processes would
increase their degree of acceptance with respect to
the systems, making them appear more friendly
and thoughtful.
From this perspective on AmI, we can see
a number of issues where intention recognition
techniques can step in, providing help and enabling
improvement. For example, in order to provide
appropriate support, the environment should be
able to proactively recognize the inhabitants’
intention—to glean whether they need help to
accomplish what they intend to do—or to warn
them (or their carers) in case they intend something
inappropriate or even dangerous.
Undoubtedly, an ability to recognize inten-
tions of assisted people, as well as other relevant
concerns such as intruders or the like, would enable dealing with a combination of several issues,
encompassing those of pro-activeness (either ago-
nistic or antagonistic), security, and emergency,
in a much more integrated and timely manner
(Han & Pereira, 2010a, 2010b; P. Roy, Bouchard,
Bouzouane, & Giroux, 2007). We discuss these
very issues in the sequel.
Proactive Support
An important feature of AmI, particularly desir-
able in the Elder Care domain, is that the assisting
system should take initiative to help the people it
assists. To this end, the system must be capable of
recognizing their intentions on the basis of their
observable actions, then provide suggestions or
help achieve the recognized intentions (Pereira
& Han, 2011a, 2011b). A suggestion can be, for
example, about the appropriate kinds of drink
for the elder, considering the current time, tem-
perature, or even future scheduled events such as
going to have a medical test on the next day, upon
having recognized that he has an intention to drink
something. Or, a suggestion can simply be telling
the elder where he put his book yesterday, having
recognized that he might be looking for it. This
feature is especially desirable and important when
the assisted people are elderly or individuals with
disabilities or suffering from mental difficulties
(P. Roy, et al., 2007). The need for technology
in this area is obvious, given that in
the last twenty years there has been a significant
increase of the average age of the population in
most western countries and the number of elderly
people has been and will be constantly growing
(Cesta & Pecora, 2004; Cook, et al., 2009; Geib,
2002; Giuliani, et al., 2005; Haigh, et al., 2004;
Han & Pereira, 2010a; Pereira & Han, 2011a; P.
Roy, et al., 2007; Sadri, 2008).
The EP system can be engaged to provide
appropriate suggestions for the elders, taking
into account the external environment, elders’
preferences and already scheduled future events.
Expectation rules and a priori preferences cater
for the physical state information (health reports)
of the elders, in order to guarantee that only
contextually safe healthy choices are generated;
subsequently, information such as the elders’
pleasure and interests are then considered by a
posteriori preferences and the like.
In the Elder Care domain, assisting systems
should be able to provide contextually appro-
priate suggestions for the elders based on their
recognized intentions. The assisting system is
supposed to be better aware of the environment,
the elders’ physical states, mental states as well
as their scheduled events, so that it can provide
good and safe suggestions, or simply warnings.
Let us consider the following simple scenario
in the Elder Care domain.
Example 7 (Elder Care): An elder stays alone in
his apartment. The intention recognition sys-
tem observes that he is looking for something
in the living room. In order to assist him, the
system needs to figure out what he intends
to find. The possible things are: something
to read (book); something to drink (drink);
the TV remote control (Rem); and the light
switch (Switch). The BN for recognizing the
elder’s intention, with CPD and top nodes
distribution, is given in Figure 3.
Similarly to the P-log representation and infer-
ence in Example 3, the probabilities that the elder
has the intention of looking for book, drink, remote
control and light switch given the observations
that he is looking around and of the states of the
light (on or off) and TV (on or off) can be obtained
with the queries in Box 7, respectively. In these queries, S1 and S2 are Boolean values (t or f), to be instantiated at execution time with the current states of the TV and the light, respectively.

Box 7.
?− pr(i(book, t) | (obs(tv(S1)) & obs(light(S2)) & obs(look(t))), V1).
?− pr(i(drink, t) | (obs(tv(S1)) & obs(light(S2)) & obs(look(t))), V2).
?− pr(i(rem, t) | (obs(tv(S1)) & obs(light(S2)) & obs(look(t))), V3).
?− pr(i(switch, t) | (obs(tv(S1)) & obs(light(S2)) & obs(look(t))), V4).

Let us consider the possible cases:

If the light is off (S2 = f), then V1 = V2 = V3 = 0 and V4 = 1.0, regardless of the state of the TV.
If the light is on and the TV is off (S1 = f, S2 = t), then V1 = 0.7521, V2 = 0.5465, V3 = 0.5036, V4 = 0.0101.
If both the light and the TV are on (S1 = t, S2 = t), then V1 = 0, V2 = 0.6263, V3 = 0.9279, V4 = 0.0102.

Figure 3. Bayesian network for recognizing the elder's intentions.

Thus, if one observes that the light is off, the elder is definitely looking for the light switch, given that he is looking around. Otherwise, if one
observes the light is on, in both cases where the
TV is either on or off, the first three intentions
book, drink, remote control still need to be put
under consideration in the next phase, generating
possible plans for each of them. The intention of
looking for the light switch is very unlikely to be
the case compared with the other three, thus being
ruled out. When there is light one goes directly
to the light switch if the intention is to turn it off,
without having to look for it.
Example 8 (Elder Care, cont’d): Suppose in
the above Elder Care scenario, the final
confirmed intention is that of looking for
a drink4. The possibilities are: natural pure
water, tea, coffee and juice. The EP system
now is employed to help the elder with
choosing an appropriate drink. The scenario
is coded with the EP program below.
The elder’s physical states are utilized in a pri-
ori preferences and expectation rules to guarantee
that just choices that are contextually safe for the
elder are generated. Only after that are other aspects, for example the elder's pleasure with respect to each kind of drink, taken into account, by the a posteriori preferences. See Box 8.
The information regarding the environment
(current time, current temperature) and the
physical states of the elder is coded in parts 9-11.
The assisting system is supposed to be aware of
this information in order to provide good sugges-
tions.
Part 1 is the declaration of the program abducibles: water, coffee, tea, and juice. All of them are, in this case, always expected (part 2). Part 3 exhibits an intention-triggered active goal: since the intention recognition module confirms that the elder's intention is to find something to drink, the EP system is triggered to seek appropriate suggestions for achieving that intention. The counter-expectation rules in part 4 state that coffee is not expected if the elder has high blood pressure, has difficulty sleeping, or if it is late; and juice is not expected if it is late. Note that the reserved predicate prolog/1 is used to allow embedding Prolog code, placed between the two built-in keywords beginProlog and endProlog, in an EP program. More details can be found in (Han, 2009; Pereira & Han, 2009a, 2009b). The integrity constraints in part 5 say that the following pairs of drinks are not allowed at the same time: tea and coffee, tea and juice, coffee and juice, and tea and water. However, the elder may have coffee or juice together with water.
The a priori preferences in part 6 say that in the morning coffee is preferred to tea, water, and juice. If it is hot, juice is preferred to all other kinds of drink and water is preferred to tea and coffee (part 7). In addition, the a priori preferences in part 8 state that if the weather is cold, tea is the most favorable, i.e. preferred to all other kinds of drink.
Now let us look at the suggestions provided by
the Elder Care assisting system modeled by this
EP program, considering some cases:
1. time(24) (late); temperature(16) (not hot, not cold); no high blood pressure; no sleep difficulty: there are two a priori abductive solutions: [tea], [water]. Final solution: [tea], since it has a greater pleasure level than water, which is ruled out by the a posteriori preference in part 12.
2. time(8) (morning time); temperature(16)
(not hot, not cold); no high blood pressure;
no sleep difficulty: there are two abductive
solutions: [coffee], [coffee, water]. Final:
[coffee], [coffee, water].
3. time(18) (not late, not morning time); tem-
perature(16) (not cold, not hot); no high
blood pressure; no sleep difficulty: there are
six abductive solutions: [coffee], [coffee,
water], [juice], [juice, water], [tea], and
[water]. Final: [coffee], [coffee, water].
4. time(18) (not late, not morning time); tem-
perature(16) (not cold, not hot); high blood
pressure; no sleep difficulty: there are four
abductive solutions: [juice], [juice, water],
[tea], and [water]. Final: [tea].
5. time(18) (not late, not morning time); tem-
perature(16) (not cold, not hot); no high
blood pressure; sleep difficulty: there are
four abductive solutions: [juice], [juice,
water], [tea], and [water]. Final: [tea].
6. time(18) (not late, not morning time); tem-
perature(8) (cold); no high blood pressure; no
sleep difficulty: there is only one abductive
solution: [tea].
7. time(18) (not late, not morning time); tem-
perature(35) (hot); no high blood pressure;
no sleep difficulty: there are two abductive
solutions: [juice], [juice, water]. Final:
[juice], [juice, water].
If the evolution result a posteriori preference in part 15 is taken into account and the elder is scheduled to go to the hospital for a health check on the following day, the first and the second cases do not change. In the third case, the suggestions are [tea] and [water], since the solutions containing coffee or juice would cause high caffeine and sugar levels, respectively, which can make the health check result imprecise (parts 13-15). The other cases are handled similarly.
Note that future events can be asserted as Prolog code using the reserved predicate scheduled_events/2. For more details of its use, see (Pereira & Han, 2009a, 2009b).
As one can gather, the suggestions provided by this assisting system are quite contextually appropriate. We might further elaborate the current factors (time, temperature, physical states), or consider additional ones, to provide even more appropriate suggestions should the situation become more complicated.
Security and Emergency
Security in AmI: Security is one of the key issues
for AmI success (Friedewald, et al., 2007), and
particularly important in home environments
(Friedewald, Costa, Punie, Alahuhta, & Heinonen,
2005). It comprises two important categories:
security in terms of Burglary Alarm systems and
security in terms of health and wellbeing of the
residents (prevention, monitoring) (Friedewald,
et al., 2005).
So far, Burglary Alarm technology has mainly been based on sensing and recognizing the very last action of an intrusion plan, such as "breaking the door" (Friedewald, et al., 2005; Wikipedia). However, that may be too late to provide appropriate protection. Burglary Alarm systems need to be able to guess in advance the possibility of an intrusion, on the basis of the very first observable actions of potential intruders. For example, it would be useful to find out how likely it is that a stranger constantly staring at your house has an intrusion intention, taking into account the particular situation, e.g. whether he has a weapon or whether it is night time. This information can be sent to the carer, the assistive system, or the elders themselves (if no carers or assistive systems are available), so that they can get prepared (e.g. turn on the lights or sounders to scare off burglars, or call relatives, the police, firemen, etc.). Our intention-based decision making system proves appropriate for dealing with this scenario.
Given any currently observed actions, the probabilities of the on-going conceivable intentions are computed; if that of the intrusion intention is large enough, or is among (some of) the most likely intentions, the EP component should be informed of a potential intrusion, so as to make a timely decision and issue suggestions to the elders. To be more certain about the possibility of an intrusion, additional observations may need to be made, but at least for now the system is about ready to handle any potentially negative forthcoming situations. Waiting until one is sure can be too late to take appropriate action. For illustration, consider the next example.
Example 9 (Solving Intrusion): Envisage a situ-
ation where the intention recognition system
recognized an intention of intrusion at night.
Box 8.
1. abds([water/0, coffee/0, tea/0, juice/0, precise_result/0, imprecise_result/0]).
2. expect(coffee). expect(tea).
expect(water). expect(juice).
3. on_observe(drink) ← has_intention(elder,drink).
drink ← tea. drink ← coffee.
drink ← water. drink ← juice.
4. expect_not(coffee) ← prolog(blood_high_pressure).
expect_not(coffee) ← prolog(sleep_difficulty).
expect_not(coffee) ← prolog(late).
expect_not(juice) ← prolog(late).
5. ← tea, coffee. ← coffee, juice.
← tea, juice. ← tea, water.
6. coffee <| tea ← prolog(morning_time).
coffee <| water ← prolog(morning_time).
coffee <| juice ← prolog(morning_time).
7. juice <| coffee ← prolog(hot).
juice <| tea ← prolog(hot).
juice <| water ← prolog(hot).
water <| coffee ← prolog(hot).
water <| tea ← prolog(hot).
8. tea <| coffee ← prolog(cold).
tea <| juice ← prolog(cold).
tea <| water ← prolog(cold).
9. pleasure_level(3) ← coffee. pleasure_level(2) ← tea.
pleasure_level(1) ← juice. pleasure_level(0) ← water.
10. sugar_level(1) ← coffee. sugar_level(1) ← tea.
sugar_level(5) ← juice. sugar_level(0) ← water.
11. caffein_level(5) ← coffee. caffein_level(0) ← tea.
caffein_level(0) ← juice. caffein_level(0) ← water.
12. Ai << Aj ← holds_given(pleasure_level(V1), Ai),
holds_given(pleasure_level(V2), Aj), V1 > V2.
13. on_observe(health_check) ← time_for_health_check.
health_check ← precise_result.
health_check ← imprecise_result.
14. expect(precise_result) ← no_high_sugar, no_high_caffein.
expect(imprecise_result).
no_high_sugar ← sugar_level(L), prolog(L < 2).
no_high_caffein ← caffein_level(L), prolog(L < 2).
The system must either warn the elders who
are sleeping, automatically call the nearest
police, or activate the embedded burglary
alarm. If the elders are sleeping and ill, they
do not expect to be warned, but prefer other
solutions. Due to potential disturbance, the
elders prefer simply activating the burglary
system to calling the police as long as no
weapon is detected and there is a single
intruder.
The situation is described by a program with three abducibles, call_police, warn_persons, and activate_alarm, and can be coded in EP as in Box 9.
Suppose it is night-time and an intrusion intention is recognized; then the active goal solve_intrusion (part 1) is triggered, and the EP system starts reasoning to find the most appropriate solutions. This program has three abductive solutions, [call_police], [warn_persons], and [activate_alarm], since all the abducibles are expected and there are no expectations to the contrary. Suppose the system detects that the elders are sleeping and are known to be ill, i.e. the literals ill and sleeping hold. In this case, the elders do not expect to be warned (part 4), thus ruling out the second solution, [warn_persons]. And if no weapon is detected and there is only a single intruder, the a priori preference in part 5 is triggered, which defeats the solution where only call_police is present (due to the impossibility of simultaneously abducing activate_alarm). Hence, the only solution is to activate the burglary alarm.
Box 8. Continued

15. Ei <<< Ej ← holds_in_evol(precise_result, Ei),
holds_in_evol(imprecise_result, Ej).
beginProlog.
: - assert(scheduled_events(1, [has_intention(elder,drink)])),
assert(scheduled_events(2, [time_for_health_check])).
late :- time(T), (T > 23; T < 5).
morning_time :- time(T), T > 7, T < 10.
hot :- temperature(TM), TM > 32.
cold :- temperature(TM), TM < 10.
blood_high_pressure :- physical_state(blood_high_pressure).
sleep_difficulty :- physical_state(sleep_difficulty).
endProlog.
Box 9.
1. on_observe(solve_intrusion) ← at_night, has_intention(stranger, intrusion).
2. solve_intrusion ← call_police.
solve_intrusion ← warn_persons.
solve_intrusion ← activate_alarm.
3. expect(call_police). expect(warn_persons). expect(activate_alarm).
4. expect_not(warn_persons) ← ill, sleeping.
5. activate_alarm <| call_police ← no_weapon_detected, individual.
6. call_police <| activate_alarm ← weapon_detected.
However, if a weapon is detected, the preference in part 6 is triggered and defeats the [activate_alarm] solution. The only solution left is then to call the police ([call_police]).
Regarding Burglary Alarm systems, in the
following example we consider a simple scenario
of recognizing an elder’s intentions.
Example 10 (Detecting Intrusion): An elder
stays alone in his apartment. One day the
Burglary Alarm is ringing, and the assisting
system observes that the elder is looking
for something. In order to assist him, the
system needs to figure out what he intends
to find. Possible things are: the Alarm button (AlarmB), Contact Device (ContDev), Defensible Weapon (Weapon), and light switch (Switch). The BN representing this
scenario is in Figure 4.
The nodes representing the conceivable intentions are i(AlarmB), i(ContDev), i(Weapon), and i(Switch). The Bayesian network for intention recognition has three top nodes in the pre-intentional level, representing the causes or reasons of the intentions: Alarm_On, Defensible, and Light_on. The first and last of these are evidence nodes, i.e. their values are observable. There is only one observable action, represented by the node Looking in the last layer; it is a direct child of the intention nodes. The conditional probability distribution (CPD) of each node in the BN is given. For example, the table of the node Defensible says that the elder is able to defend himself (with weapons) with probability 0.3, and unable to do so with probability 0.7. The table in the top-right corner provides the probability of the elder looking around for something, conditional on the intentions. Based on this BN one can now compute the conditional probability of each intention given the observed action.
Figure 4. Bayesian network for recognizing the elder's intentions in an intrusion situation

Another security issue concerns the health and well-being of the residents. AmI systems need to be able to prevent hazardous situations, which usually
come from dangerous ideas or intentions of the assisted persons, especially those with mental impairments (P. Roy, et al., 2007), such as taking a bath when drunk, drinking alcohol when not permitted, or even committing suicide. To this end, guessing their intentions from the very first relevant actions is indispensable for taking timely action. In our incremental intention recognition method, a BN is built to compute how likely a dangerous intention is, given the currently observed actions, and carers are informed whenever it is sufficiently likely, so that they can get prepared in time.
Emergency in AmI: Handling emergency situations is another important issue in AmI. There is a wide range of emergency situations, e.g. in security, when recognizing the intrusion intention of a stranger or dangerous intentions of the assisted person. They can also occur when detecting fire, unconsciousness, or unusual deviations from regular activities (e.g. sleeping for too long). Emergency handling in the EP system can be done by having an active goal rule for each kind of emergency situation. For solving the goal, a list of possible actions, all represented by abducible enablers, is available to form solutions. Then, users' preferences are encoded using all the kinds of preference of EP: a priori ones for preferring amongst available actions, a posteriori ones for comparing solutions taking into account their consequences and utility, and evolution result a posteriori ones for comparing more-than-one-step consequences. Moreover, the expectation and counter-expectation rules are used to encode the users' pros and cons towards each available action, or towards any abducible in general.
Discussion of Other AmI Issues: We have
shown how our intention-based decision making
framework can enable the provision of proactive
support for assisted people, and the tackling of
the AmI security and emergency issues. We now
briefly sketch how it can be utilized to address
yet other important issues in AmI.
First of all, it is known that intention recognition plays a central role in human communication (Heinze, 2003; Pinker, Nowak, & Lee, 2008; Tomasello, 2008). In addition, an important aspect of intentions is future-directedness, i.e. if we intend something now, we intend to execute a course of actions to achieve it in the future (Bratman, 1987; O. Roy, 2009b; Singh, 1991). Most of those actions may be executed only at a far distance in time. Thus, we usually need to guess others' intentions from the very first clues, such as their actions or spoken sentences, in order to secure a smooth conversation or collaboration. Perhaps we guess a wrong intention, but we need to be able to react in a timely manner; that too is part of the conversation, and we can simply attempt to confirm by asking, e.g. "is this (...) what you mean?". Our intention-based decision making framework can be used to design better and friendlier human-computer interaction devices that react to human behavior and speech and, after having guessed users' likely intentions using an intention recognition system, communicate with them to confirm those intentions, so as to provide appropriate help when necessary.
Yet another issue is that, in order to be highly accepted by its users, an assistive system should be able to proffer explanations for the suggestions it provides. In EP, that can easily be done by keeping track of all the preferences, integrity constraints, and expectation and counter-expectation rules that were used both to consider and to rule out abductive solutions.
OTHER DOMAINS:
INTENTION-BASED DECISION
MAKING IN MORAL REASONING
AND GAME THEORY
Intention-Based Decision
Making in Moral Reasoning
A key factor in legal and moral judgments is
intention (Hauser, 2007; Young & Saxe, 2011).
Intent differentiates, for instance, murder from
manslaughter. When making a moral decision, it is crucial to recognize whether an action or decision is intentional, or at least very likely to be intentional (so as, for instance, to be judged beyond reasonable doubt). Intentionality plays a central part in different moral rules, notably the double effect principle (Hauser, 2007; Mikhail, 2007), rendered as follows:
Harming another individual is permissible if it is
the foreseen consequence of an act that will lead
to a greater good; in contrast, it is impermissible
to harm someone else as an intended means to a
greater good.
This principle is particularly applicable for the
well-known trolley problems, having the following
initial circumstance (Hauser, 2007):
There is a trolley and its conductor has fainted.
The trolley is headed toward five people walking
on the track. The banks of the track are so steep
that they will not be able to get off the track in time.
Given this circumstance, there exist several
cases of moral dilemmas (Mikhail, 2007). Let
us consider the following three typical cases (il-
lustrated in Figure 5).
Bystander: Hank is standing next to a
switch that can turn the trolley onto a side-
track, thereby preventing it from killing the
five people. However, there is a man stand-
ing on the sidetrack. Hank can throw the
switch, killing him; or he can refrain from
doing so, letting the five die. Is it morally
permissible for Hank to throw the switch?
Footbridge: Ian is on the bridge over the trolley track, next to a heavy man, whom he can shove onto the track in the path of the trolley to stop it, preventing the killing of the five people. Ian can shove the man onto the track, resulting in his death; or he can refrain from doing so, letting the five die. Is it morally permissible for Ian to shove the man?
Loop Track: Ned is standing next to a
switch that can temporarily turn the trolley
onto a sidetrack, without stopping, only to
join the main track again. There is a heavy
man on the sidetrack. If the trolley hits the
man, he will slow down the trolley, giving
time for the five to escape. Ned can throw
the switch, killing the man; or he can re-
frain from doing so, letting the five die. Is
it morally permissible for Ned to throw the
switch?
Figure 5. Three trolley cases: (1) bystander; (2) footbridge; (3) loop track
The trolley problem suite has been used in tests to assess the moral judgments of subjects from demographically diverse populations (Hauser, 2007; Mikhail, 2007). Interestingly, although all three cases have the same goal, i.e. to save five at the cost of killing one, subjects come to different judgments on whether the action to reach the goal is permissible or impermissible: permissible for the Bystander case, but impermissible for the Footbridge and Loop Track cases. As reported by (Mikhail, 2007), these judgments appear to be widely shared among demographically diverse populations.
We show how the trolley problems can be modeled within our intention-based decision making framework, leading to outcomes that comply with the moral principle of double effect. In all three cases, as the action to be judged is given explicitly, one just has to decide whether the action is an intentional act of killing or not. The three-layered Bayesian network in Figure 6 is provided for this purpose. Since we are deciding whether the observed action O is an intentional killing act, we can easily define the CPD of O as P(O = t | IK) = 1 for all IK ∈ {t, f}.
Next, the CPD of IK is defined as follows: P(IK = t | IM = t, PR) = 1; P(IK = t | IM = f, PR = t) = 0.6; P(IK = t | IM = f, PR = f) = 0.
We now only need to focus on the prior probabilities of IM and PR. The P-log program representing this BN can be provided similarly to the one in Example 3.
In the original form of the trolley cases presented above, no personal reason is considered, so PR has prior probability 0. The prior probability of IM is 0 for the Bystander case and 1 for the other cases. Hence, the probability of intentional killing, i.e. P(IK = t | O = t), is 0 for the Bystander case and 1 for the other two cases.
Let us consider how to model the first two cases, those of the Bystander and the Footbridge. The Loop Track case can be handled similarly.
Example 11 (Bystander): In the following we
see how the Bystander case can be coded
using our intention-based decision making
framework (Box 9).
Figure 6. Bayesian network for intentional killing recognition. The node intentional killing (IK) in the intentional (middle) layer receives Boolean values (t or f), stating whether the observed action in the third layer is an intentional killing act. The node IK is causally affected by IM (intended means), stating whether the observed action is performed as an intended means to a greater good, and PR (personal reason), stating whether the action is performed due to a personal reason. Both IM and PR receive Boolean values.

Part 1 is the declaration of abducibles. Parts 6-7 model the principle of double effect. Namely, part 6 says it is impermissible to perform an action (here, throwing the switch) of intentional killing,
which is judged to be so if intentional killing is predicted by the model with a probability greater than a given threshold. This threshold depends on how certain the judgment needs to be; for instance, 0.95 for guilt beyond reasonable doubt. Part 7 says that the scenario involving the saving of more people is more favorable. When the train with the fainted conductor is coming, agent Hank has to decide either to watch the train go straight or to throw the switch (part 2). There is always the possible expectation to watch the train go straight, or the possible expectation to throw the switch, there being no expectations to the contrary (parts 3 and 4).
Because in this Bystander case the probability of intentional killing is 0, i.e. P(IK = t | O = t) = 0, there are two a priori abductive solutions: [watching, not throwing_switch] and [throwing_switch, not watching].
Next, the a posteriori preferences are taken into account to rule out the less preferred abductive solutions. Considering the a posteriori preference in part 7, the abductive solution including watching is ruled out, since it leads to the consequence of five people dying (part 3), which is less preferred than that of the solution including throwing_switch, which leads to the non-intentional killing of one person. In short, Hank's decision is to throw the switch to save the five people, although one person will die (unintentionally killed).
Now let us modify the original Bystander case to see how the factor 'personal reason' (PR) in the BN model may affect the moral judgment. Suppose there is a good chance, based on some evidence, that Hank wants to kill the person on the sidetrack: P(PR = t) = 0.85. Now the probability of intentional killing is P(IK = t | O = t) = 0.51. This is not enough to judge, beyond reasonable doubt, that Hank's action is one of intentional killing, but the probability is high enough to require further investigation to clarify the case.
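These numbers can be verified outside the P-log machinery. The following minimal Python sketch (illustrative only, and not part of the implemented system; the function name is ours) computes P(IK = t | O = t) directly from the CPD given above. Since P(O = t | IK) = 1 for both values of IK, conditioning on O = t leaves the marginal P(IK = t) unchanged.

# Minimal illustrative sketch (independent of the P-log implementation):
# verify P(IK = t | O = t) from the CPD given in the text.
def p_intentional_killing(p_im, p_pr):
    # P(IK=t | IM=t, PR) = 1; P(IK=t | IM=f, PR=t) = 0.6; P(IK=t | IM=f, PR=f) = 0
    return p_im * 1.0 + (1.0 - p_im) * (p_pr * 0.6 + (1.0 - p_pr) * 0.0)

print(p_intentional_killing(0.0, 0.0))   # original Bystander case: 0.0
print(p_intentional_killing(1.0, 0.0))   # Footbridge and Loop Track cases: 1.0
print(p_intentional_killing(0.0, 0.85))  # modified Bystander case: 0.51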
Example 12 (Footbridge): The footbridge case
can be coded with the program in Box 10.
Similarly to the previous case, part 1 provides the declaration of the program abducibles.
Box 9.
1. abds([watching/0, throwing_switch/0]).
2. on_observe(decide) ← train_coming.
decide ← watching.
decide ← throwing_switch.
← throwing_switch, watching.
3. expect(watching).
train_straight ← watching.
end(die(5)) ← train_straight.
4. expect(throwing_switch).
redirect_train ← throwing_switch.
end(die(1)) ← human(X), side_track(X), redirect_train.
5. side_track(john). human(john).
6. intentional_killing ← throwing_switch, has_intention(hank, kill, Pr), prolog(Pr > 0.95).
← intentional_killing.
7. Ai << Aj ← holds_given(end(die(N)), Ai ),
holds_given(end(die(K)), Aj), N < K.
There is always the expectation to watch the train go straight and no expectation to its contrary (part 2). However, the action of shoving an object is only possible if there is an object near Ian to shove (part 3). To make this case more interesting, we can place an additional heavy object, e.g. a rock, on the footbridge near Ian, and see whether our model of the moral rule still allows the reasoning to deliver moral decisions as expected. Similarly to the Bystander case, the double effect principle is modeled in parts 6 and 7.
If there is a person, named John, standing near Ian (part 5), then there is a possible expectation to shove John (part 3). However, shoving a human is an intentional killing action, which does not satisfy the integrity constraint in part 6, since the probability of intentional killing predicted by the BN model is 1: P(IK = t | O = t) = 1. Therefore, there is only one abductive solution, to merely watch the train go towards the five people: [watching, not shove(john)].
Now consider the same initial situation but where, instead of a person, a heavy inanimate object, a rock, stands near Ian (replace stand_near(john) in part 5 with stand_near(rock)). Now there is an expectation to shove the rock and, in addition, doing so is not an intentional killing. Thus, there are two abductive solutions: [watching, not shove(rock)] and [shove(rock), not watching]. Next, the a posteriori preference in part 7 is taken into account. The abductive solution including watching is ruled out, since it leads to the consequence of killing five people, which is less preferred than that of the solution including shove(rock), which leads to killing nobody.
In short, if the one standing near Ian is a person, his only choice is to watch the train go straight and kill the five people, since shoving a person onto the track is an intentional killing act.
Box 10.
1. abds([watching/0, shove/1]).
on_observe(decide) ← train_coming.
decide ← watching.
decide ← shove(X).
← watching, shove(X).
2. expect(watching).
train_straight ← watching.
end(die(5)) ← train_straight.
3. expect(shove(X)) ← stand_near(X).
on_track(X) ← shove(X).
stop_train(X) ← on_track(X), heavy(X).
kill(1) ← human(X), on_track(X).
kill(0) ← inanimate_object(X), on_track(X).
end(die(N)) ← kill(N).
4. human(john). heavy(john).
inanimate_object(rock). heavy(rock).
5. stand_near(john).
%stand_near(rock).
6. intentional_killing ← human(X), shove(X), has_intention(ian, kill, Pr), prolog(Pr > 0.95).
← intentional_killing.
7. Ai << Aj ← holds_given(end(die(N)), Ai),
holds_given(end(die(K)), Aj), N < K.
However, if what stands near him is a heavy inanimate object, he shoves it to stop the train, saving the five and killing no one.
Uncertainty about observed actions: Usually, moral reasoning is performed upon conceptual knowledge of the actions involved. But it often happens that one has to pass moral judgment on a situation without actually observing it, i.e. without full, certain information about the actions. The BN in Figure 6 is then no longer applicable. In this case, it is important to be able to reason, under uncertainty, about the actions that might have occurred, and thence provide judgment adhering to moral rules within some prescribed uncertainty level. Courts, for example, are required to proffer rulings beyond reasonable doubt; there is a vast body of research on proof beyond reasonable doubt within the legal community, e.g. (Newman, 2006). For illustration, consider this variant of the Footbridge case.
Example 13 (Moral Reasoning with Uncertain Actions): Suppose a jury in a court is faced with a case where the action of Ian shoving the man onto the track was not observed. Instead, the jury is only presented with the facts that the man died on the track and that Ian was seen on the bridge at the time. Is Ian guilty (beyond reasonable doubt), i.e. did he violate the double effect principle by intentionally shoving the man onto the track?
To answer this question, one should be able to reason about the possible explanations of the observations, given the available evidence. The following code shows a model for this example. Given the active goal judge (part 2), two abducibles are available, verdict(guilty_beyond_reasonable_doubt) and verdict(not_guilty). Depending on how probable each possible verdict is, verdict(guilty_beyond_reasonable_doubt) or verdict(not_guilty) is expected a priori (parts 3 and 9). The sort intentionality in part 4 represents the possibilities of an action being performed intentionally (int) or non-intentionally (not_int). The random attributes df_run and br_slip in parts 5 and 6 denote two kinds of evidence: Ian was definitely running on the bridge in a hurry (df_run) and the bridge was slippery at the time (br_slip), respectively. Each has a prior probability of 4/10. The probability with which shoving is performed intentionally is captured by the random attribute shoved (part 7), which is causally influenced by both pieces of evidence. Part 9 defines when the verdicts (guilty_beyond_reasonable_doubt and not_guilty) are considered highly probable, using the meta-probabilistic predicate pr_iShv/1 defined in part 8. It denotes the probability of intentional shoving, whose value is determined by the existence of evidence that Ian was running in a hurry past the man (signaled by the predicate evd_run/1) and that the bridge was slippery (signaled by the predicate evd_slip/1). See Box 11.
Using the above model, different judgments can be delivered by our system, subject to the available evidence and its truth-value. We exemplify some cases in the sequel. If both pieces of evidence are available, and it is known that Ian was running in a hurry on the slippery bridge, then he may have bumped into the man accidentally, shoving him unintentionally onto the track. This case is captured by the first pr_iShv rule (part 8): the probability of intentional shoving is 0.05. Thus, the atom highly_probable(not_guilty) holds (part 9), and verdict(not_guilty) is the preferred final abductive solution (part 3). The same abductive solution is obtained if it is observed that the bridge was slippery but whether Ian was running in a hurry was not observable; the probability of intentional shoving, captured by pr_iShv, is then 0.29.
On the other hand, if the evidence shows that Ian was not running in a hurry and that the bridge was not slippery, then it does not support the explanation that the man was shoved unintentionally, e.g. by accidental bumping; the action of shoving is more likely to have been performed
intentionally. Using the model, a probability of 0.97 is returned and, being greater than 0.95, verdict(guilty_beyond_reasonable_doubt) becomes the sole abductive solution. In yet another case, if it is only known that the bridge was not slippery and no other evidence is available, then the probability of intentional shoving becomes 0.79 and, by parts 3 and 9, no abductive solution is preferred. This translates into the need for more evidence, as the available evidence is not enough to issue a judgment.
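The conditional probabilities used in this discussion can be cross-checked outside P-log. The following minimal Python sketch (illustrative only; the function name and structure are ours) marginalizes the CPT of part 7 of Box 11 over whichever piece of evidence is unobserved, using the 4/10 priors of parts 5 and 6. With these exact CPT values, the last case comes out at approximately 0.80, close to the 0.79 reported above (the small difference is presumably due to rounding in the reported figure).

# Illustrative cross-check of the evidence model in Box 11 (independent of P-log).
P_TRUE = 0.4  # prior P(df_run = t) = P(br_slip = t) = 4/10
P_INT = {     # P(shoved = int | df_run, br_slip), from part 7 of Box 11
    (True, True): 0.05, (True, False): 0.55,
    (False, True): 0.45, (False, False): 0.97,
}

def p_intentional_shove(run=None, slip=None):
    """P(shoved = int); unobserved evidence is marginalized out with its prior."""
    total = 0.0
    for r in ([run] if run is not None else [True, False]):
        p_r = 1.0 if run is not None else (P_TRUE if r else 1.0 - P_TRUE)
        for s in ([slip] if slip is not None else [True, False]):
            p_s = 1.0 if slip is not None else (P_TRUE if s else 1.0 - P_TRUE)
            total += p_r * p_s * P_INT[(r, s)]
    return total

print(p_intentional_shove(run=True, slip=True))    # 0.05  -> not guilty
print(p_intentional_shove(slip=True))              # 0.29  -> not guilty
print(p_intentional_shove(run=False, slip=False))  # 0.97  -> guilty beyond reasonable doubt
print(p_intentional_shove(slip=False))             # ~0.80 -> neither threshold met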
Intention-Based Decision
Making in Game Theory
In strategic and economic situations as typically
modeled using the game theoretical framework
(Hofbauer & Sigmund, 1998; Osborne, 2004),
the achievement of a goal by an agent usually
does not depend uniquely on its own actions, but
also on the decisions and actions of others—es-
pecially when the possibility of communication
is limited (Heinze, 2003; Kraus, 1997; Pinker, et al., 2008; Tomasello, 2008).
Box 11.
1. abds([verdict/1]).
2. on_observe(judge).
judge ← verdict(guilty_beyond_reasonable_doubt).
judge ← verdict(not_guilty).
3. expect(verdict(X)) ← prolog(highly_probable(X)).
beginPlog.
4. bool = {t, f}. intentionality = {int, not_int}.
5. df_run : bool. random(rdr,df_run,full).
pa(rdr,df_run(t),d_(4, 10)).
6. br_slip : bool. random(rsb,br_slip,full).
pa(rsb,br_slip(t),d_(4, 10)).
7. shoved : intentionality. random(rs, shoved, full).
pa(rs,shoved(int),d_(97,100)) :- df_run(f),br_slip(f).
pa(rs,shoved(int),d_(45,100)) :- df_run(f),br_slip(t).
pa(rs,shoved(int),d_(55,100)) :- df_run(t),br_slip(f).
pa(rs,shoved(int),d_(5,100)) :- df_run(t),br_slip(t).
:- dynamic evd_run/1, evd_slip/1.
8. pr_iShv(Pr) :- evd_run(X), evd_slip(Y), !,
pr(shoved(int) ‘|’ obs(df_run(X)) & obs(br_slip(Y)), Pr).
pr_iShv(Pr) :- evd_run(X), !,
pr(shoved(int) ‘|’ obs(df_run(X)), Pr).
pr_iShv(Pr) :- evd_slip(Y), !,
pr(shoved(int) ‘|’ obs(br_slip(Y)), Pr).
pr_iShv(Pr) :- pr(shoved(int), Pr).
9. highly_probable(guilty_beyond_reasonable_doubt) :- pr_iShv(PrG), PrG > 0.95.
highly_probable(not_guilty) :- pr_iShv(PrG), PrG < 0.6.
endPlog.
The knowledge of others' intentions in such situations could enable a recognizing agent to plan in advance, whether to secure a successful cooperation or to deal with potentially hostile behaviors, and thus to take the best advantage of that knowledge (Bratman, 1987; Cohen & Levesque, 1990; Han, 2012; Han, Pereira, & Santos, 2011a; Han, Pereira, et al., 2012a; O. Roy, 2009b; van Hees & Roy, 2008). Additionally, in more realistic settings where deceit may offer additional profits, agents often attempt to hide their real intentions and make others believe in faked ones (Han, 2012; Han, Pereira, et al., 2012b; Robson, 1990; Tomasello, 2008; Trivers, 2011). Undoubtedly, in all such situations the capability of recognizing the intentions of others, and of taking them into account when making decisions, is crucial, providing those who possess it with significant net benefits or evolutionary advantages.
Indeed, the capacity for intention recognition and intention-based decision making can be found abundantly in many kinds of human interactions and communications, widely documented for instance in (Cheney & Seyfarth, 2007; Meltzoff, 2007; Tomasello, 1999, 2008; Woodward, et al., 2009). In addition, there is a large body of literature in experimental economics showing the importance of intention-based decision making in diverse kinds of strategic games, for instance the Prisoner's Dilemma (Frank, et al., 1993), the Moonlighting game (Falk, et al., 2008; Radke, et al., 2012), and the Ultimatum game (Radke, et al., 2012). Moreover, computational models show that taking into account the ongoing strategic intentions of others is crucial for agents' success in the course of different strategic games (Han, 2012; Han, et al., 2011a, 2011b; Han, Pereira, et al., 2012a, 2012b; Janssen, 2008).
Let us consider some examples of intention-based decision making in the context of the Prisoner's Dilemma (PD), where in each interaction a player needs to choose a move, either to cooperate ('c') or to defect ('d'). In a one-shot PD interaction, it is always better to defect, but cooperation might be favorable if the PD is repeated (the iterated PD), that is, if there is a good chance that the players will play the same PD with each other again. Several successful strategies have been proposed in the context of the iterated PD (see a survey in Sigmund, 2010), most famous amongst them being tit-for-tat (tft) and win-stay-lose-shift (wsls).
The following two strategies (each denoted by IR), which operate upon intention-based decision making, have been shown to outperform those famous strategies of the iterated PD (Han, et al., 2011a, 2011b; Han, Pereira, et al., 2012a). In the sequel we show how to model them within our framework.
Example 14 (Intention-Based Decision Making
Rule in (Han, et al., 2011a; Han, Pereira,
et al., 2012b; Janssen, 2008)): Prefer to co-
operate if the co-player intends to cooperate,
and prefer to defect otherwise. See Box 12.
At the start of a new interaction, an IR player
needs to choose a move, either cooperate (c) or
defect (d) (parts 2-3). Both options are expected,
and there are no expectations to the contrary (part
4). There are two a priori preferences in part 5,
stating that an IR player prefers to cooperate if
the co-player’s recognized intention is to cooper-
ate, and prefers to defect otherwise. The built-in
predicate has_intention/2, in the body of the
preferences, triggers the intention recognition
Box 12.
1. abds([move/1]).
2. on_observe(decide) ← new_interaction.
3. decide ← move(c).
decide ← move(d).
← move(c), move(d).
4. expect(move(X)).
5. move(c) <| move(d) ← has_intention(co_player, c).
move(d) <| move(c) ← has_intention(co_player, d).
module to check whether the co-player is more likely to have the intention expressed in the second argument.
Example 15 (Intention-Based Decision Making Rule in (Han, et al., 2011b; Han, Pereira, et al., 2012a)): Defect if the co-player's recognized intention or rule of behavior is always-cooperate (allc) or always-defect (alld), and cooperate if it is tft. If it is wsls, cooperate if the last game state is one in which both players cooperated (denoted by R) or both defected (denoted by P), and defect if the last game state is one in which IR defected while the co-player cooperated (denoted by T), or vice versa (denoted by S). This rule of behavior was learnt using a dataset collected from prior interactions with those strategies (Han, et al., 2011b). See Box 13.
At the start of a new interaction, an IR player needs to choose a move, either cooperate (c) or defect (d) (parts 2-3). Both options are expected, and there are no expectations to the contrary (part 4). The a priori preferences in part 5 state which move IR prefers to choose, given the recognized intention of the co-player (allc, alld, tft, or wsls) and the current game state ('T', 'R', 'P', or 'S'). The built-in predicate has_intention/2 in the body of the preferences triggers the intention recognition module to check whether the co-player is most likely to follow a given intention (strategy), specified by the second argument.
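For concreteness, the decision rule of Example 15 can also be written down outside the EP formalism. The short Python sketch below (illustrative only; the function and variable names are ours, not the authors' code) maps a recognized co-player strategy and the last game state to the move prescribed above.

# Illustrative sketch of the intention-based decision rule of Example 15.
# Game states: 'R' both cooperated, 'P' both defected, 'T' IR defected while
# the co-player cooperated, 'S' the reverse.
def ir_move(recognized_strategy, last_state=None):
    """Return the IR player's move, 'c' (cooperate) or 'd' (defect)."""
    if recognized_strategy in ("allc", "alld"):
        return "d"
    if recognized_strategy == "tft":
        return "c"
    if recognized_strategy == "wsls":
        return "c" if last_state in ("R", "P") else "d"
    raise ValueError("unrecognized co-player strategy")

assert ir_move("alld") == "d"
assert ir_move("tft") == "c"
assert ir_move("wsls", "R") == "c"
assert ir_move("wsls", "T") == "d"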
In short, our framework is general and expres-
sive, suitable for intention-based decision making
in the context of game theory.
CONCLUSION AND FUTURE WORK
We have summarized our previous work on Evo-
lution Prospection (EP) (Pereira & Han, 2009a,
2009b) and have shown how to obtain its coher-
ent combination with the intention recognition
system, for achieving intention-based decision
making. The EP system has proven useful before
for the purpose of decision making (Han & Pereira,
2010a, 2010b, 2011b; Han, Saptawijaya, et al.,
2012; Pereira & Han, 2009a, 2009b, 2011b), and
has now been empowered to take into account the
intentions of other agents—an important aspect
that has not been well explored so far (O. Roy,
2009b; van Hees & Roy, 2008). The fact that both
systems are Logic Programming based enabled
their easy integration. We have described and
exemplified several ways in which an EP agent
can benefit from having an ability to recognize
intentions in other agents.
Box 13.
1. abds([move/1]).
2. on_observe(decide) ← new_interaction.
3. decide ← move(c).
decide ← move(d).
← move(c), move(d).
4. expect(move(X)).
5. move(d) <| move(c) ← has_intention(co_player, allc).
move(d) <| move(c) ← has_intention(co_player, alld).
move(c) <| move(d) ← has_intention(co_player, tft).
move(c) <| move(d) ← has_intention(co_player, wsls), game_state(S), (S = ‘R’; S = ‘P’).
move(d) <| move(c) ← has_intention(co_player, wsls), game_state(S), (S = ‘T’; S = ‘S’).
Notwithstanding, the approach of combining intention recognition with decision making that we have used here is not restricted to Logic Programming based systems. In general, any intention recognition system, and indeed any decision making system, can be considered. The ideas of combined integration described here can be adopted by other decision making systems to account for intentions.
We have addressed the need for intention-based
decision making in different application domains,
including Ambient Intelligence (Sadri, 2011a)
and Elder Care (Cesta & Pecora, 2004; Sadri,
2008), where decision making techniques as well
as intention recognition abilities are becoming of
increased importance (Geib, 2002; Pereira & Han,
2011a; Sadri, 2010). Furthermore, we have also described how important and ubiquitous intention-based decision making is in the moral reasoning and game theory application domains.
In future work, we consider applying our
combined system to other application domains,
including story understanding (Charniak & Gold-
man, 1990), human-computer and interface-agents
systems (Armentano & Amandi, 2007; Hong,
2001; Lesh, 1998), traffic monitoring (Pynadath
& Wellman, 1995), assistive living (Geib, 2002;
Haigh, et al., 2004; Pereira & Han, 2011a; P. Roy,
et al., 2007; Tahboub, 2006), military settings
(Heinze, 2003; Mao & Gratch, 2004), and moral
reasoning (Han, Saptawijaya, et al., 2012), where
intention recognition has proven useful and of great
practicality. Another area of future development
is to extend our system to enable collective or
group intention recognition (Sukthankar, 2007;
Sukthankar & Sycara, 2008) in a decision mak-
ing process. In this regard, we have made some
initial attempts in the Elder Care domain (Han &
Pereira, 2010a, 2010b).
ACKNOWLEDGMENT
We thank Ari Saptawijaya for his comments on an
earlier version of this chapter. TAH acknowledges
the support from FCT-Portugal (grant reference
SFRH/BD/62373/2009).
REFERENCES
Alferes, J. J., Pereira, L. M., & Swift, T. (2004).
Abduction in well-founded semantics and gen-
eralized stable models via tabled dual programs.
Theory and Practice of Logic Programming, 4(4),
383–428. doi:10.1017/S1471068403001960
Armentano, M. G., & Amandi, A. (2007). Plan
recognition for interface agents. Artificial Intel-
ligence Review, 28(2), 131–162. doi:10.1007/
s10462-009-9095-8
Armentano, M. G., & Amandi, A. (2009). Goal
recognition with variable-order Markov models.
In Proceedings of the 21st International Joint
Conference on Artificial Intelligence. IEEE.
Baral, C. (2003). Knowledge representation,
reasoning, and declarative problem solving.
Cambridge, UK: Cambridge University Press.
doi:10.1017/CBO9780511543357
Baral, C., Gelfond, M., & Rushton, N. (2009).
Probabilistic reasoning with answer sets. Theory
and Practice of Logic Programming, 9(1), 57–144.
doi:10.1017/S1471068408003645
Binmore, K. G. (2009). Rational decisions. Princ-
eton, NJ: Princeton University Press.
Blaylock, N., & Allen, J. (2003). Corpus-based,
statistical goal recognition. In Proceedings of the
18th International Joint Conference on Artificial
Intelligence (IJCAI 2003). IEEE.
Bratman, M. E. (1987). Intention, plans, and
practical reason. CSLI.
Bratman, M. E. (1999). Faces of intention: Se-
lected essays on intention and agency. Cambridge,
UK: Cambridge University Press. doi:10.1017/
CBO9780511625190
Burglar Alarm. (2012). Wikipedia. Retrieved from
http://en.wikipedia.org/wiki/Burglar_alarm
Burmeister, B., Arnold, M., Copaciu, F., &
Rimassa, G. (2008). BDI-agents for agile goal-
oriented business processes. In Proceedings of the
7th International Joint Conference on Autonomous
Agents and Multiagent Systems: Industrial Track.
IEEE.
Castro, L., Swift, T., & Warren, D. S. (2007).
XASP: Answer set programming with xsb and
smodels. Retrieved from http://xsb.sourceforge.
net/packages/xasp.pdf
Cesta, A., & Pecora, F. (2004). The robocare
project: Intelligent systems for elder care. Paper
presented at the AAAI Fall Symposium on Car-
ing Machines: AI in Elder Care. New York, NY.
Charniak, E., & Goldman, R. P. (1990). Plan
recognition in stories and in life. In Proceedings
of the Fifth Annual Conference on Uncertainty in
Artificial Intelligence. IEEE.
Cheney, D. L., & Seyfarth, R. M. (2007). Baboon
metaphysics: The evolution of a social mind. Chi-
cago, IL: University Of Chicago Press.
Cohen, P. R., & Levesque, H. J. (1990). Intention
is choice with commitment. Artificial Intel-
ligence, 42(2-3), 213–261. doi:10.1016/0004-
3702(90)90055-5
Cook, D., Augusto, J., & Jakkula, V. (2009). Ambi-
ent intelligence: Technologies, applications, and
opportunities. Pervasive and Mobile Computing,
5(4), 277–298. doi:10.1016/j.pmcj.2009.04.001
Falk, A., Fehr, E., & Fischbacher, U. (2008).
Testing theories of fairness---Intentions matter.
Games and Economic Behavior, 62(1), 287–303.
doi:10.1016/j.geb.2007.06.001
Frank, R. H., Gilovich, T., & Regan, D. T. (1993).
The evolution of one-shot cooperation: An experi-
ment. Ethology and Sociobiology, 14(4), 247–256.
doi:10.1016/0162-3095(93)90020-I
Friedewald, M., Costa, O. D., Punie, Y., Alahuhta,
P., & Heinonen, S. (2005). Perspectives of ambient
intelligence in the home environment. Telematics
Information, 22.
Friedewald, M., Vildjiounaite, E., Punie, Y., &
Wright, D. (2007). Privacy, identity and secu-
rity in ambient intelligence: A scenario analy-
sis. Telematics and Informatics, 24(1), 15–29.
doi:10.1016/j.tele.2005.12.005
Geib, C. W. (2002). Problems with intent rec-
ognition for elder care. In Proceedings of AAAI
Workshop Automation as Caregiver. AAAI.
Gelfond, M., & Lifschitz, V. (1993). Representing
actions and change by logic programs. Journal of
Logic Programming, 17(2-4), 301-323.
Giuliani, M. V., Scopelliti, M., & Fornara, F.
(2005). Elderly people at home: technological help
in everyday activities. Paper presented at the IEEE
International Workshop on In Robot and Human
Interactive Communication. New York, NY.
Haigh, K., Kiff, L., Myers, J., Guralnik, V., Geib,
C., Phelps, J., et al. (2004). The independent
lifestyle assistant (I.L.S.A.): AI lessons learned.
In Proceedings of Conference on Innovative Ap-
plications of Artificial Intelligence. IEEE.
Han, T. A. (2009). Evolution prospection with
intention recognition via computational logic.
Dresden, Germany: Technical University of
Dresden.
Han, T. A. (2012). Intention recognition, com-
mitments and their roles in the evolution of co-
operation. Lisbon, Portugal: Universidade Nova
de Lisboa.
Han, T. A., Carroline, D. P., & Damasio, C. V.
(2008). An implementation of extended p-log
using XASP. In Proceedings of the 24th Interna-
tional Conference on Logic Programming. IEEE.
Han, T. A., Carroline, D. P., & Damasio, C. V.
(2009). Tabling for p-log probabilistic query
evaluation. Paper presented at the New Trends
in Artificial Intelligence, Proceedings of 14th
Portuguese Conference on Artificial Intelligence
(EPIA 2009). Evora, Portugal.
Han, T. A., & Pereira, L. M. (2010a). Collec-
tive intention recognition and elder care. Paper
presented at the AAAI 2010 Fall Symposium on
Proactive Assistant Agents (PAA 2010). New
York, NY.
Han, T. A., & Pereira, L. M. (2010b). Proactive in-
tention recognition for home ambient intelligence.
In Proceedings of 5th Workshop on Artificial
Intelligence Techniques for Ambient Intelligence
(AITAmI’10), Ambient Intelligence and Smart
Environments. IEEE.
Han, T. A., & Pereira, L. M. (2011a). Context-
dependent incremental intention recognition
through Bayesian network model construction. In
Proceedings of the Eighth UAI Bayesian Modeling
Applications Workshop (UAI-AW 2011). UAI-AW.
Han, T. A., & Pereira, L. M. (2011b). Intention-
based decision making with evolution prospection.
In Proceedings of the 15th Portugese Conference
on Progress in Artificial Intelligence. IEEE.
Han, T. A., Pereira, L. M., & Santos, F. C. (2011a).
Intention recognition promotes the emergence of
cooperation. Adaptive Behavior, 19(3), 264–279.
Han, T. A., Pereira, L. M., & Santos, F. C. (2011b).
The role of intention recognition in the evolution
of cooperative behavior. In Proceedings of the
22nd International Joint Conference on Artificial
Intelligence (IJCAI’2011). IEEE.
Han, T. A., Pereira, L. M., & Santos, F. C. (2012a).
Corpus-based intention recognition in coop-
eration dilemmas. Artificial Life. doi:10.1162/
ARTL_a_00072
Han, T. A., Pereira, L. M., & Santos, F. C. (2012b,
June). Intention recognition, commitment and the
evolution of cooperation. Paper presented at
The 2012 IEEE World Congress on Computational
Intelligence (IEEE WCCI 2012), Congress on
Evolutionary Computation (IEEE CEC 2012).
Brisbane, Australia.
Han, T. A., Saptawijaya, A., & Pereira, L. M.
(2012). Moral reasoning under uncertainty. In
Proceedings of the 18th International Conference
on Logic for Programming, Artificial Intelligence
and Reasoning (LPAR-18). LPAR.
Hauser, M. D. (2007). Moral minds, how nature
designed our universal sense of right and wrong.
New York, NY: Little Brown.
Heinze, C. (2003). Modeling intention recognition
for intelligent agent systems.
Hofbauer, J., & Sigmund, K. (1998). Evolution-
ary games and population dynamics. Cambridge,
UK: Cambridge University Press. doi:10.1017/
CBO9781139173179
Hong, J. (2001). Goal recognition through goal
graph analysis. Journal of Artificial Intelligence Re-
search, 15, 1–30. doi:10.1023/A:1006673610113
Janssen, M. A. (2008). Evolution of coopera-
tion in a one-shot prisoner’s dilemma based on
recognition of trustworthy and untrustworthy
agents. Journal of Economic Behavior & Or-
ganization, 65(3-4), 458–471. doi:10.1016/j.
jebo.2006.02.004
Kakas, A. C., Kowalski, R. A., & Toni, F. (1993).
Abductive logic programming. Journal of Logic
and Computation, 2(6), 719–770. doi:10.1093/
logcom/2.6.719
Kaminka, G. A., Tambe, D. V. P. M., Pynadath,
D. V., & Tambe, M. (2002). Monitoring teams
by overhearing: A multi-agent plan-recognition
approach. Journal of Artificial Intelligence Re-
search, 17.
Kraus, S. (1997). Negotiation and cooperation
in multi-agent environments. Artificial Intel-
ligence, 94(1-2), 79–98. doi:10.1016/S0004-
3702(97)00025-8
Lesh, N. (1998). Scalable and adaptive goal rec-
ognition. Seattle, WA: University of Washington.
Malle, B. F., Moses, L. J., & Baldwin, D. A.
(2003). Intentions and intentionality: Foundations
of social cognition. Cambridge, MA: MIT Press.
Mao, W., & Gratch, J. (2004). A utility-based ap-
proach to intention recognition. Paper presented
at the AAMAS 2004 Workshop on Agent Track-
ing: Modeling Other Agents from Observations.
New York, NY.
Meltzoff, A. N. (2005). Imitation and other minds:
the "like me" hypothesis. In Hurley, S. A. C. (Ed.),
Perspectives on Imitation: From Neuroscience to
Social Science: Imitation, Human Development,
and Culture (pp. 55–77). Cambridge, MA: MIT
Press.
Meltzoff, A. N. (2007). The framework for
recognizing and becoming an intentional agent.
Acta Psychologica, 124(1), 26–43. doi:10.1016/j.
actpsy.2006.09.005
Mikhail, J. (2007). Universal moral grammar:
Theory, evidence, and the future. Trends in Cog-
nitive Sciences, 11(4), 143–152. doi:10.1016/j.
tics.2006.12.007
Newman, J. O. (2006). Quantifying the standard
of proof beyond a reasonable doubt: A comment
on three comments. Law Probability and Risk,
5(3-4), 267–269. doi:10.1093/lpr/mgm010
Niemela, I., & Simons, P. (1997). Probabilistic
reasoning with answer sets. Paper presented at
the LPNMR4. New York, NY.
Osborne, M. J. (2004). An introduction to game
theory. Oxford, UK: Oxford University Press.
Pearl, J. (1988). Probabilistic reasoning in intel-
ligent systems: Networks of plausible inference.
San Francisco, CA: Morgan Kaufmann.
Pearl, J. (2000). Causality: Models, reasoning,
and inference. Cambridge, UK: Cambridge Uni-
versity Press.
Pereira, L. M., Dell’Acqua, P., & Lopes, G. (2012).
Inspecting and preferring abductive models.
In Handbook on Reasoning-Based Intelligent
Systems. Singapore, Singapore: World Scientific
Publishers.
Pereira, L. M., & Han, T. A. (2009a). Evolution
prospection. In Proceedings of International
Symposium on Intelligent Decision Technologies
(KES-IDT 2009). KES-IDT.
Pereira, L. M., & Han, T. A. (2009b). Evolution
prospection in decision making. Intelligent Deci-
sion Technologies, 3(3), 157–171.
Pereira, L. M., & Han, T. A. (2009c). Intention
recognition via causal bayes networks plus plan
generation. Paper presented at the Progress in
Artificial Intelligence, Proceedings of 14th Por-
tuguese International Conference on Artificial
Intelligence (EPIA 2009). Evora, Portgual.
Pereira, L. M., & Han, T. A. (2011a). Elder care
via intention recognition and evolution prospec-
tion. In Proceedings of the 18th International
Conference on Applications of Declarative Pro-
gramming and Knowledge Management (INAP).
Evora, Portugal: Springer.
Pereira, L. M., & Han, T. A. (2011b). Intention
recognition with evolution prospection and causal
bayes networks. In Computational Intelligence
for Engineering Systems 3: Emergent Applica-
tions (pp. 1–33). Berlin, Germany: Springer.
doi:10.1007/978-94-007-0093-2_1
Pereira, L. M., & Lopes, G. (2009). Prospective
logic agents. International Journal of Reasoning-
Based Intelligent Systems, 1(3/4).
Pinker, S., Nowak, M. A., & Lee, J. J. (2008).
The logic of indirect speech. Proceedings of
the National Academy of Sciences of the United
States of America, 105(3), 833–838. doi:10.1073/
pnas.0707192105
Pynadath, D. V., & Wellman, M. P. (1995). Ac-
counting for context in plan recognition, with
application to traffic monitoring. In Proceedings
of Conference on Uncertainty in Artificial Intel-
ligence (UAI 1995). UAI.
Radke, S., Guroglu, B., & de Bruijn, E. R. A.
(2012). There’s something about a fair split:
Intentionality moderates context-based fairness
considerations in social decision-making. PLoS
ONE, 7(2). doi:10.1371/journal.pone.0031491
Rao, A. S., & Georgeff, M. P. (1991). Modeling
rational agents within a BDI-architecture. In Pro-
ceedings of the Second International Conference
of Principles of Knowledge Representation and
Reasoning. IEEE.
Rao, A. S., & Georgeff, M. P. (1995). BDI agents:
From theory to practice. In Proceeding of First
International Conference on Multiagent Systems.
IEEE.
Robson, A. (1990). Efficiency in evolutionary
games: Darwin, Nash, and the secret handshake.
Journal of Theoretical Biology, 144(3), 379–396.
doi:10.1016/S0022-5193(05)80082-7
Roy, O. (2009a). Intentions and interactive trans-
formations of decision problems. Synthese, 169(2),
335–349. doi:10.1007/s11229-009-9553-5
Roy, O. (2009b). Thinking before acting: Inten-
tions, logic, rational choice. Retrieved from http://
olivier.amonbofis.net/docs/Thesis_Olivier_Roy.
pdf
Roy, P., Bouchard, B., Bouzouane, A., & Giroux,
S. (2007). A hybrid plan recognition model for
Alzheimer’s patients: Interleaved-erroneous
dilemma. In Proceedings of IEEE/WIC/ACM
International Conference on Intelligent Agent
Technology. IEEE.
Russell, S. J., & Norvig, P. (2003). Artificial
intelligence: A modern approach. Upper Saddle
River, NJ: Pearson Education.
Sadri, F. (2008). Multi-agent ambient intelligence
for elderly care and assistance. In Proceedings of
International Electronic Conference on Computer
Science. IEEE.
Sadri, F. (2010). Logic-based approaches to
intention recognition. In Handbook of Research
on Ambient Intelligence: Trends and Perspectives
(pp. 375). Springer.
Sadri, F. (2011a). Ambient intelligence: A sur-
vey. ACM Computing Surveys, 43(4), 1–66.
doi:10.1145/1978802.1978815
Sadri, F. (2011b). Intention recognition with event
calculus graphs and weight of evidence. In Pro-
ceedings 4th International Workshop on Human
Aspects in Ambient Intelligence. IEEE.
Searle, J. R. (1995). The construction of social
reality. New York, NY: The Free Press.
Searle, J. R. (2010). Making the social world:
The structure of human civilization. Oxford, UK:
Oxford University Press.
Sigmund, K. (2010). The calculus of selfishness.
Princeton, NJ: Princeton University Press.
Singh, M. P. (1991). Intentions, commitments and
rationality. Paper presented at the 13th Annual
Conference of the Cognitive Science Society.
New York, NY.
Sukthankar, G. R. (2007). Activity recognition for
agent teams. Retrieved from http://www.cs.cmu.
edu/~gitars/gsukthankar-thesis.pdf
Sukthankar, G. R., & Sycara, K. (2008). Robust
and efficient plan recognition for dynamic multi-
agent teams. In Proceedings of International Con-
ference on Autonomous Agents and Multi-Agent
Systems. IEEE.
Swift, T. (1999). Tabling for non-monotonic pro-
gramming. Annals of Mathematics and Artificial
Intelligence, 25(3-4), 240.
Tahboub, K. A. (2006). Intelligent human-machine
interaction based on dynamic Bayesian networks
probabilistic intention recognition. Journal
of Intelligent & Robotic Systems, 45, 31–52.
doi:10.1007/s10846-005-9018-0
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.
Tomasello, M. (2008). Origins of human com-
munication. Cambridge, MA: MIT Press.
Trivers, R. (2011). The folly of fools: The logic
of deceit and self-deception in human life. New
York, NY: Basic Books.
Tu, P. H., Son, T. C., & Baral, C. (2007). Reason-
ing and planning with sensing actions, incom-
plete information, and static causal laws using
answer set programming. Theory and Practice of
Logic Programming, 7(4), 377–450. doi:10.1017/
S1471068406002948
Tu, P. H., Son, T. C., Gelfond, M., & Morales, A.
R. (2011). Approximation of action theories and
its application to conformant planning. Artifi-
cial Intelligence, 175(1), 79–119. doi:10.1016/j.
artint.2010.04.007
van Hees, M., & Roy, O. (2008). Intentions and
plans in decision and game theory. In Reasons
and Intentions (pp. 207–226). New York, NY:
Ashgate Publishers.
Woodward, A. L., Sommerville, J. A., Gerson,
S., Henderson, A. M. E., & Buresh, J. (2009).
The emergence of intention attribution in infancy.
Psychology of Learning and Motivation, 51,
187–222. doi:10.1016/S0079-7421(09)51006-7
Wooldridge, M. (2000). Reasoning about rational
agents. Cambridge, MA: MIT Press.
Wooldridge, M. (2002). Reasoning about rational
agents. Journal of Artificial Societies and Social
Simulation, 5(1).
XSB. (2009). The XSB system version 3.2 vol. 2:
Libraries, interfaces and packages. XSB.
Young, L., & Saxe, R. (2011). When ignorance
is no excuse: Different roles for intent across
moral domains. Cognition, 120(2), 202–214.
doi:10.1016/j.cognition.2011.04.005
KEY TERMS AND DEFINITIONS
Ambient Intelligence: This refers to electronic environments that are sensitive and responsive to the presence of people. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks, and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices.
Evolution Prospection: A decision-making system designed and implemented based on the idea that, when making a decision at the current state to solve some current goals, one usually takes into account longer-term goals and future events (an illustrative sketch in its Prolog-style language is given at the end of this section).
Intention-Based Decision Making: The decision-making process that takes into account the intentions of other agents in the environment. Technically, the intentions of others become part of the constructs of decision making, such as goals and preferences (see the sketch at the end of this section).
Intention Recognition: To infer an agent's intentions (called "individual intention recognition") or the intentions of a group of agents (called "collective intention recognition") through its/their observed actions and the effects of those actions on the environment.
Moral Reasoning: The process in which
an individual tries to determine the difference
between what is right and what is wrong in a
personal situation by using logic.
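To make the Evolution Prospection and Intention-Based Decision Making entries above more concrete, the following minimal sketch is written in a Prolog-style syntax modelled on the EP language of the system referenced in Endnote 2. The construct names (abds/1, on_observe/1, expect/1, expect_not/1, and the a priori preference operator <|) and the predicate has_intention/2 are reconstructed from the chapter's description and may not match the released system exactly; the fragment is intended to run only under that EP implementation, not under a plain Prolog engine.

% Minimal EP-style sketch (assumed construct names; requires the EP
% system from Endnote 2, not a plain Prolog engine).
abds([offer_tea/0, offer_coffee/0, do_nothing/0]).   % abducible choices

% Active goal triggered at the current state: decide how to assist.
on_observe(assist_elder).
assist_elder <- offer_tea.
assist_elder <- offer_coffee.
assist_elder <- do_nothing.

% Expectations delimit which choices are considered, and when.
expect(offer_tea).
expect(offer_coffee).
expect(do_nothing).
expect_not(offer_coffee) <- late_evening.    % rule out caffeine at night

% A recognized intention of the other agent enters the decision as an
% ordinary construct, here conditioning an a priori preference.
offer_tea <| do_nothing <- has_intention(elder, drink).

Read declaratively, the sketch prefers offering tea over doing nothing whenever the intention recognizer attributes a drinking intention to the elder, while the expectation rules prune contextually inappropriate options before preferences are applied.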
ENDNOTES
1 The implementation of the P-log system described in (Baral et al., 2009) can be found at: http://www.cs.ttu.edu/~wezhu/
2 The implementation of the Evolution
Prospection system can be downloaded at:
http://centria.di.fct.unl.pt/~lmp/software/
epa.zip
3 The implementation of the ABDUAL system can be downloaded at: http://centria.di.fct.unl.pt/~lmp/software/contrNeg.rar
4 In general, from the design point of view, one needs to provide an EP program for each intention because, depending on the context, a user might have, or be predicted to have, distinct intentions (a hypothetical dispatching sketch is given below).
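To complement Endnote 4, the sketch below shows one hypothetical way the output of intention recognition could select which per-intention EP program to activate. The predicates recognized_intention/3, most_likely_intention/2, and select_ep_program/2 are names introduced purely for illustration and are not part of the released systems; unlike the EP sketch above, this fragment is ordinary Prolog and can be loaded into a standard engine.

% Hypothetical glue between intention recognition and the per-intention
% EP programs of Endnote 4 (all predicate names are illustrative only).

% Sample output of the intention recognition component: candidate
% intentions of the observed agent, each with a confidence score.
recognized_intention(elder, drink, 0.8).
recognized_intention(elder, read_book, 0.2).

% Pick the highest-confidence intention above a fixed threshold.
most_likely_intention(Agent, Intention) :-
    recognized_intention(Agent, Intention, P),
    P >= 0.5,
    \+ (recognized_intention(Agent, _, Q), Q > P).

% Hand control to the EP program provided for that intention.
select_ep_program(Agent, ep_program_for(Intention)) :-
    most_likely_intention(Agent, Intention).

A query such as ?- select_ep_program(elder, Program). then binds Program to ep_program_for(drink), after which the EP program written for that intention (as in the sketch above) would be evaluated.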
Perception of the social world in terms of agents and their intentional relations is fundamental to human experience. In this chapter, we review recent investigations into the origins of this fundamental ability that trace its roots to the first year of life. These studies show that infants represent others' actions not as purely physical motions, but rather as actions directed at goals and objects of attention. Infants are able to recover intentional relations at varying levels of analysis, including concrete action goals, higher-order plans, acts of attention, and collaborative goals. There is mounting evidence that these early competencies are strongly influenced by infants' own experience as intentional agents. Action experience shapes infants' action perception.