Contextual Abduction and its Complexity Issues
Emmanuelle-Anna Dietz Saldanha¹, Steffen Hölldobler¹,², and Tobias Philipp¹*
1International Center for Computational Logic, TU Dresden, Germany
2North-Caucasus Federal University, Stavropol, Russian Federation
{dietz,sh}@iccl.tu-dresden.de, tobias.philipp@tu-dresden.de
Abstract. In everyday life, it seems that we prefer some explanations for an observation over others because of our contextual background knowledge. Reiter already tried to specify a mechanism within logic that allows us to avoid explicitly considering all exceptions in order to derive a conclusion w.r.t. the usual case. In a recent paper, a contextual reasoning approach has been presented, which takes this contextual background into account and allows us to specify contexts within the logic. This approach is embedded into the Weak Completion Semantics, a Logic Programming approach that aims at adequately modeling human reasoning tasks. As this approach extends the underlying three-valued Łukasiewicz logic, some formal properties of the Weak Completion Semantics do not hold anymore. In this paper, we investigate the effects of this extension and present some surprising results about the complexity issues of contextual abduction.
1 Introduction
Consider the following scenario, extended and discussed in [10]:
If the brakes are pressed, then the car slows down. If the brakes are not OK,
then the car does not slow down. If the car accelerates, then the car does not slow
down. If the road is slippery, then the car does not slow down. If the road is icy,
then the road is slippery. If the road is downhill, then the car accelerates. If the
car has snow chains on the wheels, then the road is not slippery for the car. If
the car has snow chains on the wheels and the brakes are pressed, then the car
does not accelerate when the road is downhill.
[11] proposed to introduce licenses for inferences when modeling conditionals in human reasoning. [10] suggested to make these conditionals exception-tolerant in logic programs, by modeling the first conditional in the scenario above as If the brakes are pressed and nothing abnormal is the case, then the car slows down. Accordingly, we apply this idea to all conditionals in the previous scenario:
If the brakes are pressed (press) and nothing abnormal is the case (¬ab1), then
the car slows down (slow down). If the brakes are not OK (¬brakes ok), then
something abnormal is the case w.r.t. ab1. If the car accelerates (accelerate),
then something abnormal is the case w.r.t. ab1. If the road is slippery (slippery),
* The authors are mentioned in alphabetical order.
then something abnormal is the case w.r.t. ab1. If the road is icy (icy road) and nothing abnormal is the case (¬ab2), then the road is slippery. If the road is downhill (downhill) and nothing abnormal is the case (¬ab3), then the car accelerates (accelerate). If the car has snow chains (snow chain), then something abnormal is the case w.r.t. ab2. If the car has snow chains (snow chain) and the brakes are pressed (press), then something abnormal is the case w.r.t. ab3.
According to [10], when reasoning in such a scenario, abnormalities should be ignored, unless there is some reason to assume them to be true. As already observed and questioned by Reiter [9], the issue is whether it is possible to specify a logic-based mechanism that allows us to avoid explicitly considering all exceptions in order to derive a conclusion w.r.t. the usual case.
In this paper, we aim at modeling this idea within a logic programming approach, the Weak Completion Semantics (WCS) [3], and with the help of contextual reasoning [2]. WCS originates from [11], which unfortunately had some technical mistakes. These were corrected in [4] by using the three-valued Łukasiewicz logic. Since then, WCS has been successfully applied to various human reasoning tasks, summarized in [3]. [1] shows the correspondence between WCS and the Well-founded Semantics [12] and that the Well-founded Semantics does not adequately model Byrne's suppression task.
As has been shown in [2], modeling the famous Tweety example [9] under the Weak Completion Semantics leads to undesired results, namely that all exception cases have to be stated explicitly as false. [2] proposes to extend the underlying three-valued Łukasiewicz semantics and presents a contextual abductive reasoning approach. The above scenario is similar to Reiter's goal when he discussed the Tweety example, in the sense that it describes exceptions which we do not want to consider explicitly.
Consider Pcar, representing the previously described scenario, including abnormalities:
slow down ← press ∧ ¬ab1.              slippery ← icy road ∧ ¬ab2.
ab1 ← slippery.                        accelerate ← downhill ∧ ¬ab3.
ab1 ← ¬brakes ok.                      ab2 ← snow chain.
ab1 ← accelerate.                      ab3 ← snow chain ∧ press.
Suppose that we observe the brakes are pressed, i.e. O1 = {press}: Under the WCS, we cannot derive from Pcar ∪ O1 that slow down is true, because we don't know whether ab1 is false, which in turn cannot be derived to be false, because we don't know whether the road is slippery, the brakes are OK or the car accelerates. We need to explicitly state that press and brakes ok are true whereas icy road, downhill and snow chain have to be assumed false such that we can derive that slow down is true. However, if there is no evidence to assume that ab1, ab2 and ab3 are true, we would like to assume the usual case, i.e. to avoid specifying explicitly that all abnormalities are not true.
Let us observe that the car does not slow down, i.e. O2 = {¬slow down}. Given Pcar, we can either explain this observation by assuming that the brakes are not pressed, E2 = {press ← ⊥}, that the road is icy, E3 = {icy road ← ⊤}, that the brakes are not OK, E4 = {brakes ok ← ⊥}, or that the road is downhill and the car has no snow chain, E5 = {downhill ← ⊤, snow chain ← ⊥}. We would like to express that the explanation that describes the usual case seems more likely: In this case E2 is the preferred explanation, as usually, when the car does not slow down, then the brakes are not pressed.
Only if there is some evidence that something abnormal is the case, i.e. if we observe that something else would suggest one of the other explanations, then some other explanation can be considered. For instance, if we observe additionally that the road is slippery, we would prefer E3 over the other explanations, or if we additionally observe that the road is downhill, we would prefer E5 over the other explanations. Let us check
whether the closed world assumption (CWA) w.r.t. undefined atoms helps. Consider the program Pcar ∪ {press ← ⊥}:¹ brakes ok is assumed to be false by CWA, which in turn makes ab1 true, and therefore leads us to conclude that slow down is false. However, in
the usual case we would like to derive the contrary, namely that slow down is true.
The two examples above show that neither the Weak Completion Semantics nor approaches that apply the closed world assumption w.r.t. undefined atoms can adequately model our intention. The contextual abductive reasoning approach presented in [2] proposes a way of modeling the usual case, i.e. ignoring abnormalities if there is no evidence to assume them to be true, and expressing a preference among explanations. This approach takes Pereira and Pinto's inspection points [8] in abductive logic programming as a starting point. In this paper we investigate several problems in terms of complexity theory, and contrast these results with properties from abductive reasoning without context.
2 Background
We assume that the reader is familiar with logic and logic programming. The general
notation and terminology is based on [6].
2.1 Contextual Logic Programs
Contextual logic programs are logic programs extended by the truth-functional operator ctxt, called context [2]. (Propositional) contextual logic program clauses are expressions of the forms A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) (called rules), A ← ⊤ (called facts), and A ← ⊥ (called assumptions), where A is an atom and the Li with 1 ≤ i ≤ m + p are literals. A is called head and L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) as well as ⊤ and ⊥, standing for true and false, respectively, are called body of the corresponding clauses. A contextual (logic) program is a set of contextual logic program clauses. atoms(P) denotes the set of all atoms occurring in P. A is defined in P iff P contains a rule or a fact with head A. A is undefined in P iff A is not defined in P. The set of all atoms that are undefined in P is denoted by undef(P). The definition of A in P is defined as def(A, P) = {A ← body | A ← body is a rule or a fact occurring in P}. ¬A is assumed in P iff P contains an assumption with head A and def(A, P) = ∅.
An exception clause is a clause of the form abj ← body, 1 ≤ j ≤ m, where m ∈ ℕ. Conceptually, we suggest to use the context connective within contextual programs as follows: the ctxt operator is applied to every literal in the body of an exception clause in P. We will omit the word 'contextual' when we refer to (logic) programs, if not stated otherwise.
¹ This notion of falsehood appears to be counterintuitive, but programs will be interpreted under (weak) completion semantics, where the implication sign is replaced by an equivalence sign.
A level mapping ℓ for a contextual program P is a function which assigns to each atom a natural number. It is extended to literals and expressions of the form ctxt(L) as follows: ℓ(¬A) = ℓ(A) and ℓ(ctxt(L)) = ℓ(L). A contextual program P is acyclic w.r.t. a level mapping iff for every A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) ∈ P we find that ℓ(A) > ℓ(Li) for all 1 ≤ i ≤ m + p. A contextual program P is acyclic iff it is acyclic w.r.t. some level mapping.
Consider the following transformation for a given program P: (1) For all A ← body1, A ← body2, ..., A ← bodyn ∈ P with the same head A, where n ≥ 1, replace these clauses by A ← body1 ∨ body2 ∨ ... ∨ bodyn. (2) Replace all occurrences of ← by ↔. The resulting set is the weak completion of P, denoted by wc P [4].
Example 1 Consider P = {s ← r, r ← ¬p ∧ q, q ← ⊥, s ← ⊤}. The first two clauses are rules, the third is an assumption and the fourth is a fact. s and r are defined, whereas p and q are not defined in P, i.e. undef(P) = {p, q}. P is acyclic, as it is acyclic w.r.t. the following level mapping: ℓ(s) = 3, ℓ(r) = 2 and ℓ(p) = ℓ(q) = 1. The weak completion of P is wc P = {s ↔ r ∨ ⊤, r ↔ ¬p ∧ q, q ↔ ⊥}.
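To make the two transformation steps concrete, the following Python sketch computes the weak completion of the program from Example 1. It is our own illustration, not code from the paper; the string encoding of bodies (with '~', '&', 'v' and the constants TRUE and FALSE) is an assumption of this sketch.

```python
# A minimal sketch of the weak completion transformation: group clauses by
# head, disjoin their bodies, and read '<-' as '<->'.
from collections import OrderedDict

def weak_completion(program):
    """program: list of (head, body) pairs; returns the weakly completed clauses."""
    grouped = OrderedDict()
    for head, body in program:
        grouped.setdefault(head, []).append(body)
    # Step (1): one clause per head with the disjunction of all its bodies;
    # step (2): the connective of the resulting clause is an equivalence.
    return [(head, " v ".join(bodies)) for head, bodies in grouped.items()]

# Example 1: P = {s <- r, r <- ~p & q, q <- FALSE, s <- TRUE}
P = [("s", "r"), ("r", "~p & q"), ("q", "FALSE"), ("s", "TRUE")]
for head, body in weak_completion(P):
    print(head, "<->", body)
# prints: s <-> r v TRUE,  r <-> ~p & q,  q <-> FALSE
```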
2.2 Three-Valued Łukasiewicz Logic Extended by the Context Connective
We consider the three-valued Łukasiewicz logic, for which the corresponding truth values are ⊤, ⊥ and U, which mean true, false and unknown, respectively. A three-valued interpretation I is a mapping from atoms(P) to the set of truth values {⊤, ⊥, U}, and is represented as a pair I = ⟨I⊤, I⊥⟩ of two disjoint sets of atoms, where I⊤ = {A | I(A) = ⊤} and I⊥ = {A | I(A) = ⊥}. Atoms which do not occur in I⊤ ∪ I⊥ are mapped to U. The truth value of a given formula under I is determined according to the truth tables in Table 1. A three-valued model M of P is a three-valued interpretation such that M(A ← body) = ⊤ for each A ← body ∈ P. Let I = ⟨I⊤, I⊥⟩ and J = ⟨J⊤, J⊥⟩ be two interpretations. I ⊆ J iff I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥. I is a minimal model of P iff for no other model J of P it holds that J ⊆ I. I is the least model of P iff it is the only minimal model of P. Example 2 shows the models of the program in Example 1.
Our suggestion to apply the ctxt operator to every literal in the body of an exception clause in P, together with Table 1, implements the idea from the introduction that abnormalities should be assumed to be false if there is no evidence to assume otherwise.
Example 2 I1 = ⟨{s}, {q, r}⟩, I2 = ⟨{s, p}, {q, r}⟩ and I3 = ⟨{s, q, r}, {p}⟩ are models of P. I1 is the least model of wc P and I3 is not a model of wc P.
2.3 Stenning and van Lambalgen Consequence Operator
We reason w.r.t. the Stenning and van Lambalgen consequence operator ΦP [11, 4]: Let I be an interpretation and P be a program. The application of Φ to I and P, denoted by ΦP(I), is the interpretation J = ⟨J⊤, J⊥⟩ with
J⊤ = {A | there is A ← body ∈ P such that I(body) = ⊤},
J⊥ = {A | there is A ← body ∈ P and for all A ← body ∈ P we find I(body) = ⊥}.
F   ¬F          ∧ | ⊤  U  ⊥          ∨ | ⊤  U  ⊥
⊤    ⊥          ⊤ | ⊤  U  ⊥          ⊤ | ⊤  ⊤  ⊤
⊥    ⊤          U | U  U  ⊥          U | ⊤  U  U
U    U          ⊥ | ⊥  ⊥  ⊥          ⊥ | ⊤  U  ⊥

← | ⊤  U  ⊥          ↔ | ⊤  U  ⊥          L   ctxt(L)
⊤ | ⊤  ⊤  ⊤          ⊤ | ⊤  U  ⊥          ⊤     ⊤
U | U  ⊤  ⊤          U | U  ⊤  U          ⊥     ⊥
⊥ | ⊥  U  ⊤          ⊥ | ⊥  U  ⊤          U     ⊥

Table 1. The truth tables for the connectives under the three-valued Łukasiewicz logic and for ctxt(L). L is a literal; ⊤, ⊥ and U denote true, false and unknown, respectively. In the binary tables, the row gives the truth value of the left argument and the column that of the right argument.
The least fixed point of ΦP is denoted by lfp ΦP, if it exists. Acyclic programs admit several nice properties: the ΦP operator is a contraction, has a least fixed point² that can be reached by iterating a finite number of times starting from any interpretation, and lfp ΦP is a model of P [2]. We define P |=wcs F iff P is acyclic and lfp ΦP |= F.
As has been shown in [4], for non-contextual programs the least fixed point of ΦP is identical to the least model of the weak completion of P, which always exists. As Example 3 shows, this does not hold for contextual programs: the weak completion of contextual programs might have more than one minimal model.
Example 3 Let P = {s ← ¬r, r ← ¬p ∧ q, q ← ctxt(¬p)}; then wc P = {s ↔ ¬r, r ↔ ¬p ∧ q, q ↔ ctxt(¬p)} and lfp ΦP = ⟨{s}, {q, r}⟩. ⟨{q, r}, {p, s}⟩ is also a minimal model of wc P.
However, a minimal model that is different from the least fixed point of ΦP is not supported, in the sense that if we iterate ΦP starting with this minimal model, then we will compute lfp ΦP. As lfp ΦP is unique and the only supported minimal model of wc P, we define P |=wcs F if and only if F holds in the least fixed point of ΦP.
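The following Python sketch illustrates how ΦP, extended by ctxt as in Table 1, can be iterated to the least fixed point. The encoding of programs as lists of (head, body) pairs and of truth values as strings is our own assumption for this illustration, not the authors' implementation; run on the program of Example 3, it reaches exactly the least fixed point ⟨{s}, {q, r}⟩ discussed above.

```python
# A sketch of the Phi_P operator from Section 2.3 extended by ctxt.
T, F, U = "T", "F", "U"
NEG = {T: F, F: T, U: U}

def body_value(interp, body):
    """body: list of (kind, atom, positive) with kind in {'lit', 'ctxt'}."""
    values = []
    for kind, atom, positive in body:
        v = interp.get(atom, U)                 # atoms not listed are unknown
        v = v if positive else NEG[v]           # Lukasiewicz negation
        if kind == "ctxt" and v == U:
            v = F                               # ctxt maps unknown to false (Table 1)
        values.append(v)
    if F in values:
        return F                                # conjunction with a false conjunct
    return U if U in values else T

def phi(program, interp):
    new = {}
    for head in {h for h, _ in program}:
        vals = [body_value(interp, b) for h, b in program if h == head]
        if T in vals:
            new[head] = T                       # some body is true
        elif all(v == F for v in vals):
            new[head] = F                       # all bodies are false
    return new

def lfp(program):
    interp = {}                                 # the empty interpretation
    while True:                                 # terminates for acyclic programs
        nxt = phi(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# Example 3: P = {s <- ~r, r <- ~p & q, q <- ctxt(~p)}
P3 = [("s", [("lit", "r", False)]),
      ("r", [("lit", "p", False), ("lit", "q", True)]),
      ("q", [("ctxt", "p", False)])]
print(lfp(P3))  # s is true, q and r are false, p stays unknown: <{s}, {q, r}>
```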
2.4 Complexity Classes
A decision problem is a problem where the answer is either yes or no. A natural correspondence to the decision problem is the word problem, which deals with the question Does word w belong to language L? Here, a word is a finite string over the alphabet Σ and a language is a possibly infinite set of words over Σ, where Σ* denotes the set of all words over Σ.
P is the class of decision problems that are solvable in polynomial time. NP is the class of decision problems where the yes answers can be verified in polynomial time. Every decision problem can be specified by a language and every class can be specified by the set of languages it contains: P is the set of languages for which membership can be decided in polynomial time. Let R be a binary relation on strings. R is balanced if (x, y) ∈ R implies |y| ≤ |x|^k for some k ≥ 1. Let L ⊆ Σ* be a language. L ∈ NP iff there is a polynomially decidable and polynomially balanced relation R such that L = {x | (x, y) ∈ R for some y} [7]. That is, NP is the set of languages for which membership can be verified in polynomial time, given a suitable witness y. Given that coNP = {Σ* \ L | L ∈ NP}, a language L is in the class DP iff there are two languages L1 ∈ NP and L2 ∈ coNP such that L = L1 ∩ L2. PSPACE is the set of languages for which membership can be decided in polynomial space. The relation between the four classes is P ⊆ NP ⊆ DP ⊆ PSPACE.
² Note that for acyclic programs, the least fixed point of ΦP is also the unique fixed point of ΦP.
A language L is polynomial-time reducible to a language L′, denoted by L ≤p L′, if there is a polynomial-time computable function f : Σ* → Σ* such that for every x ∈ Σ*, x ∈ L iff f(x) ∈ L′. Reductions are transitive, i.e. if L1 ≤p L2 and L2 ≤p L3, then L1 ≤p L3 for all languages L1, L2 and L3. Given that C is a complexity class, we say that a language L is C-hard if L′ ≤p L for all L′ ∈ C. L is C-complete if L is in C and L is C-hard.
3 Abduction in Contextual Logic Programs
A contextual abductive framework is a tuple ⟨P, A, |=wcs⟩, consisting of an acyclic contextual program P, a set of abducibles A ⊆ AP and the entailment relation |=wcs. The set of abducibles AP is defined as
AP = {A ← ⊤ | A is undefined in P or A is head of an exception clause in P}
   ∪ {A ← ⊥ | A is undefined in P and ¬A is not assumed³ in P}.
Let an observation O be a non-empty set of literals. Abductive reasoning can be characterized as the problem of finding an explanation E ⊆ A such that O can be inferred from P ∪ E by deductive reasoning. Often, explanations are restricted to be basic and to be consistent with P. An explanation E is basic if it cannot be explained by other facts or assumptions, i.e. E can only be explained by itself.⁴ It is easy to see that, given an acyclic logic program P and E ⊆ A, the resulting program P ∪ E is acyclic as well. Further, as the ΦP operator always yields a least fixed point for acyclic programs, P ∪ E is guaranteed to be consistent. We impose a further restriction on explanations such that explanations are not allowed to change the context of the observation. Formally, this is defined using the following relation:
Definition 4. The strongly depends on relation w.r.t. P is the smallest transitive relation with the following properties:
1. If A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) ∈ P, then A strongly depends on Li for all i ∈ {1, ..., m}.
2. If L strongly depends on L′, then ¬L strongly depends on L′.
3. If L strongly depends on L′, then L strongly depends on ¬L′.
Example 5 Given P = {p ← r, p ← ctxt(q)}, p strongly depends on r and ¬r, and ¬p strongly depends on r and ¬r. p does not strongly depend on q, nor on ctxt(q).
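Since Definition 4 describes the smallest transitive relation generated by three closure rules, it can be computed by a straightforward fixed-point iteration. The sketch below is our own illustration under the same toy encoding as before (an assumption of this sketch, not the paper's notation) and reproduces Example 5.

```python
# A sketch of the 'strongly depends on' relation of Definition 4: literals are
# (atom, positive) pairs, ctxt-wrapped body literals are excluded from the base
# cases, and the relation is closed under rules 1-3 and transitivity.
def strongly_depends(program):
    """program: list of (head, body); body items are (kind, atom, positive)."""
    pairs = {((head, True), (atom, positive))
             for head, body in program
             for kind, atom, positive in body if kind == "lit"}
    while True:
        closed = set(pairs)
        closed |= {((a, not s), r) for (a, s), r in pairs}            # rule 2
        closed |= {(l, (b, not s)) for l, (b, s) in pairs}            # rule 3
        closed |= {(l1, r2) for l1, r1 in pairs
                   for l2, r2 in pairs if r1 == l2}                   # transitivity
        if closed == pairs:
            return pairs
        pairs = closed

# Example 5: P = {p <- r, p <- ctxt(q)}
P5 = [("p", [("lit", "r", True)]), ("p", [("ctxt", "q", True)])]
dep = strongly_depends(P5)
print((("p", True), ("r", True)) in dep)   # True:  p strongly depends on r
print((("p", True), ("q", True)) in dep)   # False: p does not depend on q
```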
We formalize the abductive reasoning process as follows:
Definition 6. Given the contextual abductive framework ⟨P, A, |=wcs⟩, E is a contextual explanation of O given P iff E ⊆ A, P ∪ E |=wcs O, and for all A ← ⊤ ∈ E and A ← ⊥ ∈ E there exists an L ∈ O such that L strongly depends on A.
³ It is not the case that A ← ⊥ ∈ P and this is the only clause in P with head A.
⁴ If P = {p ← q} and O = {q}, then E = {q ← ⊤} is basic.
In the following, we abbreviate the contextual abductive framework by referring to the abductive problem AP = ⟨P, A, O⟩. E is an explanation for the abductive problem AP = ⟨P, A, O⟩ iff E is a contextual explanation of O given P.
Notice that P ∪ E is consistent since the resulting program is acyclic, and therefore a least fixed point of ΦP∪E exists. We demonstrate the formalism in Example 7.
Example 7 Let us consider again Pcar from the introduction and recall that, if we know that 'the brakes are pressed' is true, i.e. press ← ⊤, then under the Weak Completion Semantics we cannot derive from Pcar ∪ {press ← ⊤} that 'slow down' is true, because we don't know whether the road is slippery, the brakes are OK or the car accelerates. Given Pcar, the program P^ctxt_car is defined as follows:
slow down ← press ∧ ¬ab1.              slippery ← icy road ∧ ¬ab2.
ab1 ← ctxt(slippery).                  accelerate ← downhill ∧ ¬ab3.
ab1 ← ctxt(¬brakes ok).                ab2 ← ctxt(snow chain).
ab1 ← ctxt(accelerate).                ab3 ← ctxt(snow chain) ∧ ctxt(press).
By iterating Φ w.r.t. P^ctxt_car until the least fixed point is reached, we obtain ⟨∅, {ab1, ab2, ab3}⟩. All abnormality predicates are false, as nothing is known about 'slippery', 'brakes ok', 'accelerate' and 'snow chain'. According to Table 1, 'ctxt(slippery)', 'ctxt(¬brakes ok)', 'ctxt(accelerate)' and 'ctxt(snow chain)' are evaluated to false under ⟨∅, ∅⟩, which in turn makes 'ab1', 'ab2' and 'ab3' false. Assume that we observe O1 = {press}. A contextual explanation E1 for O1 has to be a subset of the set of abducibles A. A consists of the following facts and assumptions:
press ← ⊤.        brakes ok ← ⊤.      icy road ← ⊤.      ab1 ← ⊤.
press ← ⊥.        brakes ok ← ⊥.      icy road ← ⊥.      ab2 ← ⊤.
downhill ← ⊤.     snow chain ← ⊤.     ab3 ← ⊤.
downhill ← ⊥.     snow chain ← ⊥.
E1 = {press ← ⊤} is the only contextual explanation for O1. The least fixed point of the program together with the corresponding explanation is as follows:
lfp ΦP∪E1 = ⟨{slow down, press}, {ab1, ab2, ab3}⟩.
Assume that we observe that the car does not slow down, i.e. O2 = {¬slow down}. The only contextual explanation for O2 is E2 = {press ← ⊥}. lfp ΦP∪E2 is as follows:
⟨∅, {slow down, press, ab1, ab2, ab3}⟩,
and indeed this model entails '¬slow down'. Note that neither E3 = {icy road ← ⊤} nor E4 = {brakes ok ← ⊥} can be contextual explanations for O2, because the additional condition for contextual explanations, that 'for all A ← ⊤ ∈ E and for all A ← ⊥ ∈ E there exists an L ∈ O such that L strongly depends on A', does not hold: '¬slow down' strongly depends on 'press', but it does not strongly depend on 'brakes ok', nor does it strongly depend on 'icy road'. Assume that we additionally observe that the road is slippery:
O3 = O2 ∪ {slippery}.
As 'slippery' strongly depends on 'icy road', E3 is a contextual explanation for O3.
lfp ΦP^ctxt_car∪E3 = ⟨{icy road, slippery, ab1}, {slow down, ab2, ab3}⟩ entails both '¬slow down' and 'slippery'. E3 is the only contextual explanation for O3.
Example 8 Let us extend the scenario from the introduction as follows:
If the engine shaft rotates (rotate E) and nothing is abnormal (¬ab4), then the wheels rotate (rotate W). If the clutch is not pressed (¬press clutch), then something abnormal is the case w.r.t. ab4. If the wheels rotate (rotate W) and nothing is abnormal (¬ab4), then the engine shaft rotates. If the wheels rotate, then something abnormal is the case w.r.t. ab1. If the engine shaft rotates, then something abnormal is the case w.r.t. ab1.
P^ctxt_car is extended by the following clauses:
rotate W ← rotate E ∧ ¬ab4.          ab4 ← ctxt(¬press clutch).
rotate E ← rotate W ∧ ¬ab4.          ab1 ← ctxt(rotate W).
                                     ab1 ← ctxt(rotate E).
Even though there is a cycle in this program, by iterating Φ w.r.t. this program, the following least fixed point is reached: ⟨∅, {ab1, ab2, ab3, ab4}⟩.
Example 8 shows that some programs with cycles can still reach a least fixed point. We conjecture that acyclicity only needs to be required w.r.t. the literals occurring within ctxt.
4 Complexity of Consistency of Contextual Abductive Problems
A contextual abductive problem AP = ⟨P, A, O⟩ is consistent if there is an explanation for AP. We will now investigate the complexity of deciding consistency. First, we show that computing the least fixed point of ΦP for acyclic contextual programs can be done in polynomial time. From this, we can easily show that consistency is in NP. Hardness follows analogously to [5].
To show that ΦP can be computed in polynomial time, observe that several nice properties of ΦP do not hold if we consider contextual programs. For instance, for logic programs that do not contain the context connective ctxt, the least fixed point of ΦP is monotonically increasing if we add facts and assumptions whose head is undefined. As the following example demonstrates, this does not hold for contextual programs.
Example 9 Consider P = {p ← ctxt(r)}, where lfp ΦP = ⟨∅, {p}⟩. However, ⟨∅, {p}⟩ ⊈ ⟨{r, p}, ∅⟩ = lfp ΦP∪{r←⊤}.
ΦP is non-monotonic even for acyclic programs, as the following example demonstrates:
Example 10 Given P = {p ← ctxt(q)}, I1 = ⟨∅, ∅⟩ ⊆ ⟨∅, {p, q}⟩ = I2, and F = {q ← ⊤}. Then ΦP(I1) = ⟨∅, {p}⟩ ⊆ ⟨{q}, {p}⟩ = ΦP∪F(I2). However, lfp ΦP(I1) = ⟨∅, {p}⟩ ⊈ ⟨{p, q}, ∅⟩ = lfp ΦP∪F(I2).
We can establish a weak form of monotonicity for a logic program P that is acyclic w.r.t. ℓ: if the atom A is true (false, resp.) after the nth application of ΦP starting from the empty interpretation and ℓ(A) ≤ n, then A remains true (false, resp.). We define ΦP ↑ 0 = ⟨∅, ∅⟩ and ΦP ↑ (n + 1) = ΦP(ΦP ↑ n) for all n ∈ ℕ.
Lemma 11. Let P be a logic program that is acyclic w.r.t. a level mapping ℓ. Let In = ⟨In⊤, In⊥⟩ = ΦP ↑ n for all n ∈ ℕ. If n < m, then:
In⊤ ∩ {A | ℓ(A) ≤ n} ⊆ Im⊤   and   In⊥ ∩ {A | ℓ(A) ≤ n} ⊆ Im⊥.
Proof. We show the claim by induction on n. The induction base case is straightforward, as I0⊤ = I0⊥ = ∅. For the induction step, assume that the claim holds for n:
In⊤ ∩ {A | ℓ(A) ≤ n} ⊆ Im⊤ for all m ∈ ℕ with n < m,   (1)
In⊥ ∩ {A | ℓ(A) ≤ n} ⊆ Im⊥ for all m ∈ ℕ with n < m.   (2)
To show: In+1⊤ ∩ {A | ℓ(A) ≤ n + 1} ⊆ Ik⊤ for all k ∈ ℕ with n + 1 < k.
1. We show this by contradiction, i.e. we assume that (i) A ∈ In+1⊤, (ii) ℓ(A) ≤ n + 1, and (iii) A ∉ Ik⊤.
2. By (i), there is A ← body ∈ P with the property that In(body) = ⊤.
3. As P is acyclic, ℓ(L) < ℓ(A) for all literals L appearing in body, and hence ℓ(L) ≤ n by (ii). For all such L the following holds:
(a) if L = B, then B ∈ In⊤ and, by (1) applied with m = k − 1 (note that n < k − 1), B ∈ Ik−1⊤;
(b) if L = ¬B, then B ∈ In⊥ and, by (2) applied with m = k − 1, B ∈ Ik−1⊥.
4. From 3(a) and 3(b) it follows that Ik−1(body) = ⊤. Accordingly, A ∈ Ik⊤, which contradicts (iii).
To show: In+1⊥ ∩ {A | ℓ(A) ≤ n + 1} ⊆ Ik⊥ for all k ∈ ℕ with n + 1 < k.
1. Again, we show this by contradiction, i.e. we assume that (i) A ∈ In+1⊥, (ii) ℓ(A) ≤ n + 1, and (iii) A ∉ Ik⊥.
2. By (i), there is A ← body ∈ P, and we find that In(body) = ⊥ for all A ← body ∈ P. As P is acyclic, ℓ(L) < ℓ(A) for all literals L appearing in body, and hence ℓ(L) ≤ n by (ii). For at least one L in each body the following holds:
(a) if L = B, then B ∈ In⊥ and, by (2) applied with m = k − 1, B ∈ Ik−1⊥;
(b) if L = ¬B, then B ∈ In⊤ and, by (1) applied with m = k − 1, B ∈ Ik−1⊤.
3. From 2(a) and 2(b) it follows that Ik−1(body) = ⊥ for all A ← body ∈ P. Accordingly, A ∈ Ik⊥, which contradicts (iii).
Proposition 12. Computing lfp ΦP can be done in polynomial time for acyclic logic programs P.
Proof. By [2, Corollary 4] the least fixed point can be obtained by finitely many applications of ΦP, i.e. there is an n such that ΦP ↑ n = ΦP ↑ m for all m > n. We show that n is polynomially bounded in the size of P as follows: the number of atoms appearing in P is polynomially bounded in the length of the string P, and for an acyclic program we can choose a level mapping whose maximum level m is bounded by the number of atoms, i.e. ℓ(A) ≤ m for all atoms A appearing in P. We now compute ΦP ↑ m, which can be done in polynomial time. By Lemma 11, we know that ΦP is monotonic after m steps. Afterwards, we can add only polynomially many atoms to I⊤ or I⊥ using ΦP. Hence, after polynomially many iterations, we have reached the least fixed point.
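A possible reading of this argument as code is sketched below; 'phi' and 'level' are hypothetical placeholders for the operator of Section 2.3 and for an assumed level mapping whose maximum level is polynomial in the size of P, so the sketch only illustrates the shape of the bound, not a concrete implementation.

```python
# A sketch of the iteration bound behind Proposition 12: for an acyclic program
# it suffices to iterate Phi a polynomial number of times starting from the
# empty interpretation.
def lfp_bounded(program, phi, level, atoms):
    # Iterate up to the maximum level, after which Lemma 11 guarantees that
    # decided atoms stay decided; then at most 2 * |atoms| further additions
    # to the true or false part are possible before the fixed point is reached.
    bound = max((level(a) for a in atoms), default=0) + 2 * len(atoms)
    interp = {}                                   # the empty interpretation
    for _ in range(bound):
        interp = phi(program, interp)
    return interp                                 # equals lfp Phi_P (Prop. 12)
```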
Theorem 13. Deciding whether a contextual abductive problem ⟨P, A, O⟩ has an explanation is NP-complete.
Proof. We first show that the problem belongs to NP, and afterwards we show that it is NP-hard.
To show NP-membership, observe that explanations are polynomially bounded by the abductive framework. It then only remains to show that checking whether a set E is an explanation can be done in polynomial time. This is done as follows:
1. E is a consistent subset of A: this can be checked in polynomial time [5].
2. P ∪ E |=wcs O: computing M = lfp ΦP∪E can be done in polynomial time (Proposition 12). Checking whether P ∪ E |=wcs L for all L ∈ O can then be done as follows: for each literal L ∈ O, if L = A, check whether A ∈ M⊤, and if L = ¬A, check whether A ∈ M⊥.
3. For all A ← ⊤ ∈ E and for all A ← ⊥ ∈ E, respectively, there exists an L ∈ O such that L strongly depends on A: the strongly depends on relation for any two literals can be checked in polynomially many steps, and thus this condition can be checked in polynomial time.
It remains to show that consistency is NP-hard. As consistency is already NP-hard without the context connective [5], it easily follows that consistency is NP-hard.
5 Complexity of Skeptical Reasoning with Abductive Explanations
We are not only interested in deciding whether an observation can be explained, but also in what can be inferred from the possible explanations. We distinguish between skeptical and credulous reasoning: given an abductive problem AP = ⟨P, A, O⟩, F follows skeptically from AP iff AP is consistent and for all explanations E for AP it holds that P ∪ E |=wcs F. The formula F follows credulously from AP iff there exists an explanation E of AP such that P ∪ E |=wcs F.
Proposition 14. Deciding whether P ∪ E |=wcs F does not hold for all explanations E given AP is NP-complete.
Proof. To show that the problem is in NP, we guess an E ⊆ A for AP and check whether E is an explanation for O and whether P ∪ E ⊭wcs F. Both checks can be done in polynomial time.
To show that the problem is NP-hard, we use the result from Theorem 13 by reducing consistency to the problem above, i.e. we reduce the question whether a contextual abductive problem ⟨P, A, O⟩ has an explanation to the question whether there exists an explanation E such that P ∪ E ⊭wcs ¬(A ← A) for all A ∈ atoms(P), given AP. The correctness of the construction follows because for every interpretation I it holds that I ⊭ ¬(A ← A).
Proposition 15. Let L ⊆ Σ* be a language. Then L is NP-complete iff its complement Σ* \ L is coNP-complete.
Proof. See [7, Proposition 10.1].
Proposition 16. Deciding whether P ∪ E |=wcs F holds for all explanations E given AP is coNP-complete.
Proof. The complementary problem is NP-complete by Proposition 14. By Proposition 15, the problem above is coNP-complete.
Theorem 17. The question whether F follows skeptically from an abductive problem ⟨P, A, O⟩ is DP-complete.
Proof. We first show that the problem belongs to DP, and afterwards we show that it is DP-hard. Let AP = ⟨P, A, O⟩ be an abductive problem and F a formula. F follows skeptically from AP iff (i) AP is consistent and (ii) P ∪ E |=wcs F for all explanations E for AP.
By Theorem 13, (i) is in NP and by Proposition 16, (ii) is in coNP. Hence, deciding whether F follows skeptically from AP is in DP.
Let P′ be a decision problem in DP. By the definition of the class DP, P′ consists of two decision problems P1 and P2, where P1 ∈ NP and P2 ∈ coNP. By Theorem 13, (i) is NP-complete, thus P1 is polynomially reducible to consistency. By Proposition 16, (ii) is coNP-complete, thus P2 is polynomially reducible to it. Hence, P′ can be polynomially reduced to the combined problem of (i) and (ii), and whether F follows skeptically from ⟨P, A, O⟩ is DP-hard.
6 Skeptical Reasoning with Minimal Abductive Explanations
Often, one is interested in reasoning w.r.t. minimal explanations, i.e. explanations E for which there is no other contextual explanation E′ ⊊ E for the observation O. If explanations are monotonic, i.e. if adding further facts and assumptions to an explanation again yields an explanation, then checking minimality can be done in polynomial time [5]: it is enough to check that E \ {A ← ⊤} is not an explanation for all A ← ⊤ ∈ E and that E \ {A ← ⊥} is not an explanation for all A ← ⊥ ∈ E. We cannot even guarantee that explanations are monotonic for logic programs without the context operator, as Example 18 shows. Yet, if the set of abducibles is restricted to the set of facts and assumptions w.r.t. the undefined atoms in P, i.e. A = {A ← ⊤ | A ∈ undef(P)} ∪ {A ← ⊥ | A ∈ undef(P)}, then explanations are indeed monotonic [5].
Example 18 Given P = {p ← q ∧ r, p ← ¬q, q ← ⊥} and observation O = {p}, E1 = {q ← ⊤, r ← ⊤} is an explanation for O. E1 ∩ {q ← ⊤} = {q ← ⊤} is not an explanation for O, but E2 = ∅ ⊆ {q ← ⊤} ⊆ E1 is again an explanation for O.
Yet, restricting the set of abducibles does not make explanations monotonic if we consider contextual programs, as Example 19 shows.
Example 19 Given P = {p ← q, p ← ctxt(r)} and observation O = {p}. Then E = {q ← ⊤} is a contextual explanation for O, but {q ← ⊤, r ← ⊤} ⊇ E is not anymore, because p does not strongly depend on r.
As Example 20 shows, given that E is a contextual explanation for O, we cannot simply iterate over all A ← ⊤ ∈ E (A ← ⊥ ∈ E, resp.) and check whether E \ {A ← ⊤} (E \ {A ← ⊥}, resp.) is a contextual explanation for O. If this were the case, then we could decide whether E is a minimal contextual explanation in polynomial time [5]. Instead, we might have to check all subsets of E, of which there are 2^|E| many, i.e. this might require exponential time.
Example 20 Given P = {p ← r ∧ ¬t, t ← ctxt(q), t ← ctxt(s), p ← r ∧ q ∧ s}. Assume that O = {p}: E1 = {r ← ⊤} and E2 = {r ← ⊤, q ← ⊤, s ← ⊤} are both contextual explanations for O. As E1 ⊊ E2 holds, E1 is a minimal contextual explanation, whereas E2 is not. However, note that none of E2 \ {r ← ⊤}, E2 \ {q ← ⊤} or E2 \ {s ← ⊤} is a contextual explanation for O.
Still, we can show an upper bound for the complexity of deciding minimality:
Theorem 21. The question whether a set E is a minimal explanation for an abductive problem ⟨P, A, O⟩ is in PSPACE.
Proof. Given that ⟨P, A, O⟩ is an abductive problem, we need to check all subsets of E in order to decide whether E is a minimal explanation for O. As we do not need to store a subset of E once it has been tested, deciding whether E is minimal can be done in polynomial space.
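The enumeration underlying this argument can be sketched as follows; the explanation test itself is abstracted into a hypothetical callback (an assumption of this sketch), since only its polynomial-time decidability (Theorem 13) matters here, and the toy values mirror Example 20.

```python
# A sketch of the naive minimality check behind Theorem 21: test every proper
# subset of E and verify that none of them is itself an explanation. The
# predicate 'is_explanation' is a placeholder for the polynomial-time check of
# Definition 6; here it is stubbed for illustration only.
from itertools import chain, combinations

def proper_subsets(explanation):
    items = sorted(explanation)
    return chain.from_iterable(
        combinations(items, k) for k in range(len(items)))   # excludes E itself

def is_minimal(explanation, is_explanation):
    # Exponentially many candidates, but each one is checked and then
    # discarded, so only polynomial space is used at any time (Theorem 21).
    return all(not is_explanation(set(s)) for s in proper_subsets(explanation))

# Toy illustration mirroring Example 20: both {r} and {r, q, s} explain O,
# while no single-element removal from {r, q, s} does, so one really has to
# descend to smaller subsets to detect non-minimality.
known = [{"r"}, {"r", "q", "s"}]
print(is_minimal({"r"}, lambda E: E in known))            # True
print(is_minimal({"r", "q", "s"}, lambda E: E in known))  # False
```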
7 Conclusion
This paper investigates contextual abductive reasoning, a new approach embedded within
the Weak Completion Semantics. We first show, with the help of an example, the limitations of the Weak Completion Semantics when we want to express the preference of the usual case over the exception cases. Furthermore, we cannot syntactically specify contextual knowledge in the logic programs as they have been presented so far.
After that, we introduce contextual programs together with contextual abduction, and we show how the previous limitations can be solved. This contextual reasoning approach allows us to indicate contextual knowledge and to express a preference among explanations, depending on the context.
However, as has already been shown in [2], some advantageous properties which hold for programs under the Weak Completion Semantics do not hold for contextual programs. For instance, the ΦP operator is not necessarily monotonic. Further, if a contextual program contains a cycle, it might not even have a fixed point.
In this paper, we first show that even though ΦP is not monotonic, the least fixed point can still be computed in polynomial time for acyclic contextual programs. Thereafter, we show that deciding whether an observation has a contextual explanation is NP-complete. Furthermore, by examining the complexity of skeptical reasoning, we show that deciding whether something follows skeptically from an observation is DP-complete. Unfortunately, explanations might not be monotonic in contextual abduction anymore, a property that holds in abduction for non-contextual programs [5]. We can, however, show that deciding whether a contextual explanation is minimal lies in PSPACE.
The approach discussed here brings up a number of interesting questions: At the end of Section 2.3, we have shown that the weak completion of contextual programs might have more than one minimal model. It seems that a possible characterization of the model computed by the ΦP operator is that it is the only minimal model in which all undefined atoms in P are mapped to unknown. Yet another aspect, which arises from Section 6, is whether skeptical reasoning with minimal explanations is PSPACE-hard. Further, we would like to investigate how a neural network perspective for reasoning with contextual programs could be developed.
Acknowledgements We are grateful for the constructive comments and ideas of the three reviewers. The Graduate Academy at TU Dresden supported Tobias Philipp.
References
1. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modeling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 24(1-2):61–85, 2014.
2. E.-A. Dietz Saldanha, S. Hölldobler, and L. M. Pereira. Contextual reasoning: Usually birds can abductively fly. In Proceedings of the 14th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2017), 2017. To appear.
3. S. Hölldobler. Weak completion semantics and its applications in human reasoning. In U. Furbach and C. Schon, editors, Proceedings of the 1st Workshop on Bridging the Gap between Human and Automated Reasoning (Bridging-2015), CEUR Workshop Proceedings, pages 2–16. CEUR-WS.org, 2015.
4. S. Hölldobler and C. D. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Proceedings of the 25th International Conference on Logic Programming (ICLP 2009), volume 5649 of LNCS, pages 464–478, Heidelberg, 2009. Springer.
5. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the AAAI 2011 Spring Symposium, AAAI Spring Symposium Series Technical Reports, pages 135–138, Cambridge, MA, 2011. AAAI Press.
6. J. W. Lloyd. Foundations of Logic Programming. Springer-Verlag New York, Inc., New York, NY, USA, 1984.
7. C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
8. L. M. Pereira and A. M. Pinto. Inspecting side-effects of abduction in logic programs. In M. Balduccini and T. C. Son, editors, Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning: Essays in Honour of Michael Gelfond, volume 6565 of LNAI, pages 148–163. Springer, 2011.
9. R. Reiter. A logic for default reasoning. Artificial Intelligence, 13:81–132, 1980.
10. K. Stenning, L. Martignon, and A. Varga. Adaptive reasoning: integrating fast and frugal heuristics with a logic of interpretation. Decision, in press.
11. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, Cambridge, MA, 2008.
12. A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):619–649, 1991.