On Indicative Conditionals
Emmanuelle Anna Dietz¹, Steffen Hölldobler¹, and Luís Moniz Pereira²⋆
¹ International Center for Computational Logic, TU Dresden, Germany,
{dietz,sh}@iccl.tu-dresden.de
² NOVA Laboratory for Computer Science and Informatics, Caparica, Portugal,
lmp@fct.unl.pt
⋆ The authors are mentioned in alphabetical order.
Abstract. In this paper we present a new approach to evaluating indicative conditionals with respect to some background information specified by a logic program. Because the weak completion of a logic program admits a least model under the three-valued Łukasiewicz semantics, and because this semantics has been successfully applied to other human reasoning tasks, conditionals are evaluated under these least Ł-models. If such a model maps the condition of a conditional to unknown, then abduction and revision are applied in order to satisfy the condition. Different strategies in applying abduction and revision may lead to different evaluations of a given conditional. Based on these findings we outline an experiment to better understand how humans handle those cases.
1 Indicative Conditionals
Conditionals are statements of the form if condition then consequence. In the literature the condition is also called the if part, if clause, or protasis, whereas the consequence is called the then part, then clause, or apodosis. Conditions as well as consequences are assumed to be finite sets (or conjunctions) of ground literals.
Indicative conditionals are conditionals whose condition may or may not be
true and, consequently, whose consequence also may or may not be true; however,
the consequence is asserted to be true if the condition is true. Examples of
indicative conditionals are the following:
If it is raining, then he is inside. (1)
If Kennedy is dead and Oswald did not shoot him, then someone else did. (2)
If rifleman A did not shoot, then the prisoner is alive. (3)
If the prisoner is alive, then the captain did not signal. (4)
If rifleman A shot, then rifleman B shot as well. (5)
If the captain gave no signal and rifleman A decides to shoot,
then the prisoner will die and rifleman B will not shoot. (6)
Conditionals may or may not be true in a given scenario. For example, if we
are told that a particular person is living in a prison cell, then most people are
expected to consider (1) to be true, whereas if we are told that he is living in
the forest, then most people are expected to consider (1) to be false. Likewise,
most people consider (2) to be true.
The question we shall be discussing in this paper is how to automate reasoning such that conditionals are evaluated by an automated deduction system the way humans evaluate them. This will be done in the context of logic programming (cf. [11]), abduction [9], Stenning and van Lambalgen's representation of conditionals and their semantic operator [19], and three-valued Łukasiewicz logic [12]. This combination has been put together in [6,7,5,8,3] and has been applied to the suppression task [2] and the selection task [1], as well as to modelling the belief-bias effect [15] and contextual abductive reasoning with side-effects [16].
The methodology of the approach presented in this paper differs significantly from the methods and techniques applied in well-known approaches to evaluating (mostly subjunctive) conditionals, such as Ramsey's belief-retention approach [17], Lewis's maximal world-similarity approach [10], Rescher's systematic reconstruction of the belief system using principles of saliency and prioritization [18], Ginsberg's possible-worlds approach [4], and Pereira and Aparício's improvement thereof by requiring relevancy [14]. Our approach is inspired by Pearl's do-calculus [13] in that it allows revisions to satisfy conditions whose truth value is unknown and which cannot be explained by abduction, but which are amenable to hypothetical intervention instead.
2 Preliminaries
We assume the reader to be familiar with logic and logic programming. A (logic) program is a finite set of (program) clauses of the form A ← B1 ∧ … ∧ Bn, where A is an atom and the Bi, 1 ≤ i ≤ n, are literals or one of the symbols ⊤ and ⊥, denoting truth and falsehood, respectively. A is called the head and B1 ∧ … ∧ Bn the body of the clause. We restrict terms to constants and variables only, i.e., we consider so-called datalog programs. Clauses of the form A ← ⊤ and A ← ⊥ are called positive and negative facts, respectively.
In this paper we assume for each program that the alphabet consists precisely of the symbols mentioned in the program. When writing sets of literals we omit curly brackets if the set has only one element.
Let P be a program. gP denotes the set of all ground instances of clauses occurring in P. A ground atom A is defined in gP iff gP contains a clause whose head is A; otherwise A is said to be undefined. Let S be a set of ground literals. def(S, P) = {A ← body ∈ gP | A ∈ S ∨ ¬A ∈ S} is called the definition of S.
Let P be a program and consider the following transformation:
1. For each defined atom A, replace all clauses of the form A ← body1, …, A ← bodym occurring in gP by A ← body1 ∨ … ∨ bodym.
2. If a ground atom A is undefined in gP, then add A ← ⊥ to the program.
3. Replace all occurrences of ← by ↔.
The ground program obtained by this transformation is called the completion of P, whereas the ground program obtained by applying only steps 1 and 3 is called the weak completion of P, or wc P.
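For concreteness, the transformation can be sketched in Python (the representation and all names are ours and purely illustrative; the paper itself prescribes no implementation):

```python
from collections import defaultdict

# Representation used in all sketches below (our choice): a ground clause is a
# pair (head, body); a body is a list of literals; a literal is a pair
# (atom, positive); the reserved atoms 'TRUE' and 'FALSE' stand for the
# symbols top and bottom.

def weak_completion(ground_clauses):
    """Step 1: group all bodies of each defined head A, to be read as the
    equivalence A <-> body_1 v ... v body_m (step 3). Step 2 is skipped:
    undefined atoms receive no definition, unlike in the full completion."""
    grouped = defaultdict(list)
    for head, body in ground_clauses:
        grouped[head].append(body)
    return dict(grouped)

# Example: P = {p <- q, p <- r, q <- TRUE}. Its weak completion is read as
# {p <-> q v r, q <-> TRUE}; the completion would also contain r <-> FALSE.
wc_P = weak_completion([('p', [('q', True)]),
                        ('p', [('r', True)]),
                        ('q', [('TRUE', True)])])
print(wc_P)   # {'p': [[('q', True)], [('r', True)]], 'q': [[('TRUE', True)]]}
```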
We consider the three-valued Łukasiewicz (or Ł-) semantics [12] and represent each interpretation I by a pair ⟨I⊤, I⊥⟩, where I⊤ contains all atoms which are mapped to true by I, I⊥ contains all atoms which are mapped to false by I, and I⊤ ∩ I⊥ = ∅. Atoms occurring neither in I⊤ nor in I⊥ are mapped to unknown. Let ⟨I⊤, I⊥⟩ and ⟨J⊤, J⊥⟩ be two interpretations. We define

⟨I⊤, I⊥⟩ ⊆ ⟨J⊤, J⊥⟩  iff  I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥.

Under Ł-semantics we find F ∧ ⊤ ≡ F ∨ ⊥ ≡ F for each formula F, where ≡ denotes logical equivalence. Hence, occurrences of the symbols ⊤ and ⊥ in the bodies of clauses can be restricted to those occurring in facts.
It has been shown in [6] that logic programs as well as their weak completions admit a least model under Ł-semantics. Moreover, the least Ł-model of the weak completion of P can be obtained as the least fixed point of the following semantic operator, which was introduced in [19]: Φ_P(⟨I⊤, I⊥⟩) = ⟨J⊤, J⊥⟩, where

J⊤ = {A | A ← body ∈ gP and body is true under ⟨I⊤, I⊥⟩},
J⊥ = {A | def(A, P) ≠ ∅ and body is false under ⟨I⊤, I⊥⟩ for all A ← body ∈ def(A, P)}.

We define P ⊨^lmwc_Ł F iff formula F holds in the least Ł-model of wc P.
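A minimal Python sketch of Φ_P and its iteration, in the dict-of-bodies representation returned by the weak_completion sketch above, follows the two defining clauses for J⊤ and J⊥ directly; it is our illustration, not part of the paper's formal development.

```python
def phi(program, interpretation):
    """One application of Phi_P to <I_true, I_false>; program maps each
    defined atom to the list of its bodies."""
    i_true, i_false = interpretation

    def value(literal):
        atom, positive = literal
        if atom == 'TRUE' or atom in i_true:
            v = 'true'
        elif atom == 'FALSE' or atom in i_false:
            v = 'false'
        else:
            v = 'unknown'
        if not positive:                      # Lukasiewicz negation
            v = {'true': 'false', 'false': 'true', 'unknown': 'unknown'}[v]
        return v

    def body_value(body):                     # Lukasiewicz conjunction
        values = [value(lit) for lit in body]
        if all(v == 'true' for v in values):
            return 'true'
        if any(v == 'false' for v in values):
            return 'false'
        return 'unknown'

    j_true = {a for a, bodies in program.items()
              if any(body_value(b) == 'true' for b in bodies)}
    j_false = {a for a, bodies in program.items()
               if bodies and all(body_value(b) == 'false' for b in bodies)}
    return j_true, j_false

def least_model(program):
    """Iterate Phi_P from the empty interpretation; for the programs
    considered in this paper the iteration reaches the least fixed point,
    i.e. the least L-model of wc P."""
    interpretation = (set(), set())
    while True:
        step = phi(program, interpretation)
        if step == interpretation:
            return interpretation
        interpretation = step
```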
As shown in [2], the Ł-semantics is related to the well-founded semantics as follows. Let P be a program which does not contain a positive loop, let P⁺ = P \ {A ← ⊥ | A ← ⊥ ∈ P}, let u be a new nullary relation symbol not occurring in P, let B range over ground atoms, and let

P* = P⁺ ∪ {B ← u | def(B, P) = ∅} ∪ {u ← ¬u}.

Then the least Ł-model of wc P and the well-founded model of P* coincide.
An abductive framework consists of a logic program P, a set of abducibles A_P = {A ← ⊤ | A is undefined in gP} ∪ {A ← ⊥ | A is undefined in gP}, a set of integrity constraints IC, i.e., expressions of the form ⊥ ← B1 ∧ … ∧ Bn, and the entailment relation ⊨^lmwc_Ł; it is denoted by ⟨P, A_P, IC, ⊨^lmwc_Ł⟩.
One should observe that each finite set of positive and negative ground facts has an Ł-model. It can be obtained by mapping all heads occurring in this set to true. Thus, in the following definition, explanations are always satisfiable.

An observation O is a set of ground literals; it is explainable in the abductive framework ⟨P, A_P, IC, ⊨^lmwc_Ł⟩ iff there exists an E ⊆ A_P, called explanation, such that P ∪ E is satisfiable, the least Ł-model of the weak completion of P ∪ E satisfies IC, and P ∪ E ⊨^lmwc_Ł L for each L ∈ O.
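The definition can be illustrated by a brute-force enumeration of candidate explanations, reusing least_model from the sketch above. Integrity constraints are ignored here (IC = ∅, as in all examples of Section 4), only subset-minimal explanations are kept, and all names are ours; this is an illustration of the definition, not an efficient abduction procedure.

```python
from itertools import combinations

def abducible_facts(program, atoms):
    """A_P: a positive and a negative fact for every undefined atom."""
    undefined = [a for a in atoms if a not in program]
    return [(a, 'TRUE') for a in undefined] + [(a, 'FALSE') for a in undefined]

def explain(program, atoms, observation, least_model):
    """All subset-minimal E <= A_P such that the least L-model of wc(P u E)
    maps every literal of the observation to true (IC taken to be empty)."""
    candidates = abducible_facts(program, atoms)
    explanations = []
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            heads = [atom for atom, _ in subset]
            if len(heads) != len(set(heads)):   # both A <- TRUE and A <- FALSE
                continue
            if any(set(e) <= set(subset) for e in explanations):
                continue                         # a smaller explanation exists
            extended = dict(program)
            for atom, fact in subset:
                extended[atom] = [[(fact, True)]]
            true_set, false_set = least_model(extended)
            if all((atom in true_set) if positive else (atom in false_set)
                   for atom, positive in observation):
                explanations.append(subset)
    return explanations
```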
3 A Reduction System for Indicative Conditionals
When parsing conditionals we assume that information concerning the mood of the conditionals has been extracted. In this paper we restrict our attention to the indicative mood. In the sequel let cond(T, A) be a conditional with condition T and consequence A, both of which are assumed to be finite sets of literals not containing a complementary pair of literals, i.e., a pair B and ¬B.

Conditionals are evaluated wrt background information specified as a logic program and a set of integrity constraints. More specifically, as the weak completion of each logic program always admits a least Ł-model, the conditionals are evaluated under these least Ł-models. In the remainder of this section let P be a program, IC a finite set of integrity constraints, and M_P the least Ł-model of wc P such that M_P satisfies IC. A state is either an expression of the form ic(P, IC, T, A) or one of true, false, unknown, or vacuous.
3.1 A Revision Operator
Let S be a finite set of ground literals not containing a complementary pair of literals, let B range over ground atoms, and define

rev(P, S) = (P \ def(S, P)) ∪ {B ← ⊤ | B ∈ S} ∪ {B ← ⊥ | ¬B ∈ S}.

The revision operator ensures that all literals occurring in S are mapped to true under the least Ł-model of wc rev(P, S).
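A sketch of rev(P, S) in the same dict-of-bodies representation (names are ours): the definitions of all atoms occurring in S are dropped and replaced by the corresponding positive or negative facts.

```python
def revise(program, literals):
    """rev(P, S): remove def(S, P) and add B <- TRUE for every positive and
    B <- FALSE for every negative literal of S."""
    affected = {atom for atom, _ in literals}
    revised = {head: bodies for head, bodies in program.items()
               if head not in affected}
    for atom, positive in literals:
        revised[atom] = [[('TRUE' if positive else 'FALSE', True)]]
    return revised
```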
3.2 The Abstract Reduction System
Let cond(T, A) be an indicative conditional which is to be evaluated in the context of a logic program P and integrity constraints IC such that the least Ł-model M_P of wc P satisfies IC. The initial state is ic(P, IC, T, A).

If the condition of the conditional is true, then the conditional holds if its consequence is true as well; otherwise it is either false or unknown.

ic(P, IC, T, A) →_it true       iff M_P(T) = true and M_P(A) = true
ic(P, IC, T, A) →_if false      iff M_P(T) = true and M_P(A) = false
ic(P, IC, T, A) →_iu unknown    iff M_P(T) = true and M_P(A) = unknown

If the condition of the conditional is false, then the conditional is true under Ł-semantics. However, we believe that humans might distinguish between a conditional whose condition and consequence are true and a conditional whose condition is false. Hence, for the time being we consider a conditional whose condition is false as vacuous.

ic(P, IC, T, A) →_iv vacuous    iff M_P(T) = false
If the condition of the conditional is unknown, then we could assign a truth value to the conditional in accordance with the Ł-semantics. However, we suggest that in this case abduction and revision shall be applied in order to satisfy the condition. We start with the abduction rule:

ic(P, IC, T, A) →_ia ic(P ∪ E, IC, T \ O, A)

iff M_P(T) = unknown, E explains O ⊆ T in the abductive framework ⟨P, A_P, IC, ⊨^lmwc_Ł⟩, and O ≠ ∅. Please note that T may contain literals which are mapped to true by M_P. These literals can be removed from T by the rule →_ia because the empty set explains them.

Now we turn to the revision rule:

ic(P, IC, T, A) →_ir ic(rev(P, S), IC, T \ S, A)

iff M_P(T) = unknown, S ⊆ T, S ≠ ∅, M_P(L) = unknown for each L ∈ S, and the least Ł-model of wc rev(P, S) satisfies IC.

Altogether we obtain the reduction system RIC, operating on states and consisting of the rules {→_it, →_if, →_iu, →_iv, →_ia, →_ir}.
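The following sketch ties the earlier routines together for one particular strategy: it applies →_it, →_if, →_iu, and →_iv directly, and when the condition is unknown it attempts a single →_ia step that explains the whole condition at once, using the first explanation found. Revision steps and the sequential orderings discussed in Section 4 are deliberately left out; the code reuses least_model and explain from the sketches above and is illustrative only, not the paper's definitive procedure.

```python
def conjunction_value(literals, model):
    """Three-valued value of a set (conjunction) of literals under a model."""
    m_true, m_false = model
    values = []
    for atom, positive in literals:
        v = 'true' if atom in m_true else 'false' if atom in m_false else 'unknown'
        if not positive:
            v = {'true': 'false', 'false': 'true', 'unknown': 'unknown'}[v]
        values.append(v)
    if all(v == 'true' for v in values):
        return 'true'
    if any(v == 'false' for v in values):
        return 'false'
    return 'unknown'

def evaluate_conditional(program, atoms, condition, consequence):
    """Evaluate cond(T, A) wrt P with IC = {}: rules ->it, ->if, ->iu, ->iv,
    plus one ->ia step explaining all of T when T is unknown."""
    model = least_model(program)
    t = conjunction_value(condition, model)
    if t == 'true':
        return conjunction_value(consequence, model)   # true / false / unknown
    if t == 'false':
        return 'vacuous'
    for explanation in explain(program, atoms, condition, least_model):
        extended = dict(program)
        for atom, fact in explanation:
            extended[atom] = [[(fact, True)]]
        return conjunction_value(consequence, least_model(extended))
    return 'unknown'        # T can neither be evaluated nor explained here
```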
4 Examples
4.1 Al in the Jailhouse
Rainy Day  Suppose we are told that Al is imprisoned in a jailhouse on a rainy day, i.e., he is living in a cell inside the jailhouse and it is raining:

P1 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤, raining ← ⊤}.

The least Ł-model of wc P1 is ⟨{imprisoned(al), inside(al), raining}, ∅⟩. In order to evaluate conditional (1) with respect to P1 we observe that this model maps raining and inside(al) to true. Hence,

ic(P1, ∅, raining, inside(al)) →_it true.
Sunny Day  Let us assume that Al is still imprisoned but that it is not raining:

P2 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤, raining ← ⊥}.

The least Ł-model of wc P2 is ⟨{imprisoned(al), inside(al)}, {raining}⟩. In order to evaluate conditional (1) wrt P2 we observe that this model maps raining to false. Hence,

ic(P2, ∅, raining, inside(al)) →_iv vacuous.
No Information about the Weather  Suppose we are told that Al is imprisoned in a jailhouse but we know nothing about the weather:

P3 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤}.

The least Ł-model of wc P3 is ⟨{imprisoned(al), inside(al)}, ∅⟩. In order to evaluate conditional (1) wrt P3 we observe that this model maps raining to unknown. Hence, we view raining as an observation which needs to be explained. The only possible explanation wrt ⟨P3, {raining ← ⊤, raining ← ⊥}, ∅, ⊨^lmwc_Ł⟩ is {raining ← ⊤}. Altogether we obtain

ic(P3, ∅, raining, inside(al)) →_ia ic(P1, ∅, ∅, inside(al)) →_it true.

Please note that P3 ∪ {raining ← ⊤} = P1 = rev(P3, raining). Hence, we could replace →_ia by →_ir in the previous reduction sequence.
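The three jailhouse programs can be replayed with the sketches of Sections 2 and 3 (ground instances only, with al as the sole constant; the flattened atom names are ours):

```python
P1 = {
    'inside(al)':     [[('imprisoned(al)', True)]],
    'imprisoned(al)': [[('TRUE', True)]],
    'raining':        [[('TRUE', True)]],
}
P2 = dict(P1, raining=[[('FALSE', True)]])
P3 = {k: v for k, v in P1.items() if k != 'raining'}
atoms = ['inside(al)', 'imprisoned(al)', 'raining']

print(evaluate_conditional(P1, atoms, [('raining', True)], [('inside(al)', True)]))
# 'true'     -- rule ->it
print(evaluate_conditional(P2, atoms, [('raining', True)], [('inside(al)', True)]))
# 'vacuous'  -- rule ->iv
print(evaluate_conditional(P3, atoms, [('raining', True)], [('inside(al)', True)]))
# 'true'     -- ->ia with explanation {raining <- TRUE}, i.e. P3 u E = P1
```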
4.2 The Shooting of Kennedy
President Kennedy was killed. There was a lengthy investigation into who actually shot the president, and in the end it was determined that Oswald did it:

P4 = {Kennedy_dead ← os_shot, Kennedy_dead ← se_shot, os_shot ← ⊤}.

The least Ł-model of wc P4 is ⟨{os_shot, Kennedy_dead}, ∅⟩. Evaluating the indicative conditional (2) under this model we find that its condition T = {Kennedy_dead, ¬os_shot} is mapped to false. Hence,

ic(P4, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_iv vacuous.
Now consider the case that we do not know that Oswald shot the president:

P5 = {Kennedy_dead ← os_shot, Kennedy_dead ← se_shot}.

As the least Ł-model of wc P5 we obtain ⟨∅, ∅⟩ and find that it maps T to unknown. We may try to consider T as an observation and explain it wrt the abductive framework ⟨P5, A_P5, ∅, ⊨^lmwc_Ł⟩, where A_P5 consists of the positive and negative facts for os_shot and se_shot. The only possible explanation is E = {os_shot ← ⊥, se_shot ← ⊤}. As the least Ł-model of wc(P5 ∪ E) we obtain ⟨{Kennedy_dead, se_shot}, {os_shot}⟩. As this model maps se_shot to true we find

ic(P5, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_ia ic(P5 ∪ E, ∅, ∅, se_shot) →_it true.
In this example we could also apply revision. Let

P6 = rev(P5, T) = {Kennedy_dead ← ⊤, os_shot ← ⊥}.

We obtain

ic(P5, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_ir ic(P6, ∅, ∅, se_shot) →_iu unknown

because the least Ł-model of wc P6 is ⟨{Kennedy_dead}, {os_shot}⟩ and maps se_shot to unknown. However, as conditional (2) can be evaluated by abduction and without revising the initial program, this derivation is not preferred.
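The abduction step for conditional (2) can be replayed with the earlier sketches (flattened, lower-case atom names are ours):

```python
P5 = {'kennedy_dead': [[('os_shot', True)], [('se_shot', True)]]}
atoms = ['kennedy_dead', 'os_shot', 'se_shot']
T = [('kennedy_dead', True), ('os_shot', False)]     # condition of (2)

print(explain(P5, atoms, T, least_model))
# [(('se_shot', 'TRUE'), ('os_shot', 'FALSE'))]  -- the explanation E of the text

print(evaluate_conditional(P5, atoms, T, [('se_shot', True)]))
# 'true'  -- conditional (2) reduces to true via ->ia followed by ->it
```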
4.3 The Firing Squad
This example is presented in [13]. If the court orders an execution, then the captain will give the signal, upon which riflemen A and B will shoot the prisoner. Consequently, the prisoner will be dead. We assume that the court's decision is unknown, that both riflemen are accurate, alert and law-abiding, and that the prisoner is unlikely to die from any other causes. Let

P7 = {sig ← execution, rmA ← sig, rmB ← sig, dead ← rmA, dead ← rmB, alive ← ¬dead}.

The least Ł-model of wc P7 is

⟨∅, ∅⟩.   (7)
Rifleman A Did Not Shoot  To evaluate conditional (3) wrt this model we first observe that its condition ¬rmA is mapped to unknown by (7). Considering the abductive framework

⟨P7, {execution ← ⊤, execution ← ⊥}, ∅, ⊨^lmwc_Ł⟩,   (8)

¬rmA can be explained by

{execution ← ⊥}.   (9)

Let P8 = P7 ∪ (9). The least Ł-model of wc P8 is

⟨{alive}, {execution, sig, rmA, rmB, dead}⟩.   (10)

Because alive is mapped to true under this model, we obtain

ic(P7, ∅, ¬rmA, alive) →_ia ic(P8, ∅, ∅, alive) →_it true.
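The firing squad program and the abduction of ¬rmA can be checked mechanically with the earlier sketches (flattened names, ours):

```python
P7 = {
    'sig':   [[('execution', True)]],
    'rmA':   [[('sig', True)]],
    'rmB':   [[('sig', True)]],
    'dead':  [[('rmA', True)], [('rmB', True)]],
    'alive': [[('dead', False)]],
}
atoms = ['execution', 'sig', 'rmA', 'rmB', 'dead', 'alive']

print(least_model(P7))                                # (set(), set()), cf. (7)
print(explain(P7, atoms, [('rmA', False)], least_model))
# [(('execution', 'FALSE'),)]                         -- explanation (9)
print(evaluate_conditional(P7, atoms, [('rmA', False)], [('alive', True)]))
# 'true'                                              -- conditional (3)
```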
The Prisoner Is Alive  Now consider conditional (4). Because (7) maps alive to unknown, we treat alive as an observation. Considering again the abductive framework (8), this observation can be explained by (9). Hence, we evaluate the consequence of (4) under (10) and find that the captain did not signal:

ic(P7, ∅, alive, ¬sig) →_ia ic(P8, ∅, ∅, ¬sig) →_it true.
Rifleman A Shot  Let us turn our attention to conditional (5). Because (7) maps rmA to unknown, we treat rmA as an observation. Considering the abductive framework (8), this observation can be explained by

{execution ← ⊤}.   (11)

Let P9 = P7 ∪ (11). The least Ł-model of wc P9 is

⟨{execution, sig, rmA, rmB, dead}, {alive}⟩.   (12)

Because rmB is mapped to true under this model, we obtain

ic(P7, ∅, rmA, rmB) →_ia ic(P9, ∅, ∅, rmB) →_it true.
The Captain Gave No Signal  Let us now consider conditional (6). Its condition T = {¬sig, rmA} is mapped to unknown by (7). We can only explain ¬sig by (9) and rmA by (11), but we cannot explain T because

wc((9) ∪ (11)) = {execution ↔ ⊤ ∨ ⊥} ≡ {execution ↔ ⊤},

i.e., combining both explanations maps execution, and hence sig, to true, which contradicts ¬sig. In order to evaluate this conditional we have to consider revisions.
1. A brute-force method is to revise the program wrt all conditions. Let

   P10 = rev(P7, {¬sig, rmA}) = (P7 \ def({¬sig, rmA}, P7)) ∪ {sig ← ⊥, rmA ← ⊤}.

   The least Ł-model of wc P10 is

   ⟨{rmA, dead}, {sig, rmB, alive}⟩.   (13)

   This model maps dead to true and rmB to false, and we obtain

   ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P10, ∅, ∅, {dead, ¬rmB}) →_it true.
2. As we prefer minimal revisions, let us consider

   P11 = rev(P7, rmA) = (P7 \ def(rmA, P7)) ∪ {rmA ← ⊤}.

   The least Ł-model of wc P11 is ⟨{dead, rmA}, {alive}⟩. Unfortunately, ¬sig is still mapped to unknown by this model, but it can be explained in the abductive framework ⟨P11, {execution ← ⊤, execution ← ⊥}, ∅, ⊨^lmwc_Ł⟩ by (9). Let P12 = P11 ∪ (9). Because the least Ł-model of wc P12 is

   ⟨{dead, rmA}, {alive, execution, sig, rmB}⟩   (14)

   we obtain

   ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P11, ∅, ¬sig, {dead, ¬rmB}) →_ia ic(P12, ∅, ∅, {dead, ¬rmB}) →_it true.

   The revision leading to P11 is minimal in the sense that only the definition of rmA is revised, and without this revision the condition of (6) cannot be explained. This is the only such minimal revision, as we will show in the sequel; the code sketch after this list replays this strategy.
3. An alternative minimal revision could be the revision of P7 wrt ¬sig:

   P13 = rev(P7, ¬sig) = (P7 \ def(¬sig, P7)) ∪ {sig ← ⊥}.

   The least Ł-model of wc P13 is

   ⟨{alive}, {sig, dead, rmA, rmB}⟩.   (15)

   Because this model maps rmA to false we obtain

   ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P13, ∅, rmA, {dead, ¬rmB}) →_iv vacuous.
4. So far the first step in evaluating the conditional was a revision step. Alternatively, we could start with an abduction step. ¬sig can be explained in the abductive framework (8) by (9), leading to the program P8 and the least Ł-model (10). Because this model maps rmA to false we obtain

   ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ia ic(P8, ∅, rmA, {dead, ¬rmB}) →_iv vacuous.
5. Let us now reverse the order in which the conditions are treated and start by explaining rmA. This has already been done before and we obtain P9 and the least Ł-model (12). Because this model maps ¬sig to false we obtain

   ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ia ic(P9, ∅, ¬sig, {dead, ¬rmB}) →_iv vacuous.
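Approach 2 (PRevRL in Table 1) can be traced with the earlier sketches, reusing P7 and atoms from the previous code block; again, the representation is ours and purely illustrative.

```python
# Revise the definition of rmA (rule ->ir) ...
P11 = revise(P7, [('rmA', True)])
print(least_model(P11))
# true: {'rmA', 'dead'}, false: {'alive'}     -- not-sig is still unknown here

# ... then explain the remaining condition literal not-sig (rule ->ia)
print(explain(P11, atoms, [('sig', False)], least_model))
# [(('execution', 'FALSE'),)]                 -- explanation (9) again

P12 = dict(P11, execution=[[('FALSE', True)]])
print(least_model(P12))
# true: {'rmA', 'dead'}, false: {'execution', 'sig', 'rmB', 'alive'} -- model (14):
# dead is true and rmB is false, so conditional (6) reduces to true
```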
In the last example we have discussed five different approaches to handling the case where the truth value of the condition of a conditional is unknown and cannot be explained: maximal (parallel) revision (MaxRev), partial (sequential) revision, and partial (sequential) explanation, where in the sequential approaches the literals in the condition of the conditional are treated in different orders, left-to-right and right-to-left, with sets considered to be ordered (PRevLR, PRevRL, PExLR, PExRL). The results are summarized in Table 1, where the conditional as well as the literals are evaluated wrt the final least Ł-model computed in the different approaches.
Which approach shall be preferred? Because rifleman A causally depends on the captain's signal but not vice versa, and given that in this example clauses express causes and effects come after causes, it would make sense to take the causal ordering as the preferred one for abducing the conditions. Hence, PExLR would be preferred. However, because rifleman A is an agent, the causes of his actions can be internal to him, namely his decisions. Hence, when autonomous agents are involved (or spontaneous phenomena like radioactivity), the ordering for abducing the conditions is independent of causal dependency.
5 Properties
In this section, let P be a program, ⟨I⊤, I⊥⟩ the least Ł-model of wc P, IC a set of integrity constraints, ⟨P, A_P, IC, ⊨^lmwc_Ł⟩ an abductive framework, and L a literal.
Proposition 1. If O can be explained by E ⊆ A_P and ⟨J⊤, J⊥⟩ is the least Ł-model of wc(P ∪ E), then ⟨I⊤, I⊥⟩ ⊆ ⟨J⊤, J⊥⟩.
                       MaxRev   PRevRL   PRevLR   PExLR    PExRL
final program          P10      P12      P13      P8       P9
final least Ł-model    (13)     (14)     (15)     (10)     (12)
sig                    false    false    false    false    true
rmA                    true     true     false    false    true
dead                   true     true     false    false    true
rmB                    false    false    false    false    true
alive                  false    false    true     true     false
execution              unknown  false    unknown  false    true
conditional (6)        true     true     vacuous  vacuous  vacuous

Table 1. Different approaches to evaluating conditional (6).
Proof. The least Ł-models ⟨I⊤, I⊥⟩ and ⟨J⊤, J⊥⟩ are the least fixed points of the semantic operators Φ_P and Φ_{P∪E}, respectively. Let ⟨I⊤_n, I⊥_n⟩ and ⟨J⊤_n, J⊥_n⟩ be the interpretations obtained after applying Φ_P and Φ_{P∪E} n times to ⟨∅, ∅⟩, respectively. We can show by induction on n that ⟨I⊤_n, I⊥_n⟩ ⊆ ⟨J⊤_n, J⊥_n⟩. The proposition follows immediately.
Proposition 1 guarantees that whenever →_ia is applied, previously checked conditions of a conditional need not be re-checked. The following Proposition 2 gives the same guarantee whenever →_ir is applied.
Proposition 2. If the least Ł-model of wc P maps L to unknown and ⟨J⊤, J⊥⟩ is the least Ł-model of wc rev(P, L), then ⟨I⊤, I⊥⟩ ⊂ ⟨J⊤, J⊥⟩.

Proof. By induction on the number of applications of Φ_P and Φ_{rev(P,L)}.
Proposition 3. RIC is terminating.
Proof. Each application of →_it, →_if, →_iu, or →_iv leads to an irreducible expression. Let cond(T, A) be the conditional to which RIC is applied. Whenever →_ir is applied, the definition of at least one literal L occurring in T is revised such that the least Ł-model of the weak completion of the revised program maps L to true. Because T does not contain a complementary pair of literals, this revised definition of L is never revised again. Hence, there cannot exist a rewriting sequence with infinitely many occurrences of →_ir. Likewise, there cannot exist a rewriting sequence with infinitely many occurrences of →_ia, because each application of →_ia to a state ic(P, IC, T, A) reduces the number of literals occurring in T.
Proposition 4. RIC is not confluent.
Proof. This follows immediately from the examples presented in Section 4.
6 Open Questions and the Proposal of an Experiment
Open Questions  The new approach gives rise to a number of questions. Which of the approaches is preferable? This may be a question of pragmatics imputable to the user. The default, because no pragmatic information has been added, is maximal revision for skepticism and minimal revision for credulity. Do humans evaluate multiple conditions sequentially or in parallel? If multiple conditions are evaluated sequentially, are they evaluated in some preferred order? Shall explanations be computed skeptically or credulously? How can the approach be extended to handle subjunctive conditionals?
The Proposal of an Experiment  Subjects are given the background information specified in the program P9. They are confronted with conditionals like (6) as well as variants with different consequences (e.g., execution instead of {dead, ¬rmB}) or conditionals where the order of the two conditions is reversed. We then ask the subjects to answer questions like: Does the conditional hold? or Did the court order an execution? Depending on the answers we may learn which approaches are preferred by humans.
Acknowledgements  We thank Bob Kowalski for valuable comments on an earlier draft of the paper.
References
1. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the abstract and the social case of the selection task. In Proceedings of the Eleventh International Symposium on Logical Formalizations of Commonsense Reasoning, 2013.
2. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505. Cognitive Science Society, 2012.
3. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modelling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 24:61–85, 2014.
4. M. L. Ginsberg. Counterfactuals. Artificial Intelligence, 30(1):35–79, 1986.
5. S. Hölldobler and C. D. P. Kencana Ramli. Contraction properties of a semantic operator for human reasoning. In Lei Li and K. K. Yen, editors, Proceedings of the Fifth International Conference on Information, pages 228–231. International Information Institute, 2009.
6. S. Hölldobler and C. D. P. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Logic Programming, volume 5649 of Lecture Notes in Computer Science, pages 464–478. Springer-Verlag Berlin Heidelberg, 2009.
7. S. Hölldobler and C. D. P. Kencana Ramli. Logics and networks for human reasoning. In C. Alippi, M. M. Polycarpou, C. G. Panayiotou, and G. Ellinas, editors, Artificial Neural Networks – ICANN, volume 5769 of Lecture Notes in Computer Science, pages 85–94. Springer-Verlag Berlin Heidelberg, 2009.
8. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Proceedings of the Tenth International Symposium on Logical Formalizations of Commonsense Reasoning, 2011. commonsensereasoning.org/2011/proceedings.html.
9. A. C. Kakas, R. A. Kowalski, and F. Toni. Abductive Logic Programming. Journal
of Logic and Computation, 2(6):719–770, 1993.
10. D. Lewis. Counterfactuals. Blackwell Publishers, Oxford, 1973.
11. J. W. Lloyd. Foundations of Logic Programming. Springer, Berlin, Heidelberg,
1987.
12. J. Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On three-valued logic. In L. Borkowski, editor, Jan Łukasiewicz: Selected Works, pages 87–88. North Holland, 1990.
13. J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press,
New York, USA, 2000.
14. L. M. Pereira and J. N. Aparício. Relevant counterfactuals. In Proceedings of the 4th Portuguese Conference on Artificial Intelligence (EPIA), volume 390 of Lecture Notes in Computer Science, pages 107–118. Springer, 1989.
15. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. An abductive reasoning approach to the belief-bias effect. In C. Baral, G. De Giacomo, and T. Eiter, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the 14th International Conference, pages 653–656, Cambridge, MA, 2014. AAAI Press.
16. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. Contextual abductive reasoning with side-effects. In I. Niemelä, editor, Theory and Practice of Logic Programming (TPLP), volume 14, pages 633–648, Cambridge, UK, 2014. Cambridge University Press.
17. F. Ramsey. The Foundations of Mathematics and Other Logical Essays. Harcourt,
Brace and Company, 1931.
18. N. Rescher. Conditionals. MIT Press, Cambridge, MA, 2007.
19. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science.
MIT Press, 2008.