On Indicative Conditionals
Emmanuelle Anna Dietz1, Steffen Hölldobler1, and Luís Moniz Pereira2*
1 International Center for Computational Logic, TU Dresden, Germany,
{dietz,sh}@iccl.tu-dresden.de
2 NOVA Laboratory for Computer Science and Informatics, Caparica, Portugal,
lmp@fct.unl.pt
Abstract In this paper we present a new approach to evaluate indicative conditionals with respect to some background information specified by a logic program. Because the weak completion of a logic program admits a least model under the three-valued Łukasiewicz semantics and this semantics has been successfully applied to other human reasoning tasks, conditionals are evaluated under these least Ł-models. If such a model maps the condition of a conditional to unknown, then abduction and revision are applied in order to satisfy the condition. Different strategies in applying abduction and revision might lead to different evaluations of a given conditional. Based on these findings we outline an experiment to better understand how humans handle those cases.
1 Indicative Conditionals
Conditionals are statements of the form if condition then consequence. In the literature the condition is also called if part, if clause, or protasis, whereas the consequence is called then part, then clause, or apodosis. Conditions as well as consequences are assumed to be finite sets (or conjunctions) of ground literals.
Indicative conditionals are conditionals whose condition may or may not be true and, consequently, whose consequence also may or may not be true; however, the consequence is asserted to be true if the condition is true. Examples of indicative conditionals are the following:
If it is raining, then he is inside. (1)
If Kennedy is dead and Oswald did not shoot him, then someone else did. (2)
If rifleman A did not shoot, then the prisoner is alive. (3)
If the prisoner is alive, then the captain did not signal. (4)
If rifleman A shot, then rifleman B shot as well. (5)
If the captain gave no signal and rifleman A decides to shoot,
then the prisoner will die and rifleman B will not shoot. (6)
Conditionals may or may not be true in a given scenario. For example, if we are told that a particular person is living in a prison cell, then most people are expected to consider (1) to be true, whereas if we are told that he is living in the forest, then most people are expected to consider (1) to be false. Likewise, most people consider (2) to be true.

* The authors are mentioned in alphabetical order.
The question which we shall be discussing in this paper is how to automate reasoning such that conditionals are evaluated by an automated deduction system like humans do. This will be done in the context of logic programming (cf. [11]), abduction [9], Stenning and van Lambalgen's representation of conditionals as well as their semantic operator [19], and the three-valued Łukasiewicz logic [12], which have been put together in [6,7,5,8,3] and applied to the suppression [2] and the selection task [1], as well as to model the belief-bias effect [15] and contextual abductive reasoning with side-effects [16].
The methodology of the approach presented in this paper differs significantly from the methods and techniques applied in well-known approaches to evaluate (mostly subjunctive) conditionals like Ramsey's belief-retention approach [17], Lewis's maximal world-similarity one [10], Rescher's systematic reconstruction of the belief system using principles of saliency and prioritization [18], Ginsberg's possible worlds approach [4], and Pereira and Aparício's improvements thereof by requiring relevancy [14]. Our approach is inspired by Pearl's do-calculus [13] in that it allows revisions to satisfy conditions whose truth value is unknown and which cannot be explained by abduction, but which are amenable to hypothetical intervention instead.
2 Preliminaries
We assume the reader to be familiar with logic and logic programming. A (logic) program is a finite set of (program) clauses of the form A ← B1 ∧ ... ∧ Bn, where A is an atom and the Bi, 1 ≤ i ≤ n, are literals or of the form ⊤ and ⊥, denoting truth and falsehood, respectively. A is called the head and B1 ∧ ... ∧ Bn is called the body of the clause. We restrict terms to be constants and variables only, i.e., we consider so-called data logic programs. Clauses of the form A ← ⊤ and A ← ⊥ are called positive and negative facts, respectively.
In this paper we assume for each program that the alphabet consists precisely of the symbols mentioned in the program. When writing sets of literals we will omit curly brackets if the set has only one element.
Let P be a program. gP denotes the set of all ground instances of clauses occurring in P. A ground atom A is defined in gP iff gP contains a clause whose head is A; otherwise A is said to be undefined. Let S be a set of ground literals. def(S, P) = {A ← body ∈ gP | A ∈ S ∨ ¬A ∈ S} is called the definition of S.
Let P be a program and consider the following transformation:
1. For each defined atom A, replace all clauses of the form A ← body1, ..., A ← bodym occurring in gP by A ← body1 ∨ ... ∨ bodym.
2. If a ground atom A is undefined in gP, then add A ← ⊥ to the program.
3. Replace all occurrences of ← by ↔.
The ground program obtained by this transformation is called the completion of P, whereas the ground program obtained by applying only steps 1. and 3. is called the weak completion of P, or wcP.
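As a concrete illustration, the transformation can be sketched in Python; the list-based encoding of ground programs below is our own assumption, not the paper's notation.

```python
# Sketch of steps 1.-3. of the transformation. A ground program is given
# as a list of (head, body) clauses (hypothetical encoding); grouping the
# bodies of each head yields A <-> body1 v ... v bodym. Step 3 (reading
# <- as <->) is implicit in the returned dictionary; if the full alphabet
# is passed, undefined atoms additionally receive the body bottom
# (step 2), i.e. we obtain the completion rather than the weak completion.

def weak_completion(clauses, alphabet=None):
    """clauses: list of (head, body) pairs.
    Returns {head: [body1, ..., bodym]}, read as head <-> body1 v ... v bodym."""
    grouped = {}
    for head, body in clauses:            # step 1: one definition per head
        grouped.setdefault(head, []).append(body)
    if alphabet is not None:              # step 2: completion only
        for atom in alphabet - grouped.keys():
            grouped[atom] = [["#bot"]]
    return grouped

# P3 from Section 4.1, ground for the constant al:
p3 = [("inside_al", ["imprisoned_al"]), ("imprisoned_al", ["#top"])]
print(weak_completion(p3))
# The completion additionally maps the undefined atom raining to bottom:
print(weak_completion(p3, {"inside_al", "imprisoned_al", "raining"}))
```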
We consider the three-valued Łukasiewicz (or Ł-) semantics [12] and represent each interpretation I by a pair ⟨I⊤, I⊥⟩, where I⊤ contains all atoms which are mapped to true by I, I⊥ contains all atoms which are mapped to false by I, and I⊤ ∩ I⊥ = ∅. Atoms occurring neither in I⊤ nor in I⊥ are mapped to unknown. Let ⟨I⊤, I⊥⟩ and ⟨J⊤, J⊥⟩ be two interpretations. We define
⟨I⊤, I⊥⟩ ⊆ ⟨J⊤, J⊥⟩ iff I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥.
Under Ł-semantics we find F ∧ ⊤ ≡ F ∨ ⊥ ≡ F for each formula F, where ≡ denotes logical equivalence. Hence, occurrences of the symbols ⊤ and ⊥ in the bodies of clauses can be restricted to those occurring in facts.
It has been shown in [6] that logic programs as well as their weak completions admit a least model under Ł-semantics. Moreover, the least Ł-model of the weak completion of P can be obtained as the least fixed point of the following semantic operator, which was introduced in [19]: ΦP(⟨I⊤, I⊥⟩) = ⟨J⊤, J⊥⟩, where
J⊤ = {A | A ← body ∈ gP and body is true under ⟨I⊤, I⊥⟩},
J⊥ = {A | def(A, P) ≠ ∅ and body is false under ⟨I⊤, I⊥⟩ for all A ← body ∈ def(A, P)}.
We define P |=^lmwc_Ł F iff the formula F holds in the least Ł-model of wcP.
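To make the operator concrete, here is a small Python sketch of ΦP and its least fixed point; the encoding of ground programs and literals is a hypothetical choice of ours, not the paper's.

```python
# Sketch of the semantic operator Phi_P and its least fixed point under
# the three-valued Łukasiewicz semantics. A clause is a pair (head, body),
# a body is a list of literals, a literal is (atom, polarity); TOP and
# BOT encode the special fact bodies A <- top and A <- bot.

TOP, BOT = ("#top", True), ("#bot", True)

def lit_value(lit, t, f):
    """Three-valued value of a literal under the interpretation <t, f>."""
    atom, positive = lit
    if lit == TOP: return "true"
    if lit == BOT: return "false"
    if atom in t: return "true" if positive else "false"
    if atom in f: return "false" if positive else "true"
    return "unknown"

def body_value(body, t, f):
    values = [lit_value(l, t, f) for l in body]
    if all(v == "true" for v in values): return "true"
    if any(v == "false" for v in values): return "false"
    return "unknown"

def phi(program, t, f):
    """One application of Phi_P to the interpretation <t, f>."""
    defined = {head for head, _ in program}
    j_true = {head for head, body in program
              if body_value(body, t, f) == "true"}
    j_false = {a for a in defined               # def(a, P) is non-empty and
               if all(body_value(b, t, f) == "false"   # every body is false
                      for h, b in program if h == a)}
    return j_true, j_false

def least_model(program):
    """Iterate Phi_P from <empty, empty> up to its least fixed point."""
    t, f = set(), set()
    while True:
        nt, nf = phi(program, t, f)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

# Ground instance of P1 from Section 4.1 for the constant al:
p1 = [("inside_al", [("imprisoned_al", True)]),
      ("imprisoned_al", [TOP]),
      ("raining", [TOP])]
t, f = least_model(p1)
print(sorted(t), sorted(f))  # -> ['imprisoned_al', 'inside_al', 'raining'] []
```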
As shown in [2], the Ł-semantics is related to the well-founded semantics as follows: Let P be a program which does not contain a positive loop and let P+ = P \ {A ← ⊥ | A ← ⊥ ∈ P}. Let u be a new nullary relation symbol not occurring in P, let B range over ground atoms, and let
P∗ = P+ ∪ {B ← u | def(B, P) = ∅} ∪ {u ← ¬u}.
Then, the least Ł-model of wcP and the well-founded model for P∗ coincide.
An abductive framework consists of a logic program P, a set of abducibles
A_P = {A ← ⊤ | A is undefined in gP} ∪ {A ← ⊥ | A is undefined in gP},
a set of integrity constraints IC, i.e., expressions of the form ⊥ ← B1 ∧ ... ∧ Bn, and the entailment relation |=^lmwc_Ł, and is denoted by ⟨P, A_P, IC, |=^lmwc_Ł⟩.
One should observe that each finite set of positive and negative ground facts has an Ł-model. It can be obtained by mapping all heads occurring in this set to true. Thus, in the following definition, explanations are always satisfiable.
An observation O is a set of ground literals; it is explainable in the abductive framework ⟨P, A_P, IC, |=^lmwc_Ł⟩ iff there exists an E ⊆ A_P, called an explanation, such that P ∪ E is satisfiable, the least Ł-model of the weak completion of P ∪ E satisfies IC, and P ∪ E |=^lmwc_Ł L for each L ∈ O.
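A naive explanation search can be sketched as follows; the brute-force subset enumeration, the data representation, and the restriction to an empty set of integrity constraints are our own simplifying assumptions.

```python
from itertools import combinations

# Sketch of explanation search over the abducibles A_P. A clause is
# (head, body), a body a list of (atom, polarity) literals, with
# ('#top', True) / ('#bot', True) encoding the special fact bodies.
# Integrity constraints are assumed empty here.

def least_model(program):
    """Least fixed point of the semantic operator Phi_P from Section 2."""
    def val(lit, t, f):
        a, pos = lit
        if a == "#top": return "true"
        if a == "#bot": return "false"
        if a in t: return "true" if pos else "false"
        if a in f: return "false" if pos else "true"
        return "unknown"
    def bval(body, t, f):
        vs = [val(l, t, f) for l in body]
        if all(v == "true" for v in vs): return "true"
        if any(v == "false" for v in vs): return "false"
        return "unknown"
    t, f = set(), set()
    while True:
        heads = {h for h, _ in program}
        nt = {h for h, b in program if bval(b, t, f) == "true"}
        nf = {h for h in heads
              if all(bval(b, t, f) == "false" for g, b in program if g == h)}
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

def explains(program, abducibles, observation):
    """Yield the subsets E of the abducibles (smallest first) whose
    addition makes every literal of the observation true in the least
    model of wc(P u E); supersets of explanations are also emitted."""
    for k in range(len(abducibles) + 1):
        for e in combinations(abducibles, k):
            t, f = least_model(program + list(e))
            if all((a in t) if pos else (a in f) for a, pos in observation):
                yield list(e)

# P5 from Section 4.2: who shot Kennedy?
p5 = [("kennedy_dead", [("os_shot", True)]),
      ("kennedy_dead", [("se_shot", True)])]
abducibles = [("os_shot", [("#top", True)]), ("os_shot", [("#bot", True)]),
              ("se_shot", [("#top", True)]), ("se_shot", [("#bot", True)])]
obs = [("kennedy_dead", True), ("os_shot", False)]  # Kennedy dead, Oswald did not shoot
e = next(explains(p5, abducibles, obs))             # first = minimal explanation
print([head for head, _ in e])                      # -> ['os_shot', 'se_shot']
```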
3 A Reduction System for Indicative Conditionals
When parsing conditionals we assume that information concerning the mood of the conditionals has been extracted. In this paper we restrict our attention to the indicative mood. In the sequel let cond(T, A) be a conditional with condition T and consequence A, both of which are assumed to be finite sets of literals not containing a complementary pair of literals, i.e., a pair B and ¬B.
Conditionals are evaluated wrt background information specified as a logic program and a set of integrity constraints. More specifically, as the weak completion of each logic program always admits a least Ł-model, the conditionals are evaluated under these least Ł-models. In the remainder of this section let P be a program, IC be a finite set of integrity constraints, and MP be the least Ł-model of wcP such that MP satisfies IC. A state is either an expression of the form ic(P, IC, T, A) or one of true, false, unknown, or vacuous.
3.1 A Revision Operator
Let S be a finite set of ground literals not containing a complementary pair of literals, let B range over ground atoms, and let
rev(P, S) = (P \ def(S, P)) ∪ {B ← ⊤ | B ∈ S} ∪ {B ← ⊥ | ¬B ∈ S}.
The revision operator ensures that all literals occurring in S are mapped to true under the least Ł-model of wc rev(P, S).
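The revision operator is easily made concrete; the clause/literal encoding below, with clauses as (head, body) pairs and literals as (atom, polarity) pairs, is a hypothetical choice of ours.

```python
# Sketch of the revision operator rev(P, S): drop the definitions of the
# literals in S and assert them as positive or negative facts.

def rev(program, literals):
    """program: list of (head, body); literals: set of (atom, polarity)
    pairs not containing a complementary pair."""
    atoms = {a for a, _ in literals}
    kept = [(h, b) for h, b in program if h not in atoms]   # P \ def(S, P)
    facts = [(a, [("#top", True)]) if positive else (a, [("#bot", True)])
             for a, positive in literals]
    return kept + facts

# P7, the firing squad program from Section 4.3:
p7 = [("sig", [("execution", True)]),
      ("rmA", [("sig", True)]), ("rmB", [("sig", True)]),
      ("dead", [("rmA", True)]), ("dead", [("rmB", True)]),
      ("alive", [("dead", False)])]
p13 = rev(p7, {("sig", False)})          # revise wrt the literal not-sig
print(sorted(set(h for h, _ in p13)))    # -> ['alive', 'dead', 'rmA', 'rmB', 'sig']
```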
3.2 The Abstract Reduction System
Let cond(T, A) be an indicative conditional which is to be evaluated in the context of a logic program P and integrity constraints IC such that the least Ł-model MP of wcP satisfies IC. The initial state is ic(P, IC, T, A).
If the condition of the conditional is true, then the conditional holds if its consequence is true as well; otherwise it is either false or unknown.
ic(P, IC, T, A) →_it true iff MP(T) = true and MP(A) = true
ic(P, IC, T, A) →_if false iff MP(T) = true and MP(A) = false
ic(P, IC, T, A) →_iu unknown iff MP(T) = true and MP(A) = unknown
If the condition of the conditional is false, then the conditional is true under Ł-semantics. However, we believe that humans might distinguish between a conditional whose condition and consequence are true and a conditional whose condition is false. Hence, for the time being we consider a conditional whose condition is false as vacuous.
ic(P, IC, T, A) →_iv vacuous iff MP(T) = false
If the condition of the conditional is unknown, then we could assign a truth value to the conditional in accordance with the Ł-semantics. However, we suggest that in this case abduction and revision shall be applied in order to satisfy the condition. We start with the abduction rule:
ic(P, IC, T, A) →_ia ic(P ∪ E, IC, T \ O, A)
iff MP(T) = unknown and E explains O ⊆ T in the abductive framework ⟨P, A_P, IC, |=^lmwc_Ł⟩ and O ≠ ∅. Please note that T may contain literals which are mapped to true by MP. These literals can be removed from T by the rule →_ia because the empty set explains them.
Now we turn to the revision rule:
ic(P, IC, T, A) →_ir ic(rev(P, S), IC, T \ S, A)
iff MP(T) = unknown, S ⊆ T, S ≠ ∅, for each L ∈ S we find MP(L) = unknown, and the least Ł-model of wc rev(P, S) satisfies IC.
Altogether we obtain the reduction system R_IC operating on states and consisting of the rules {→_it, →_if, →_iu, →_iv, →_ia, →_ir}.
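The four terminal rules can be sketched as a single classification function over a given least Ł-model; the (atom, polarity) encoding of literals is a hypothetical choice of ours, and a state whose condition is unknown is reported as "open", signalling that →_ia or →_ir must be applied first.

```python
# Sketch of the terminal rules of the reduction system over a least
# Ł-model given as a pair (true_set, false_set) of atoms.

def classify(model, condition, consequence):
    """Return 'vacuous' (rule ->_iv), 'true'/'false'/'unknown' (rules
    ->_it, ->_if, ->_iu), or 'open' when the condition is unknown and
    abduction (->_ia) or revision (->_ir) is needed."""
    t, f = model
    def lit_val(atom, positive):
        if atom in t: return "true" if positive else "false"
        if atom in f: return "false" if positive else "true"
        return "unknown"
    def set_val(literals):              # conjunction of the literals
        values = [lit_val(a, p) for a, p in literals]
        if all(v == "true" for v in values): return "true"
        if any(v == "false" for v in values): return "false"
        return "unknown"
    cv = set_val(condition)
    if cv == "false": return "vacuous"
    if cv == "unknown": return "open"
    return set_val(consequence)

# The least Ł-model of wcP1 from Section 4.1 maps raining and inside(al) to true:
m1 = ({"imprisoned_al", "inside_al", "raining"}, set())
print(classify(m1, {("raining", True)}, {("inside_al", True)}))  # -> true
```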
4 Examples
4.1 Al in the Jailhouse
Rainy Day Suppose we are told that Al is imprisoned in a jailhouse on a rainy day, i.e., he is living in a cell inside the jailhouse and it is raining:
P1 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤, raining ← ⊤}.
The least Ł-model of wcP1 is ⟨{imprisoned(al), inside(al), raining}, ∅⟩. In order to evaluate conditional (1) with respect to P1 we observe that this model maps raining and inside(al) to true. Hence,
ic(P1, ∅, raining, inside(al)) →_it true.
Sunny Day Let us assume that Al is still imprisoned but that it is not raining:
P2 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤, raining ← ⊥}.
The least Ł-model of wcP2 is ⟨{imprisoned(al), inside(al)}, {raining}⟩. In order to evaluate conditional (1) wrt P2 we observe that this model maps raining to false. Hence,
ic(P2, ∅, raining, inside(al)) →_iv vacuous.
No Information about the Weather Suppose we are told that Al is imprisoned in a jailhouse but we know nothing about the weather:
P3 = {inside(X) ← imprisoned(X), imprisoned(al) ← ⊤}.
The least Ł-model of wcP3 is ⟨{imprisoned(al), inside(al)}, ∅⟩. In order to evaluate conditional (1) wrt P3 we observe that this model maps raining to unknown. Hence, we view raining as an observation which needs to be explained. The only possible explanation wrt ⟨P3, {raining ← ⊤, raining ← ⊥}, ∅, |=^lmwc_Ł⟩ is {raining ← ⊤}. Altogether we obtain
ic(P3, ∅, raining, inside(al)) →_ia ic(P1, ∅, ∅, inside(al)) →_it true.
Please note that P3 ∪ {raining ← ⊤} = P1 = rev(P3, raining). Hence, we could replace →_ia by →_ir in the previous reduction sequence.
4.2 The Shooting of Kennedy
President Kennedy was killed. There was a lengthy investigation about who actually shot the president, and in the end it was determined that Oswald did it:
P4 = {Kennedy_dead ← os_shot, Kennedy_dead ← se_shot, os_shot ← ⊤}.
The least Ł-model of wcP4 is ⟨{os_shot, Kennedy_dead}, ∅⟩. Evaluating the indicative conditional (2) under this model we find that its condition T = {Kennedy_dead, ¬os_shot} is mapped to false. Hence,
ic(P4, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_iv vacuous.
Now consider the case that we do not know that Oswald shot the president:
P5 = {Kennedy_dead ← os_shot, Kennedy_dead ← se_shot}.
As the least Ł-model of wcP5 we obtain ⟨∅, ∅⟩ and find that it maps T to unknown. We may try to consider T as an observation and explain it wrt the abductive framework ⟨P5, A_P5, ∅, |=^lmwc_Ł⟩, where A_P5 consists of the positive and negative facts for os_shot and se_shot. The only possible explanation is E = {os_shot ← ⊥, se_shot ← ⊤}. As the least Ł-model of wc(P5 ∪ E) we obtain ⟨{Kennedy_dead, se_shot}, {os_shot}⟩. As this model maps se_shot to true we find
ic(P5, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_ia ic(P5 ∪ E, ∅, ∅, se_shot) →_it true.
In this example we could also apply revision. Let
P6 = rev(P5, T) = {Kennedy_dead ← ⊤, os_shot ← ⊥}.
We obtain
ic(P5, ∅, {Kennedy_dead, ¬os_shot}, se_shot) →_ir ic(P6, ∅, ∅, se_shot) →_iu unknown
because the least Ł-model of wcP6 is ⟨{Kennedy_dead}, {os_shot}⟩ and maps se_shot to unknown. However, as conditional (2) can be evaluated by abduction and without revising the initial program, this derivation is not preferred.
4.3 The Firing Squad
This example is presented in [13]. If the court orders an execution, then the captain will give the signal, upon which riflemen A and B will shoot the prisoner. Consequently, the prisoner will be dead. We assume that the court's decision is unknown, that both riflemen are accurate, alert and law-abiding, and that the prisoner is unlikely to die from any other causes. Let
P7 = {sig ← execution, rmA ← sig, rmB ← sig, dead ← rmA, dead ← rmB, alive ← ¬dead}.
The least Ł-model of wcP7 is
⟨∅, ∅⟩. (7)
Rifleman A did not Shoot To evaluate conditional (3) wrt this model we first observe that the condition ¬rmA is mapped to unknown by (7). Considering the abductive framework
⟨P7, {execution ← ⊤, execution ← ⊥}, ∅, |=^lmwc_Ł⟩, (8)
¬rmA can be explained by
{execution ← ⊥}. (9)
Let P8 = P7 ∪ (9). The least Ł-model of wcP8 is
⟨{alive}, {execution, sig, rmA, rmB, dead}⟩. (10)
Because alive is mapped to true under this model, we obtain
ic(P7, ∅, ¬rmA, alive) →_ia ic(P8, ∅, ∅, alive) →_it true.
The Prisoner is Alive Now consider conditional (4). Because (7) maps alive to unknown, we treat alive as an observation. Considering again the abductive framework (8), this observation can be explained by (9). Hence, we evaluate the consequence of (4) under (10) and find that the captain did not signal:
ic(P7, ∅, alive, ¬sig) →_ia ic(P8, ∅, ∅, ¬sig) →_it true.
Rifleman A Shot Let us turn our attention to conditional (5). Because (7) maps rmA to unknown, we treat rmA as an observation. Considering the abductive framework (8), this observation can be explained by
{execution ← ⊤}. (11)
Let P9 = P7 ∪ (11). The least Ł-model of wcP9 is
⟨{execution, sig, rmA, rmB, dead}, {alive}⟩. (12)
Because rmB is mapped to true under this model, we obtain
ic(P7, ∅, rmA, rmB) →_ia ic(P9, ∅, ∅, rmB) →_it true.
The Captain Gave no Signal Let us now consider conditional (6). Its condition T = {¬sig, rmA} is mapped to unknown by (7). We can only explain ¬sig by (9) and rmA by (11), but we cannot explain T because
wc((9) ∪ (11)) = {execution ↔ ⊤ ∨ ⊥} ≡ {execution ↔ ⊤}.
In order to evaluate this conditional we have to consider revisions.
1. A brute force method is to revise the program wrt all conditions. Let
P10 = rev(P7, {¬sig, rmA}) = (P7 \ def({¬sig, rmA}, P7)) ∪ {sig ← ⊥, rmA ← ⊤}.
The least Ł-model of wcP10 is
⟨{rmA, dead}, {sig, rmB, alive}⟩. (13)
This model maps dead to true and rmB to false, and we obtain
ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P10, ∅, ∅, {dead, ¬rmB}) →_it true.
2. As we prefer minimal revisions, let us consider
P11 = rev(P7, rmA) = (P7 \ def(rmA, P7)) ∪ {rmA ← ⊤}.
The least Ł-model of wcP11 is ⟨{dead, rmA}, {alive}⟩. Unfortunately, ¬sig is still mapped to unknown by this model, but it can be explained in the abductive framework ⟨P11, {execution ← ⊤, execution ← ⊥}, ∅, |=^lmwc_Ł⟩ by (9). Let P12 = P11 ∪ (9). Because the least Ł-model of wcP12 is
⟨{dead, rmA}, {alive, execution, sig, rmB}⟩ (14)
we obtain
ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P11, ∅, ¬sig, {dead, ¬rmB}) →_ia ic(P12, ∅, ∅, {dead, ¬rmB}) →_it true.
The revision leading to P11 is minimal in the sense that only the definition of rmA is revised, and without this revision the condition of (6) cannot be explained. This is the only minimal revision, as we will show in the sequel.
3. An alternative minimal revision could be the revision of P7 wrt ¬sig:
P13 = rev(P7, ¬sig) = (P7 \ def(¬sig, P7)) ∪ {sig ← ⊥}.
The least Ł-model of wcP13 is
⟨{alive}, {sig, dead, rmA, rmB}⟩. (15)
Because this model maps rmA to false we obtain:
ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ir ic(P13, ∅, rmA, {dead, ¬rmB}) →_iv vacuous.
4. So far the first step in evaluating the conditional was a revision step. Alternatively, we could start with an abduction step. ¬sig can be explained in the abductive framework (8) by (9), leading to the program P8 and the least Ł-model (10). Because this model maps rmA to false we obtain:
ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ia ic(P8, ∅, rmA, {dead, ¬rmB}) →_iv vacuous.
5. Let us now reverse the order in which the conditions are treated and start by explaining rmA. This has already been done before, and we obtain P9 and the least Ł-model (12). Because this model maps ¬sig to false we obtain:
ic(P7, ∅, {¬sig, rmA}, {dead, ¬rmB}) →_ia ic(P9, ∅, ¬sig, {dead, ¬rmB}) →_iv vacuous.
In the last example we have discussed five different approaches to handle the case that the truth value of the conditions of a conditional is unknown and cannot be explained: maximal (parallel) revision (MaxRev) as well as partial (sequential) revision and partial (sequential) explanation, where in the sequential approaches the literals in the condition of the conditional are treated in different orders, left-to-right and right-to-left, and where we consider sets to be ordered (PRevLR, PRevRL, PExLR, PExRL). The results are summarized in Table 1, where the conditional as well as the literals are evaluated wrt the final least Ł-model computed in the different approaches.
Which approach shall be preferred? Because rifleman A causally depends on the captain's signal but not vice-versa, and given that in this example clauses express causes, and effects come after causes, it would make sense to take the causal ordering as the preferred one for abducing the conditions. Hence, PExLR would be preferred. However, because rifleman A is an agent, the causes of his actions can be internal to him, viz. his decisions. Hence, when autonomous agents are involved (or spontaneous phenomena like radioactivity), the ordering for abducing the conditions is independent of causal dependency.
5 Properties
In this section, let P be a program, ⟨I⊤, I⊥⟩ the least Ł-model of wcP, IC a set of integrity constraints, ⟨P, A_P, IC, |=^lmwc_Ł⟩ an abductive framework, and L a literal.
Proposition 1. If O can be explained by E ⊆ A_P and ⟨J⊤, J⊥⟩ is the least Ł-model of wc(P ∪ E), then ⟨I⊤, I⊥⟩ ⊆ ⟨J⊤, J⊥⟩.
                     MaxRev   PRevRL   PRevLR   PExLR    PExRL
final program        P10      P12      P13      P8       P9
final least Ł-model  (13)     (14)     (15)     (10)     (12)
sig                  false    false    false    false    true
rmA                  true     true     false    false    true
dead                 true     true     false    false    true
rmB                  false    false    false    false    true
alive                false    false    true     true     false
execution            unknown  false    unknown  false    true
conditional (6)      true     true     vacuous  vacuous  vacuous

Table 1. Different approaches to evaluate conditional (6).
Proof. The least Ł-models ⟨I⊤, I⊥⟩ and ⟨J⊤, J⊥⟩ are the least fixed points of the semantic operators ΦP and Φ_{P∪E}, respectively. Let ⟨I⊤_n, I⊥_n⟩ and ⟨J⊤_n, J⊥_n⟩ be the interpretations obtained after applying ΦP and Φ_{P∪E} n times to ⟨∅, ∅⟩, respectively. We can show by induction on n that ⟨I⊤_n, I⊥_n⟩ ⊆ ⟨J⊤_n, J⊥_n⟩. The proposition follows immediately.
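The omitted induction can be sketched as follows (our reconstruction; the paper only states the claim):

```latex
\emph{Base case:} $\langle I_0^\top, I_0^\perp\rangle =
\langle\emptyset,\emptyset\rangle = \langle J_0^\top, J_0^\perp\rangle$.

\emph{Induction step:} assume $\langle I_n^\top, I_n^\perp\rangle \subseteq
\langle J_n^\top, J_n^\perp\rangle$. If $A \in I_{n+1}^\top$, then some
$A \leftarrow \mathit{body} \in gP$ has a true body under
$\langle I_n^\top, I_n^\perp\rangle$; truth of a body is monotone w.r.t.\
$\subseteq$, so the same clause, which also belongs to $g(P \cup E)$, has
a true body under $\langle J_n^\top, J_n^\perp\rangle$, and hence
$A \in J_{n+1}^\top$. If $A \in I_{n+1}^\perp$, then
$\mathit{def}(A, P) \neq \emptyset$ and all bodies in $\mathit{def}(A, P)$
are false under $\langle I_n^\top, I_n^\perp\rangle$; since $E$ contains
only facts for atoms undefined in $P$, we have
$\mathit{def}(A, P \cup E) = \mathit{def}(A, P)$, falsehood of a body is
likewise monotone, and hence $A \in J_{n+1}^\perp$.
```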
Proposition 1 guarantees that whenever →_ia is applied, previously checked conditions of a conditional need not be re-checked. The following Proposition 2 gives the same guarantee whenever →_ir is applied.
Proposition 2. If the least Ł-model of wcP maps L to unknown and ⟨J⊤, J⊥⟩ is the least Ł-model of wc rev(P, L), then ⟨I⊤, I⊥⟩ ⊂ ⟨J⊤, J⊥⟩.
Proof. By induction on the number of applications of ΦP and Φ_{rev(P,L)}.
Proposition 3. R_IC is terminating.
Proof. Each application of →_it, →_if, →_iu or →_iv leads to an irreducible expression. Let cond(T, A) be the conditional to which R_IC is applied. Whenever →_ir is applied, the definition of at least one literal L occurring in T is revised such that the least Ł-model of the weak completion of the revised program maps L to true. Because T does not contain a complementary pair of literals, this revised definition of L is never revised again. Hence, there cannot exist a rewriting sequence with infinitely many occurrences of →_ir. Likewise, there cannot exist a rewriting sequence with infinitely many occurrences of →_ia because each application of →_ia to a state ic(P, IC, T, A) reduces the number of literals occurring in T.
Proposition 4. R_IC is not confluent.
Proof. This follows immediately from the examples presented in Section 4.
6 Open Questions and the Proposal of an Experiment
Open Questions The new approach gives rise to a number of questions. Which of the approaches is preferable? This may be a question of pragmatics imputable to the user. The default, because no pragmatic information has been added, is maximal revision for skepticism and minimal revision for credulity. Do humans evaluate multiple conditions sequentially or in parallel? If multiple conditions are evaluated sequentially, are they evaluated in some preferred order? Shall explanations be computed skeptically or credulously? How can the approach be extended to handle subjunctive conditionals?
The Proposal of an Experiment Subjects are given the background information specified in the program P9. They are confronted with conditionals like (6) as well as variants with different consequences (e.g., execution instead of {dead, ¬rmB}) or conditionals where the order of the two conditions is reversed. We then ask the subjects to answer questions like: Does the conditional hold? or Did the court order an execution? Depending on the answers we may learn which approaches are preferred by humans.
Acknowledgements We thank Bob Kowalski for valuable comments on an earlier draft of the paper.
References
1. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the abstract and the social case of the selection task. In Proceedings Eleventh International Symposium on Logical Formalizations of Commonsense Reasoning, 2013.
2. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505. Cognitive Science Society, 2012.
3. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modelling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 24:61–85, 2014.
4. M. L. Ginsberg. Counterfactuals. Artificial Intelligence, 30(1):35–79, 1986.
5. S. Hölldobler and C. D. P. Kencana Ramli. Contraction properties of a semantic operator for human reasoning. In Lei Li and K. K. Yen, editors, Proceedings of the Fifth International Conference on Information, pages 228–231. International Information Institute, 2009.
6. S. Hölldobler and C. D. P. Kencana Ramli. Logic programs under three-valued Łukasiewicz's semantics. In P. M. Hill and D. S. Warren, editors, Logic Programming, volume 5649 of Lecture Notes in Computer Science, pages 464–478. Springer-Verlag Berlin Heidelberg, 2009.
7. S. Hölldobler and C. D. P. Kencana Ramli. Logics and networks for human reasoning. In C. Alippi, M. M. Polycarpou, C. G. Panayiotou, and G. Ellinas, editors, Artificial Neural Networks – ICANN, volume 5769 of Lecture Notes in Computer Science, pages 85–94. Springer-Verlag Berlin Heidelberg, 2009.
8. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Proceedings Tenth International Symposium on Logical Formalizations of Commonsense Reasoning, 2011. commonsensereasoning.org/2011/proceedings.html.
9. A. C. Kakas, R. A. Kowalski, and F. Toni. Abductive logic programming. Journal of Logic and Computation, 2(6):719–770, 1993.
10. D. Lewis. Counterfactuals. Blackwell Publishers, Oxford, 1973.
11. J. W. Lloyd. Foundations of Logic Programming. Springer, Berlin, Heidelberg, 1987.
12. J. Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On three-valued logic. In L. Borkowski, editor, Jan Łukasiewicz: Selected Works, pages 87–88. North Holland, 1990.
13. J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000.
14. L. M. Pereira and J. N. Aparício. Relevant counterfactuals. In Proceedings 4th Portuguese Conference on Artificial Intelligence (EPIA), volume 390 of Lecture Notes in Computer Science, pages 107–118. Springer, 1989.
15. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. An abductive reasoning approach to the belief-bias effect. In C. Baral, G. De Giacomo, and T. Eiter, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the 14th International Conference, pages 653–656, Cambridge, MA, 2014. AAAI Press.
16. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. Contextual abductive reasoning with side-effects. In I. Niemelä, editor, Theory and Practice of Logic Programming (TPLP), volume 14, pages 633–648, Cambridge, UK, 2014. Cambridge University Press.
17. F. Ramsey. The Foundations of Mathematics and Other Logical Essays. Harcourt, Brace and Company, 1931.
18. N. Rescher. Conditionals. MIT Press, Cambridge, MA, 2007.
19. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. MIT Press, 2008.
... Let P be a program, M P be the least model of wcP and if C then D be a conditional. [5,8] introduced an abstract reduction system for conditionals (ARSC) where the states are either the truth values or tuples containing a program and two consistent and finite sets of literals. 1 The initial state for a given program P and a conditional if C then D is P, C, D . ...
... • P, C, D −→ r rev(P, S), C\ S, D iff M P (C) = unknown, S ⊆ C, S ∅, for all L ∈ S we find M P (L) = unknown. [5,8] proposed the following strategy for the evaluation of conditionals: if C then D is evaluated as follows: ...
Conference Paper
Full-text available
Conditionals play a prominent role in human reasoning and, hence, all cog-nitive theories try to evaluate conditionals like humans do. In this paper, we are particularly interested in the Weak Completion Semantics, a new cognitive theory based on logic programming, the weak completion of a program, the three-valued Łukasiewicz logic, and abduction. We show that the evaluation of conditionals within the Weak Completion Semantics as defined so far leads to counterintuitive results. We propose to distinguish between obligation and factual conditionals with necessary or sufficient conditions, and adapt the set of abducibles accordingly. This does not only remove the previously encountered counterintuitive results, but also leads to a new model for the Wason Selection Task.
... Due to the similarity of common features to WFS and WCS, the Propositions and Proofs in [45] can be transposed to the WFS setting, which we do not repeat here, given the distinct emphasizes just made salient about each of these two otherwise conceptually similar complementary approaches. 8 LP abduction and revision are employed in [15] to evaluate indicative conditionals, but not counterfactual conditionals. LP abduction is employed through a rewrite system to find solutions for an abductive framework; the rewrite system intuitively captures the natural semantics of indicative conditionals. ...
Chapter
This paper supplies a computational model, via Logic Programming (LP), of counterfactual reasoning of autonomous agents with application to morality. Counterfactuals are conjectures about what would have happened had an alternative event occurred. The first contribution of the paper is showing how counterfactual reasoning is modeled using LP, benefiting from LP abduction and updating. The approach is inspired by Pearl’s structural causal model of counterfactuals, where causal direction and conditional reasoning are captured by inferential arrows of rules in LP. Herein, LP abduction hypothesizes background conditions from given evidence or observations, whereas LP updating frame these background conditions as a counterfactual’s context, and then imposes causal interventions on the program through defeasible LP rules. The second contribution it to apply counterfactuals to agent morality using this LP-based approach. We demonstrate its potential for specifying and querying moral issues, by examining viewpoints on moral permissibility via classic moral principles and examples taken from the literature. Application results were validated on a prototype implementing the approach on top of an integrated LP abduction and updating system supporting tabling.
... More details about the evaluation of conditionals under WCS can be found in [7,9]. ...
Conference Paper
Full-text available
I present a logic programming approach based on the weak completions semantics to model human reasoning tasks, and apply the approach to model the suppression task, the selection task as well as the belief-bias effect, to compute preferred mental models of spatial reasoning tasks and to evaluate indicative as well as counterfactual conditionals.
... (4) serves as a memory for neuron done which indicates thal all explanations have been explored. (5) induces C to output the next possible explanation by activating next, which depends on sync. ...
Conference Paper
Full-text available
We present a new connectionist network to compute skeptical abduction. Combined with the CORE method to compute least fixed points of semantic operators for logic programs, the network is a pre-requisite to solve human reasoning tasks like the suppression task in a connectionist setting.
... Another issue that we need to investigate-and already proposed in [6]-is to carry out psychological experiments which verify whether our assumption of MRFA is indeed adequate for human reasoning. Furthermore, as discussed in the last part of Section 8, we need to clarify which of the two options is more adequate, in case the condition of a counterfactual is unknown. ...
Conference Paper
Full-text available
We present a new approach to evaluate conditionals in human reasoning. This approach is based on the weak completion semantics which has been successfully applied to adequately model various other human reasoning tasks in the past. The main idea is to explicitly consider the case, where the condition of a conditional is unknown with respect to some background knowledge, and to evaluate it with minimal revision followed by abduction. We formally compare our approach to a recent approach by Schulz and demonstrate that our proposal is superior in that it can handle more human reasoning tasks.
Chapter
Counterfactuals capture the process of reasoning about a past event that did not occur, namely what would have happened had this event occurred; or, vice versa, reasoning about an event that did occur and asking what would have happened had it not. In this chapter, we innovatively make use of LP abduction and updating in an implemented procedure for evaluating counterfactuals, taking the established structural approach of Pearl as reference. Our approach concentrates on pure non-probabilistic counterfactual reasoning in LP, resorting to abduction and updating in order to determine the logical validity of counterfactuals under the Well-Founded Semantics. Nevertheless, the approach is adaptable to other semantics, too. Even though the LP technique introduced in this chapter is relevant for modeling counterfactual moral reasoning, its use is general, not specific to morality.
Article
Formal approaches that aim at representing human reasoning should be evaluated based on how humans actually reason. One way of doing so is to investigate whether psychological findings of human reasoning patterns are represented in the theoretical model. The computational logic approach discussed here is the so-called weak completion semantics, which is based on the three-valued Łukasiewicz logic. We explain how this approach adequately models Byrne's suppression task, a psychological study whose experimental results show that participants' conclusions systematically deviate from the classically correct answers. As weak completion semantics is a novel technique in the field of computational logic, it is important to examine how it corresponds to other, already established non-monotonic approaches. For this purpose we investigate the relation of weak completion to completion and to three-valued stable model semantics. In particular, we show that well-founded semantics, a widely accepted approach in the field of non-monotonic reasoning, corresponds to weak completion semantics for a specific class of modified programs.
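The role of the three-valued Łukasiewicz logic can be illustrated with the standard numerical encoding T = 1, U = 1/2, F = 0. The fragment below is a sketch under that assumption, not code from any of the papers listed here; its point is the one property that distinguishes Łukasiewicz from strong Kleene logic and that makes least models of weak completions exist.

```python
# Sketch of the three-valued Lukasiewicz connectives under the standard
# numerical encoding (an assumption of this sketch): T=1, U=0.5, F=0.

T, U, F = 1.0, 0.5, 0.0

def neg(a):        return 1 - a
def conj(a, b):    return min(a, b)
def disj(a, b):    return max(a, b)
def implies(a, b): return min(1, 1 - a + b)   # Lukasiewicz implication
def equiv(a, b):   return 1 - abs(a - b)

# The crucial difference from strong Kleene logic: an implication (or an
# equivalence) between two unknowns is TRUE in Lukasiewicz logic, so the
# equivalences of a weak completion are satisfiable even when both sides
# are unknown. Under Kleene semantics both would evaluate to U instead.
assert implies(U, U) == T
assert equiv(U, U) == T
```

Conjunction, disjunction and negation agree with Kleene's connectives under this encoding; only implication and equivalence differ, and exactly on the unknown-unknown case.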
Article
The belief bias effect is a phenomenon which occurs when we think that we judge an argument based on our reasoning, but are actually influenced by our beliefs and prior knowledge. Evans, Barston and Pollard carried out a psychological syllogistic reasoning task to demonstrate this effect. Participants were asked whether they would accept or reject a given syllogism. We discuss one specific case which is commonly assumed to be believable but which is actually not logically valid. By introducing abnormalities, abduction and background knowledge, we adequately model this case under the weak completion semantics. Our formalization reveals new questions about possible extensions in abductive reasoning. For instance, observations and their explanations might include some relevant prior abductive contextual information concerning a side-effect, or leading to a contestable or refutable side-effect. A weaker notion indicates the support of some relevant consequences by a prior abductive context. Yet another definition describes jointly supported relevant consequences, which captures the idea of two observations containing mutually supportive side-effects. Though motivated and exemplified by the running psychology application, the various new general abductive context definitions are introduced here and given a declarative semantics for the first time, and they have a much wider scope of application. Inspection points, a concept introduced by Pereira and Pinto, allow us to express these definitions syntactically and to intertwine them into an operational semantics.
Book
A new proposal for integrating the employment of formal and empirical methods in the study of human reasoning. In Human Reasoning and Cognitive Science, Keith Stenning and Michiel van Lambalgen, a cognitive scientist and a logician, argue for the indispensability of modern mathematical logic to the study of human reasoning. Logic and cognition were once closely connected, they write, but were "divorced" in the past century; the psychology of deduction went from being central to the cognitive revolution to being the subject of widespread skepticism about whether human reasoning really happens outside the academy. Stenning and van Lambalgen argue that logic and reasoning have been separated because of a series of unwarranted assumptions about logic. They contend that psychology cannot ignore processes of interpretation in which people, wittingly or unwittingly, frame problems for subsequent reasoning. The authors employ a neurally implementable defeasible logic for modeling part of this framing process, and show how it can be used to guide the design of experiments and to interpret results.
Article
Written by one of the preeminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, economics, philosophy, cognitive science, and the health and social sciences. Judea Pearl presents and unifies the probabilistic, manipulative, counterfactual, and structural approaches to causation and devises simple mathematical tools for studying the relationships between causal connections and statistical associations. The book will open the way for including causal analysis in the standard curricula of statistics, artificial intelligence, business, epidemiology, social sciences, and economics. Students in these fields will find natural models, simple inferential procedures, and precise mathematical definitions of causal concepts that traditional texts have evaded or made unduly complicated. The first edition of Causality has led to a paradigmatic change in the way that causality is treated in statistics, philosophy, computer science, social science, and economics. Cited in more than 5,000 scientific publications, it continues to liberate scientists from the traditional molds of statistical thinking. In this revised edition, Judea Pearl elucidates thorny issues, answers readers' questions, and offers a panoramic view of recent advances in this field of research. Causality will be of interest to students and professionals in a wide variety of fields. Anyone who wishes to elucidate meaningful relationships from data, predict effects of actions and policies, assess explanations of reported events, or form theories of causal understanding and causal speech will find this book stimulating and invaluable.