Towards Efficient Default Reasoning
Ilkka Niemelä
Department of Computer Science
Helsinki University of Technology
Otakaari 1, FIN-02150 Espoo, Finland
Ilkka.Niemela@hut.fi
Abstract
A decision method for Reiter's default logic is developed. It can determine whether a default theory has an extension, whether a formula is in some extension of a default theory and whether a formula is in every extension of a default theory. The method handles full propositional default logic. It can be implemented to work in polynomial space and by using only a theorem prover for the underlying propositional logic as a subroutine. The method divides default reasoning into two major subtasks: the search task of examining every alternative for extensions, which is solved by backtracking search, and the classical reasoning task, which can be implemented by a theorem prover for the underlying classical logic. Special emphasis is given to the search problem. The decision method employs a new compact representation of extensions which reduces the search space. Efficient techniques for pruning the search space further are developed.
1 Introduction
In this paper we develop a theorem-proving method for the default logic of Reiter [1980]. Default logic is one of the best-known nonmonotonic logics [Marek and Truszczynski, 1993]. There is a body of results indicating that default logic captures a large number of different forms of nonmonotonic reasoning. Default logic is closely related to logic programs and deductive databases [Gelfond and Lifschitz, 1990]. Connections have been established between default logic and autoepistemic logic and McDermott and Doyle style nonmonotonic modal logics [Konolige, 1988; Truszczynski, 1991], circumscription [Etherington, 1987], diagnosis [Reiter, 1987], and abductive reasoning [Poole, 1988].
In default logic knowledge is represented by a default theory, which consists of ordinary first-order formulae and nonmonotonic inference rules, default rules. Possible sets of conclusions from a default theory are defined in terms of extensions of the default theory. In this paper we consider three basic reasoning tasks: extension existence (whether a default theory has an extension), brave reasoning (whether a formula is in some extension of a default theory) and cautious reasoning (whether a formula belongs to every extension of a default theory).
Computational properties of nonmonotonic reasoning have received considerable attention. This research provides a valuable basis for developing nonmonotonic theorem-proving techniques. The complexity of propositional default reasoning has been located at the second level of the polynomial time hierarchy [Garey and Johnson, 1979]: extension existence and brave reasoning are Σ₂ᵖ-complete and cautious reasoning Π₂ᵖ-complete [Gottlob, 1992]. This means that full propositional default logic can be implemented in polynomial space and that it is strictly harder than propositional reasoning unless the polynomial time hierarchy collapses.
Developing theorem-proving techniques for default logic and other nonmonotonic logics has turned out to be difficult. Existing techniques are quite straightforward and little emphasis has been laid on efficiency considerations. For example, in recent approaches [Junker and Konolige, 1990; Risch and Schwind, 1994; Baader and Hollunder, 1992] to automating default logic exponential space is needed or the methods are based on solving subtasks that seem to be computationally harder than the original default reasoning task.
We consider first approaches where a reasoning problem in default logic is reduced to another problem such as a truth maintenance problem [Junker and Konolige, 1990] or a constraint satisfaction problem [Ben-Eliyahu and Dechter, 1991]. A crucial feature of the reduction mappings is that the classical deductions needed in default reasoning have to be encoded. This implies that the reductions are computationally feasible only for very restricted subclasses of default logic. Typically, the reductions lead to an exponential increase in the problem size because of the exponential number of deductions to be encoded. Moreover, computing the reduction mapping appears to be harder than the original default reasoning problem. This is because even the problem of finding a single deduction, i.e., a proof of a given formula from a set of formulae, is closely related to logic-based abduction, which is Σ₂ᵖ-complete [Eiter and Gottlob, 1992]. For a more detailed discussion, see [Niemela, 1994].
Risch and Schwind [1994] propose a tableau-based method for finding extensions. Also this approach suffers from the problem of exponential worst-case space complexity. Baader and Hollunder [1992] present an approach to generate all extensions of a default theory by pruning defaults in a top-down way. When eliminating defaults, this method uses heavily a subroutine computing all maximal consistent subsets, i.e., given sets E and H the subroutine is expected to find all maximal subsets H' of H such that E ∪ H' is consistent. It seems that finding all such maximal subsets is computationally expensive and at least as hard as the decision problems in default logic. This is because, for example, finding a maximal subset not containing a given formula ψ ∈ H is closely related to logic-based abduction. To see this, consider a maximal subset H' ⊆ H for which E ∪ H' is consistent but ψ ∈ H − H'. Hence, E ∪ H' ∪ {ψ} is not consistent, which implies that ¬ψ is a logical consequence of E ∪ H', i.e., H' is an abductive explanation of ¬ψ from the hypotheses H and the background theory E.
In this paper we develop a decision method for default logic that handles important subclasses of default reasoning (i.e., the full propositional case as well as closed default rules together with a decidable fragment of the underlying first-order logic) but does not suffer from the two problems: (i) exponential space requirements and (ii) the use of computationally too difficult subtasks as a part of the method. As a basis of the work we have taken a decision method for autoepistemic logic presented in [Niemela, 1994] because this approach satisfies the two requirements. As autoepistemic logic and default logic are closely related [Konolige, 1988; Niemela, 1992], the approach is directly applicable to default logic. However, there seems to be some room for improvements. In this paper we take the basic ideas from [Niemela, 1994] and apply them directly to default logic in order to fully exploit the special characteristics of default reasoning.
The paper is organized as follows. Section 2 introduces default logic and develops a concise representation of extensions. This representation is based on ideas from autoepistemic reasoning and it forms the basis for the decision method. Section 3 develops a basic algorithm for default reasoning. It provides a framework for integrating optimization techniques. Section 4 presents optimizations of the basic algorithm and Section 5 contains the concluding remarks.
2 Default Logic
We are going to use intuitions from autoepistemic reasoning and to facilitate this we employ a somewhat non-standard notation for default logic. First, we introduce a new operator nb(): nb(φ) expresses that a formula φ is "not believed", i.e., φ does not belong to the extension in question. Second, we write default rules using the new operator. A default rule is an expression of the form

a₁, ..., aₙ, nb(b₁), ..., nb(bₘ) → c    (1)

where a₁, ..., aₙ, b₁, ..., bₘ, c are arbitrary (first-order) formulae. This is just an alternative notation for a default rule in the standard form

(a₁ ∧ ⋯ ∧ aₙ : ¬b₁, ..., ¬bₘ) / c.
A default theory is a set of default rules of the form (1). In Reiter's [1980] original presentation a default theory can contain ordinary first-order formulae in addition to default rules. Here, for uniformity, the first-order formulae are represented as default rules, i.e., a first-order formula φ is represented as a rule → φ.
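As a concrete (and purely illustrative) reading of this notation, rules of the form (1) can be encoded as simple triples; the helper below and the bird/flies default are this sketch's own examples, not taken from the paper.

```python
# A rule  a1, ..., an, nb(b1), ..., nb(bm) -> c  is encoded as a triple
# (prerequisites, nb_formulae, consequent); formulae are kept abstract here.
def rule(prerequisites, nb_formulae, consequent):
    return (tuple(prerequisites), tuple(nb_formulae), consequent)

# An ordinary formula p is represented as a rule with an empty body:  -> p
fact_p = rule([], [], "p")

# Reiter's default  bird : flies / flies  becomes  bird, nb(~flies) -> flies,
# since the justification 'flies' is consistent exactly when ~flies is not believed.
bird_default = rule(["bird"], ["~flies"], "flies")
```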
A default rule of the form (1) can be thought of as representing an autoepistemic formula

La₁ ∧ ⋯ ∧ Laₙ ∧ L¬Lb₁ ∧ ⋯ ∧ L¬Lbₘ → c.

Under this translation, proposed by Truszczynski [1991], an extension of a default theory is the L-free part of an L-hierarchic expansion of the translated theory [Niemela, 1992] or the L-free part of a consistent S-expansion of the translated theory for a range of nonmonotonic modal logics S [Truszczynski, 1991]. Hence default rules of the form (1) provide an interesting "standard" form of autoepistemic formulae.
Next we give the definition of an extension of a default theory. Technically our definition is somewhat different from that given by Reiter [1980] but it leads to the same class of extensions. The definition is given by using the notions of a deductive closure and NB-formulae.

We call NB-formulae expressions of the form nb(φ) where φ is a formula. For a set of formulae S, nb(S) = {nb(φ) | φ ∈ S}. By S⁺ (S⁻) we denote the set of formulae (NB-formulae) in S. For example, if S = {a, nb(b), nb(c)}, then S⁺ = {a} and S⁻ = {nb(b), nb(c)}.

We denote by Dcl(E, L) the deductive closure of a set of rules E of the form (1) and a set of formulae and NB-formulae L. Dcl(E, L) is the smallest set of formulae which contains L⁺ and which is closed under E' and first-order derivations, where

E' = {a₁, ..., aₙ → c | a₁, ..., aₙ, nb(b₁), ..., nb(bₘ) → c ∈ E
      and for all i = 1, ..., m, nb(bᵢ) ∈ L⁻}.    (2)
For example, let E = {nb(p) → q; ¬¬q → ¬¬r} and L = {p, nb(p)}. Then E' = {→ q; ¬¬q → ¬¬r} and Dcl(E, L) = Th({p, q, ¬¬r}), where Th(S) denotes the set of the first-order consequences of a set of formulae S. This means that, e.g., r ∈ Dcl(E, {p, nb(p)}).

Notice that the deductive closure is a monotonic operator also with respect to the premises L: if L ⊆ L', then Dcl(E, L) ⊆ Dcl(E, L').
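To make the deductive closure concrete, here is a minimal Python sketch of the membership test goal ∈ Dcl(E, L) for the propositional case, with a brute-force truth-table check standing in for the theorem prover; the formula encoding and all names are ad hoc choices for this sketch only. The final lines reproduce the example above.

```python
from itertools import product

# Propositional formulae as nested tuples:
#   ("atom", "p"), ("not", f), ("and", f, g), ("or", f, g)
def atoms(f):
    if f[0] == "atom":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    op = f[0]
    if op == "atom":
        return v[f[1]]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    if op == "or":
        return holds(f[1], v) or holds(f[2], v)
    raise ValueError(op)

def entails(premises, goal):
    """Brute-force propositional entailment test: premises |= goal."""
    syms = sorted(set(atoms(goal)).union(*(atoms(p) for p in premises)))
    for values in product([False, True], repeat=len(syms)):
        v = dict(zip(syms, values))
        if all(holds(p, v) for p in premises) and not holds(goal, v):
            return False
    return True

def in_dcl(E, L_plus, L_nb, goal):
    """Membership test  goal in Dcl(E, L)  for a propositional rule set E.
    E is a list of rules (prerequisites, nb_formulae, consequent); L_plus holds
    the ordinary formulae of L and L_nb the formulae b with nb(b) in L."""
    # E' of (2): keep the rules whose NB-premises nb(b_i) are all present in L.
    E_prime = [(pre, c) for (pre, nbs, c) in E if all(b in L_nb for b in nbs)]
    derived = list(L_plus)
    changed = True
    while changed:                       # apply rules until no new rule fires
        changed = False
        for pre, c in E_prime:
            if c not in derived and all(entails(derived, a) for a in pre):
                derived.append(c)
                changed = True
    return entails(derived, goal)

# The example above: E = {nb(p) -> q; ~~q -> ~~r} and L = {p, nb(p)}.
p, q, r = ("atom", "p"), ("atom", "q"), ("atom", "r")
nnq, nnr = ("not", ("not", q)), ("not", ("not", r))
E = [((), (p,), q),        # nb(p) -> q
     ((nnq,), (), nnr)]    # ~~q -> ~~r
print(in_dcl(E, [p], {p}, r))    # True: r is in Dcl(E, {p, nb(p)})
```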
For a set of rules E, a set of formulae Δ is called an extension of E iff Δ = Dcl(E, nb(Δ̄)), where Δ̄ is the complement of Δ, i.e., the set of formulae not in Δ.
We develop a compact characterizing condition for extensions. The formulae that occur inside the nb() operator play a crucial role. For a set of rules E, we denote by NAnt(E) the negative antecedents in E, i.e., the set of formulae b such that nb(b) appears in E. For example, NAnt({nb(p) → q; ¬¬q → ¬¬r}) = {p}. The following proposition makes the role of NAnt(E) evident. It is based on the observation that for the deductive closure of a set of rules only the NB-formulae appearing in the rules are of importance, i.e., Dcl(E, nb(Δ̄)) = Dcl(E, nb(NAnt(E) − Δ)).

Proposition 2.1 For a set of rules E, a set of formulae Δ is an extension of E iff Δ = Dcl(E, nb(NAnt(E) − Δ)).
The situation is similar to that in autoepistemic logic where a stable expansion is uniquely determined by the modal subformulae in the premises [Niemela, 1990]. For characterizing extensions, we are able to use ideas from the full-set-based characterization of stable expansions [Niemela, 1990]. The novelty here is that we exploit the strong groundedness of extensions which implies that ordinary formulae appearing as antecedents of rules (prerequisites) do not play a role in determining extensions and only NB-formulae (justifications) are essential.

Our aim is to provide a compact characterizing set for each extension. The characterizing sets are called full sets and they are sets of NB-formulae built from formulae in NAnt(E).
Definition 2.2 For a set of rules E, a set of NB-formulae Λ ⊆ nb(NAnt(E)) is called E-full iff the following condition holds for all φ ∈ NAnt(E): nb(φ) ∈ Λ iff φ ∉ Dcl(E, Λ).

For example, let E = {nb(p) → q; nb(q) → p}. Then {nb(p)} is E-full, because p ∉ Dcl(E, {nb(p)}) and q ∈ Dcl(E, {nb(p)}), but, e.g., {nb(p), nb(q)} is not E-full, as p ∈ Dcl(E, {nb(p), nb(q)}). It turns out that for every full set there is an extension and for each extension a corresponding full set.
Theorem 2.3 Let E be a set of rules.
(i) If a set of NB-formulae Λ is E-full, then Dcl(E, Λ) is an extension of E.
(ii) If there is an extension Δ of E, then Λ = nb(NAnt(E) − Δ) is an E-full set such that Δ = Dcl(E, Λ).

Theorem 2.3 suggests the following straightforward method for finding all extensions. For each subset S of NAnt(E), test whether nb(S) is E-full. If a full set Λ is found, Dcl(E, Λ) is an extension of E by Theorem 2.3 (i), and by Theorem 2.3 (ii) for each extension Δ there is a corresponding full set Λ such that Δ = Dcl(E, Λ).
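Continuing the sketch above, the fullness condition of Definition 2.2 and the straightforward enumeration suggested by Theorem 2.3 might be written as follows; the helper names are this sketch's own, and the exponential loop over subsets is precisely the inefficiency that Example 2.1 and the algorithm of Section 3 address.

```python
from itertools import chain, combinations

def nant(E):
    """NAnt(E): all formulae appearing inside nb() in the rules of E."""
    return set().union(*(set(nbs) for (_, nbs, _) in E)) if E else set()

def is_full(E, candidate):
    """Is {nb(b) | b in candidate} E-full?
    Fullness: nb(b) is in the set  iff  b is not in Dcl(E, candidate)."""
    return all((b in candidate) == (not in_dcl(E, [], candidate, b))
               for b in nant(E))

def all_full_sets(E):
    """The straightforward method: test every subset of NAnt(E) for fullness.
    Each full set Lambda found yields the extension Dcl(E, Lambda)."""
    na = sorted(nant(E))
    for subset in chain.from_iterable(combinations(na, k) for k in range(len(na) + 1)):
        if is_full(E, set(subset)):
            yield set(subset)

# The two-rule example above: E2 = {nb(p) -> q; nb(q) -> p}.
E2 = [((), (p,), q), ((), (q,), p)]
print([sorted(fs) for fs in all_full_sets(E2)])
# -> the full sets {nb(p)} and {nb(q)}, i.e. the extensions Th({q}) and Th({p})
```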
Example 2.1 The straightforward method is not very practical. If there are n NB-formulae in E, 2ⁿ fullness tests are needed although there can be a single extension which is easily constructible, as, for instance, the following set of rules shows:

E = {→ a₁; a₁ → a₂; ...; aₙ₋₁ → aₙ; nb(a₁) → a₁; ...; nb(aₙ) → aₙ}.

There are 2ⁿ candidates for full sets but only one full set, the empty set. This can be seen by the following argument. For any set of NB-formulae Λ, a₁ ∈ Dcl(E, Λ) by the rule → a₁, which implies that nb(a₁) cannot belong to any full set. By the rule a₁ → a₂, also a₂ ∈ Dcl(E, Λ), so neither can nb(a₂), and so on.
3 The Basic Algorithm
In this section we develop a basic algorithm for solving default reasoning problems. The basic algorithm serves as a framework for developing further optimization methods, which are discussed in the next section. The algorithm is based on ideas introduced in [Niemela, 1994] in the context of autoepistemic reasoning. The algorithm presented in Figure 1 is given as a function extensionsDL that is the skeleton for the decision procedures for brave and cautious reasoning as well as for checking the existence of extensions.
When describing the algorithm we use the following three concepts.

(i) A set of formulae and NB-formulae A is grounded in a set of rules E iff every ordinary formula of A is derivable given the NB-formulae of A, i.e., A⁺ ⊆ Dcl(E, A⁻). For example, {nb(a)} and {b, nb(a)} are grounded in {nb(a) → b}, but {a, nb(a)} is not.

(ii) We say that a set of formulae S agrees with a set of formulae and NB-formulae A if φ ∈ S for all formulae φ ∈ A⁺ and φ ∉ S for all nb(φ) ∈ A⁻. For example, the set {a, b} agrees with {a, nb(c)} but not with {b, nb(a)}.

(iii) A set of formulae and NB-formulae A covers a formula φ if either φ ∈ A or nb(φ) ∈ A. For example, the set {nb(a), b} covers a. A set of formulae S is covered by A if each formula in S is covered by A.
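Read over the encoding used in the sketches above, the three concepts can be written as small predicates; the reading of groundedness as A⁺ ⊆ Dcl(E, A⁻) is an interpretation adopted for this sketch.

```python
def grounded(E, A_plus, A_nb):
    """A = A_plus together with nb(A_nb) is grounded in E: every ordinary
    formula of A is in Dcl(E, A-)."""
    return all(in_dcl(E, [], A_nb, f) for f in A_plus)

def agrees(S, A_plus, A_nb):
    """S agrees with A: A+ is contained in S and no formula under nb() in A is in S."""
    return all(f in S for f in A_plus) and all(f not in S for f in A_nb)

def covers(A_plus, A_nb, f):
    """A covers f: either f is in A or nb(f) is in A."""
    return f in A_plus or f in A_nb
```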
The function extensionsDL takes as input a set of rules E, sets B and F which determine the common part of the extensions to be considered, and a formula φ, which is just passed as an argument to the function test. The aim of extensionsDL is to return true iff there is an extension Δ of E agreeing with B ∪ F for which test returns true. This is accomplished by constructing candidates agreeing with B ∪ F until a full set Λ is found such that test returns true for the corresponding extension Δ = Dcl(E, Λ).

We represent a partially constructed full set using the set B that contains NB-formulae and ordinary formulae. The NB-formulae B⁻ are the formulae included in the partially constructed full set. An ordinary formula χ in B indicates that the corresponding NB-formula nb(χ) cannot be included in the full set. The idea is to expand B until it forms a full set. The number of possibilities can be reduced by observing that if a formula χ is grounded in the rules given the partially constructed full set, i.e., χ ∈ Dcl(E, B⁻), then nb(χ) cannot be included in the full set and χ is added to B. The set F contains formulae χ for which nb(χ) is excluded from the full set to be constructed (and χ should be included in B), but which are "frozen": χ is added to B only when the groundedness condition χ ∈ Dcl(E, B⁻) is satisfied.
The function extensionsDL uses three functions:
expand (for expanding B),
conflict (for detecting conflicts), and
test (for testing extensions).
By changing the function test the various decision procedures are obtained. For the first two functions we present minimal requirements (E1-E4, C1-C2) that the implementations of these functions have to satisfy in order to guarantee the soundness and completeness of the decision procedures. These two functions form the crucial points in the algorithm where optimization techniques can be applied. In the next section such optimization methods are developed.

Figure 1: The Skeleton for the Decision Procedures for Default Logic

We first introduce the requirements on the functions expand and conflict and then explain their role in guaranteeing the correctness of extensionsDL. The function expand is assumed to fulfill the following conditions, where B' denotes the set returned by expand:

E1: B ⊆ B'.
E2: If B is grounded in E, then B' is grounded in E.
E3: If there is an extension Δ such that Δ agrees with B ∪ F, then Δ agrees with B' ∪ F.
E4: For χ ∈ NAnt(E), χ ∈ Dcl(E, B⁻) implies that χ is covered by B'.
The function conflict returns either true or false and satisfies the following conditions.

C1: conflict returns true if χ ∈ Dcl(E, B⁻) for some nb(χ) ∈ B⁻.
C2: If conflict returns true, then there exists no extension Δ such that Δ agrees with B ∪ F.
The function extensionsDL starts by expanding the set B cautiously. In order to ensure the correctness of the decision procedures we must insist that the expansion returned by expand extends B (E1) in such a way that groundedness is preserved (E2) and that no extensions agreeing with B ∪ F are lost (E3). To preserve completeness we have to require that each formula in NAnt(E) that is grounded but not already covered by B is included in B (E4). Then a conflict test is performed. Here it must be the case that all direct conflicts are detected (C1) and if a conflict is reported, then there is no extension agreeing with B ∪ F (C2). If there are no conflicts and B ∪ F covers every NB-formula in the premises, then the status of the frozen formulae F is examined. If all of them have been included in B, B⁻ is a full set and the corresponding extension Dcl(E, B⁻) is passed to the function test.
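As a rough, illustrative stand-in for the listing in Figure 1, the following sketch (reusing in_dcl, nant and covers from the earlier sketches) implements the control structure just described with the naive expand and conflict implementations suggested by E1-E4 and C1-C2; the function and parameter names and the recursive formulation of the backtracking search are choices made only for this sketch, not the paper's optimized E-I1 and C-I1 implementations.

```python
def expand_naive(E, B_plus, B_nb, F):
    """Naive expand (cf. E1-E4): add to B every formula of NAnt(E) or F that is
    grounded, i.e. derivable from the rules given the NB-commitments B_nb."""
    for chi in nant(E) | F:
        if chi not in B_plus and chi not in B_nb and in_dcl(E, [], B_nb, chi):
            B_plus = B_plus | {chi}
    return B_plus

def conflict_naive(E, B_nb):
    """Naive conflict test (cf. C1): some formula committed to be 'not believed'
    is nevertheless derivable."""
    return any(in_dcl(E, [], B_nb, chi) for chi in B_nb)

def extensions_dl(E, B_plus, B_nb, F, test):
    """Return True iff some extension agreeing with B and F passes test.
    B_plus: formulae chi with nb(chi) excluded from the full set;
    B_nb:   formulae chi with nb(chi) included in the full set;
    F:      frozen formulae that must eventually become grounded."""
    B_plus = expand_naive(E, B_plus, B_nb, F)
    if conflict_naive(E, B_nb):
        return False
    uncovered = [chi for chi in sorted(nant(E)) if not covers(B_plus | F, B_nb, chi)]
    if not uncovered:
        if all(chi in B_plus for chi in F):   # every frozen formula became grounded
            return test(E, B_nb)              # B_nb acts as a full set; test its extension
        return False                          # some frozen formula is not derivable
    chi = uncovered[0]                        # heuristic choice point
    # Branch: either nb(chi) belongs to the full set, or chi is frozen.
    return (extensions_dl(E, B_plus, B_nb | {chi}, F, test) or
            extensions_dl(E, B_plus, B_nb, F | {chi}, test))

# Brave reasoning: is phi in some extension?  Freeze phi and accept any leaf.
def brave(E, phi):
    return extensions_dl(E, set(), set(), {phi}, lambda rules, full_set: True)

print(brave(E2, p))   # True: the extension Th({p}) of E2 contains p
```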
4 Optimizations
The basic algorithm divides default reasoning into two major subtasks. One is the search problem of examining all the alternatives for full sets. This is implemented using (chronological) backtracking and in the worst case the algorithm can search through 2ⁿ alternatives, where n is the number of different NB-formulae in the premises. The other is the classical reasoning problem of deciding whether a formula is in the deductive closure Dcl(E, L) of a set of rules E given some formulae (and NB-formulae) L as premises. This membership test is needed in the functions expand and conflict.
The two difficult tasks are in accordance with the recent results on the complexity of default reasoning showing that default reasoning is a complete problem with respect to the second level of the polynomial time hierarchy [Gottlob, 1992]. The result implies that there are two orthogonal sources of complexity in default reasoning, too. This suggests that we have not introduced additional sources of computational complexity in the basic algorithm.

In this section we present optimization techniques which lead to more efficient methods for solving the two subtasks. As there is quite a lot of research on classical reasoning, the emphasis is on the search problem. But before moving to it we make a couple of remarks about the classical reasoning problem.
First, notice that the classical reasoning problem is reducible to deciding logical consequence for the underlying (first-order) logic. Testing membership in the deductive closure of a set of n rules can be implemented using at most a quadratic number of tests for logical consequence in the following way. First construct the set of rules E' as in (2). Then, starting from the formulae in L⁺, apply the rules of E' repeatedly, firing a rule as soon as its antecedents are logical consequences of the formulae obtained so far, until no new rule fires. For the resulting set of formulae D, a formula belongs to the deductive closure iff it is a logical consequence of D.

Second, the tests for membership in the deductive closure have a very regular pattern where the set of premises gradually grows. This pattern can be exploited.

Third, when dealing with a large set of rules, it is important to develop methods for testing membership in the closure in a goal-directed way so that only a relevant subset of the rules is used.
Now we turn to the search problem. The potential search space is exponential and even for a set of rules with 50 different NB-formulae its size is over 10¹⁵. Hence it is essential that the search space is pruned effectively. In extensionsDL the functions expand and conflict handle the pruning of the search space.
Expanding B
The function expand extends the current common part B of the full sets to be constructed. The more formulae are added, the fewer choice points for backtracking are left; every new formula included in B cuts the remaining search space by half.
Although the implementation E-I0 can reduce the search dramatically, as in the case of Example 2.1, its basic weakness is that it cannot detect when a formula cannot be in an extension. An optimized version of the implementation should be able to detect such simple cases as well.
5 Conclusions
In this paper we develop a decision method for default logic which solves the extension existence problem as well as the brave and cautious reasoning problems. It handles the full propositional case and the first-order subclasses of default theories with closed default rules and a decidable fragment of the underlying first-order logic. The method differs from other recent approaches [Junker and Konolige, 1990; Risch and Schwind, 1994; Baader and Hollunder, 1992] to automating default logic in two major respects: (i) in the propositional case it can be implemented in polynomial space and (ii) it does not rely on solutions of subtasks which appear to be computationally harder than the original default reasoning problem. The method partitions default reasoning into two major subtasks: the search problem of examining every alternative for extensions, which is solved by backtracking search, and the classical reasoning task, which can be implemented by a theorem prover for the underlying classical logic. Special emphasis is given to the search problem. The method employs a new compact characterization of extensions based on considering only the justifications of the rules. This reduces the search space for alternatives for extensions. Techniques for pruning the search space are developed. Initial experiments indicate that using the implementations E-I1 and C-I1 of expand and conflict the search space is often kept relatively small and we have been able to handle default theories with a few hundred default rules.
The method developed in this paper is closely related to that presented in [Niemela, 1994] for autoepistemic reasoning. The key difference is that the novel method uses the new compact characterization of extensions which leads to a smaller initial search space. We exploit the strong groundedness of default extensions which implies that extensions are determined by the justifications of the default rules, while in the earlier approach [Niemela, 1994] both prerequisites and justifications of the default rules are employed in the characterization. Optimization techniques that prune the search space are discussed already in [Niemela, 1994]. The technique of expanding the set B (E-I1) is proposed in [Niemela, 1994] but the conflict detection method (C-I1) is novel.
There are interesting areas for further research. The decision method in this paper uses classical reasoning heavily for pruning the search space. This implies that the development of efficient theorem-proving techniques for implementing the needed classical reasoning in a goal-directed way is important. The potential search space is very large and further work is needed for developing new pruning techniques. The basic algorithm with the requirements for the functions expand and conflict offers a framework for developing these kinds of optimizations. The decision method can be used in a goal-directed manner by initializing the sets B and F accordingly. For example, if we are interested in extensions containing p but not q, we can start the method with B = {nb(q)} and F = {p}. An interesting topic for further research is to develop goal-directed techniques where a default reasoning task is analyzed and divided into appropriate subtasks. In the method there is a heuristic choice point when a new formula χ ∈ NAnt(E) not covered by B ∪ F is selected. A further area of study is the development of efficient search heuristics.
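In terms of the sketch given at the end of Section 3, such a goal-directed call could be written, for instance, as follows (illustrative only; p, q and E2 are the formulae and rule set from the earlier sketches):

```python
# Is there an extension containing p but not q?  Commit nb(q) and freeze p.
found = extensions_dl(E2, set(), {q}, {p}, lambda rules, full_set: True)
print(found)   # True: the extension Th({p}) of E2 contains p but not q
```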
References
[Baader and Hollunder, 1992] F. Baader and B. Hollun-
der. Embedding defaults into terminological knowl-
edge representation formalisms. In Proceedings of the
3rd International Conference on Principles of Knowl-
edge Representation and Reasoning, pages 306-317,
Cambridge, MA, USA, October 1992. Morgan Kauf-
mann Publishers.
[Ben-Eliyahu and Dechter, 1991] R. Ben-Eliyahu and
R. Dechter. Default logic, propositional logic and
constraints. In Proceedings of the 9th National Con-
ference on Artificial Intelligence, pages 370-385. The
MIT Press, July 1991.
[Eiter and Gottlob, 1992] T. Eiter and G. Gottlob. The
complexity of logic-based abduction. Technical Report
CD-TR 92/35, Christian Doppler Labor für Experten-
systeme, Institut für Informationssysteme, Technische
Universität Wien, Vienna, Austria, July 1992.
[Etherington, 1987] D.W. Etherington. Relating default
logic and circumscription. In Proceedings of the 10th
International Joint Conference on Artificial Intelli-
gence, pages 489-494, Milan, Italy, August 1987. Mor-
gan Kaufmann Publishers.
[Garey and Johnson, 1979] M.R. Garey and D.S. John-
son. Computers and Intractability. W.H. Freeman and
Company, San Francisco, 1979.
[Gelfond and Lifschitz, 1990] M. Gelfond and V. Lifs-
chitz. Logic programs with classical negation. In Pro-
ceedings of the 7th International Conference on Logic
Programming, pages 579-597, Jerusalem, Israel, June
1990. The MIT Press.
[Gottlob, 1992] G. Gottlob. Complexity results for non-
monotonic logics. Journal of Logic and Computation,
2(3):397-425, June 1992.
[Junker and Konolige, 1990] U. Junker and K. Konolige.
Computing the extensions of autoepistemic and default
logics with a truth maintenance system. In Pro-
ceedings of the 8th National Conference on Artificial
Intelligence, pages 278-283, Boston, MA, USA, July
1990. The MIT Press.
[Konolige, 1988] K. Konolige. On the relation between
default and autoepistemic logic. Artificial Intelligence,
35:343-382, 1988.
[Marek and Truszczynski, 1993] W. Marek
and M. Truszczynski. Nonmonotonic Logic: Context-
Dependent Reasoning. Springer-Verlag, Berlin, 1993.
[Niemela, 1990] I. Niemela. Towards automatic au-
toepistemic reasoning. In Proceedings of the Euro-
pean Workshop on Logics in Artificial Intelligence—
JELIA '90, pages 428-443, Amsterdam, The Nether-
lands, September 1990. Springer-Verlag.
[Niemela, 1992] I. Niemela. A unifying framework for
nonmonotonic reasoning. In Proceedings of the 10th
European Conference on Artificial Intelligence, pages
334-338, Vienna, Austria, August 1992. John Wiley.
[Niemela, 1994] I. Niemela. A decision method for non-
monotonic reasoning based on autoepistemic reason-
ing. In Proceedings of the 4th International Conference
on Principles of Knowledge Representation and Rea-
soning, pages 473-484, Bonn, Germany, May 1994.
Morgan Kaufmann Publishers.
[Poole, 1988] D. Poole. A logical framework for default
reasoning. Artificial Intelligence, 36:27-47, 1988.
[Reiter, 1980] R. Reiter. A logic for default reasoning.
Artificial Intelligence, 13:81-132, 1980.
[Reiter, 1987] R. Reiter. A theory of diagnosis from first
principles. Artificial Intelligence, 32:57-95, 1987.
[Risch and Schwind, 1994] V. Risch and C. Schwind.
Tableau-based characterization and theorem proving
for default logic. Journal of Automated Reasoning,
13:223-242, 1994.
[Truszczynski, 1991] M. Truszczynski. Embedding de-
fault logic into modal nonmonotonic logics. In Pro-
ceedings of the 1st International Workshop on Logic
Programming and Non-monotonic Reasoning, pages
151-165, Washington, D.C., USA, July 1991. The
MIT Press.