Article (PDF available)

The Well-Founded Semantics for General Logic Programs

Abstract

A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. We introduce unfounded sets and well-founded partial models, and define the well-founded semantics of a program to be its well-founded partial model. If the well-founded partial model is in fact a total model, we call it the well-founded model. We show that the class of programs possessing a total well-founded model properly in...
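The unfounded-set condition sketched in the abstract can be illustrated for propositional programs as follows. This is a hedged, non-authoritative sketch: the rule encoding (head, positive body, negative body) and the function name `is_unfounded` are our own, not from the paper. A set U is unfounded with respect to an interpretation if every rule for each atom in U is blocked, i.e., has a positive subgoal that is false or belongs to U, or a negative subgoal whose atom is true.

```python
# Sketch of the unfounded-set check for propositional programs.
# A rule is (head, positive_body, negative_body); an interpretation is
# given by its sets of true and false atoms.

def is_unfounded(U, rules, true_atoms, false_atoms):
    for a in U:
        for head, pos, neg in rules:
            if head != a:
                continue
            blocked = (
                any(p in false_atoms or p in U for p in pos)  # pos subgoal false or in U
                or any(n in true_atoms for n in neg)          # neg subgoal's atom true
            )
            if not blocked:
                return False  # some rule for a could still derive it
    return True

# Example: p <- q and q <- p support only each other, so {p, q} is
# unfounded with respect to the empty interpretation.
rules = [("p", ["q"], []), ("q", ["p"], [])]
print(is_unfounded({"p", "q"}, rules, set(), set()))  # True
```

Note that {p} alone is not unfounded here: the rule p <- q is not blocked, since q is neither false nor a member of the candidate set.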
... It is not difficult to verify that, when O = ∅, this notion of unfounded sets coincides with the one for the corresponding logic programs (Van Gelder et al. 1991). ...
... The well-founded model of a logic program (Van Gelder et al. 1991) can be used to simplify the program so that the resulting program no longer contains atoms appearing in the model. The well-founded model has been used in the grounding engines of most ASP solvers to simplify programs (Baral 2003). ...
... For a normal logic program, the well-founded model uniquely exists, and it can be computed by the operator based on the alternating fixpoint construction (Gelder 1993) as well as by the one based on unfounded sets (Van Gelder et al. 1991). However, for normal hybrid MKNF knowledge bases different well-founded operators are possible. ...
Preprint
Hybrid MKNF knowledge bases have been considered one of the dominant approaches to combining open world ontology languages with closed world rule-based languages. Currently, the only known inference methods are based on the approach of guess-and-verify, while most modern SAT/ASP solvers are built under the DPLL architecture. The central impediment here is that it is not clear what constitutes a constraint propagator, a key component employed in any DPLL-based solver. In this paper, we address this problem by formulating the notion of unfounded sets for nondisjunctive hybrid MKNF knowledge bases, based on which we propose and study two new well-founded operators. We show that by employing a well-founded operator as a constraint propagator, a sound and complete DPLL search engine can be readily defined. We compare our approach with the operator based on the alternating fixpoint construction by Knorr et al. [2011] and show that, when applied to arbitrary partial partitions, the new well-founded operators not only propagate more truth values but also circumvent the non-converging behavior of the latter. In addition, we study the possibility of simplifying a given hybrid MKNF knowledge base by employing a well-founded operator, and show that, of the two operators proposed in this paper, the weaker one can be applied for this purpose and the stronger one cannot. These observations are useful in implementing a grounder for hybrid MKNF knowledge bases, which can be applied before the computation of MKNF models. The paper is under consideration for acceptance in TPLP.
... Indeed, his proposal, which is based on a monotonic semantic operator in Kleene's strong three-valued logic, has been pursued in both communities, for example by Kunen (Kunen 1987) for giving a semantics for pure Prolog, and by Apt and Pedreschi (Apt and Pedreschi 1993) in their fundamental paper on termination analysis of negation as failure, leading to the notion of acceptable program. On the other hand, however, Fitting himself (Fitting 1991a; Fitting 2002), using a bilattice-based approach which was further developed by Denecker, Marek, and Truszczynski (Denecker et al. 2000), tied his semantics closely to the major semantics inspired by nonmonotonic reasoning, namely the stable model semantics due to Gelfond and Lifschitz (Gelfond and Lifschitz 1988), which is based on a nonmonotonic semantic operator, and the well-founded semantics due to van Gelder, Ross, and Schlipf (van Gelder et al. 1991), originally defined using a different monotonic operator in three-valued logic together with a notion of unfoundedness. ...
... Then Φ_P↑1 = {¬r}, and Φ_P↑2 = {q, ¬r} = Φ_P↑3 is the Fitting model of P. Now, for I ⊆ B_P ∪ ¬B_P, let U_P(I) be the greatest unfounded set (of P) with respect to I, which always exists due to van Gelder, Ross, and Schlipf (van Gelder et al. 1991). Finally, define W_P(I) = T_P(I) ∪ ¬U_P(I) for all I ⊆ B_P ∪ ¬B_P. ...
... Finally, define W_P(I) = T_P(I) ∪ ¬U_P(I) for all I ⊆ B_P ∪ ¬B_P. The operator W_P, which operates on the cpo B_P ∪ ¬B_P, is due to van Gelder, Ross, and Schlipf (van Gelder et al. 1991) and is monotonic, hence has a least fixed point by the Tarski fixed-point theorem, as above for Φ_P. It turns out that W_P↑α is in I_P for each ordinal α, and so the least fixed point of W_P is also in I_P and is called the well-founded model of P, giving the well-founded semantics of P. ...
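For finite propositional programs, the iteration of the operator W_P described in these excerpts can be sketched as follows. This is a hedged illustration under our own encoding of rules as (head, positive body, negative body) triples; the helper names `tp`, `greatest_unfounded`, and `well_founded` are ours.

```python
def tp(rules, true_atoms, false_atoms):
    """T_P: heads of rules whose positive body is true and negative body false."""
    return {h for h, pos, neg in rules
            if all(p in true_atoms for p in pos)
            and all(n in false_atoms for n in neg)}

def greatest_unfounded(rules, atoms, true_atoms, false_atoms):
    """U_P: atoms left with no potentially applicable rule (complement of 'founded')."""
    founded = set()
    changed = True
    while changed:
        changed = False
        for h, pos, neg in rules:
            if (h not in founded
                    and all(p in founded and p not in false_atoms for p in pos)
                    and all(n not in true_atoms for n in neg)):
                founded.add(h)
                changed = True
    return atoms - founded

def well_founded(rules, atoms):
    """Iterate W_P(I) = T_P(I) ∪ ¬U_P(I) from the empty interpretation."""
    true_atoms, false_atoms = set(), set()
    while True:
        t = tp(rules, true_atoms, false_atoms)
        f = greatest_unfounded(rules, atoms, true_atoms, false_atoms)
        if t == true_atoms and f == false_atoms:
            return true_atoms, false_atoms
        true_atoms, false_atoms = t, f

# p and q support only each other, so {p, q} is unfounded and becomes false;
# the rule a <- not p then fires, making a true.
rules = [("p", ["q"], []), ("q", ["p"], []), ("a", [], ["p"])]
t, f = well_founded(rules, {"p", "q", "a"})
# t == {"a"}, f == {"p", "q"}
```

Starting from the empty interpretation makes the iteration monotone, which is why the loop reaches the least fixed point rather than oscillating.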
Preprint
Part of the theory of logic programming and nonmonotonic reasoning concerns the study of fixed-point semantics for these paradigms. Several different semantics have been proposed during the last two decades, and some have been more successful and acknowledged than others. The rationales behind those various semantics have been manifold, depending on one's point of view, which may be that of a programmer or inspired by commonsense reasoning, and consequently the constructions which lead to these semantics are technically very diverse, and the exact relationships between them have not yet been fully understood. In this paper, we present a conceptually new method, based on level mappings, which allows us to provide uniform characterizations of different semantics for logic programs. We will display our approach by giving new and uniform characterizations of some of the major semantics, in particular the least model semantics for definite programs, the Fitting semantics, and the well-founded semantics. A novel characterization of the weakly perfect model semantics will also be provided.
... Many useful logic programs are cyclic; indeed, the use of recursion is at the heart of logic programs and their various semantics (Gelfond & Lifschitz, 1988; van Gelder, Ross, & Schlipf, 1991). Many applications, such as non-recursive structural equation models (Berry, 1984; Pearl, 2009) and models with "feedback" (Nodelman, Shelton, & Koller, 2002; Poole & Crowley, 2013), defy the acyclic character of Bayesian networks. ...
... The second strategy that is often used to define the semantics of normal logic programs is to select some models of the program to be its semantics. There are many proposals in the literature as to which models should be selected; however, currently there are two selections that have received most attention: the stable model (Gelfond & Lifschitz, 1988) and the well-founded (van Gelder et al., 1991) semantics. We now describe these semantics; alas, their definitions are not simple. ...
... Example 3. Consider a game where a player wins if there is another player with no more moves (van Gelder et al., 1991; Gelder, 1993), as expressed by the cyclic rule: ...
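The game example cited here is the classic program win(X) <- move(X, Y), not win(Y). Its well-founded semantics coincides with backward induction on the move graph: a position is won if some move reaches a lost position, lost if every move reaches a won position (vacuously, positions with no moves are lost), and drawn, i.e., undefined, otherwise. The following is a hedged sketch of that reading; the move graph is an invented example, not from the cited papers.

```python
# Backward-induction solver matching the well-founded semantics of
# win(X) <- move(X, Y), not win(Y) over a finite move graph.

def solve(moves, positions):
    won, lost = set(), set()
    changed = True
    while changed:
        changed = False
        for x in positions:
            if x in won or x in lost:
                continue
            succs = moves.get(x, [])
            if any(y in lost for y in succs):
                won.add(x)                 # some move reaches a lost position
                changed = True
            elif all(y in won for y in succs):
                lost.add(x)                # all moves (possibly none) reach won positions
                changed = True
    drawn = positions - won - lost         # undefined under the WFS
    return won, lost, drawn

# a -> b -> c (c has no moves); d and e form a cycle with no exit.
moves = {"a": ["b"], "b": ["c"], "d": ["e"], "e": ["d"]}
won, lost, drawn = solve(moves, {"a", "b", "c", "d", "e"})
# c is lost (no moves), so b is won and a is lost; d and e stay drawn.
```

The two-cycle d, e is where the well-founded semantics earns its keep: neither win(d) nor win(e) can be founded, so both remain undefined instead of being forced to a truth value.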
Preprint
We examine the meaning and the complexity of probabilistic logic programs that consist of a set of rules and a set of independent probabilistic facts (that is, programs based on Sato's distribution semantics). We focus on two semantics, respectively based on stable and on well-founded models. We show that the semantics based on stable models (referred to as the "credal semantics") produces sets of probability models that dominate infinitely monotone Choquet capacities, and we describe several useful consequences of this result. We then examine the complexity of inference with probabilistic logic programs. We distinguish between the complexity of inference when a probabilistic program and a query are given (the inferential complexity), and the complexity of inference when the probabilistic program is fixed and the query is given (the query complexity, akin to data complexity as used in database theory). We obtain results on the inferential and query complexity for acyclic, stratified, and cyclic propositional and relational programs; complexity reaches various levels of the counting hierarchy and even exponential levels.
... The WFS (Van Gelder et al., 1991) assigns a three-valued model to a program. A three-valued interpretation I is a pair I = ⟨I_T; I_F⟩, where I_T and I_F are disjoint subsets of B_P and represent the sets of true and false atoms, respectively. ...
... If u(WFM(P)) = {} (i.e., the set of undefined atoms of the WFM of P is empty), the WFM is two-valued and the program is called dynamically stratified. The WFS enjoys the property of relevance, and the SMS and WFS are related since, for a normal program P, the WFM of P is a subset of every stable model of P seen as a three-valued interpretation, as proven by Van Gelder et al. (1991). ...
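The three-valued interpretation I = ⟨I_T; I_F⟩ from the snippet above is easy to make concrete: two disjoint sets of atoms, with every atom outside both sets undefined. A minimal sketch, with names of our own choosing:

```python
# Truth-value lookup in a three-valued interpretation <I_T; I_F>.

def truth_value(atom, i_true, i_false):
    assert i_true.isdisjoint(i_false)  # I_T and I_F must be disjoint
    if atom in i_true:
        return "true"
    if atom in i_false:
        return "false"
    return "undefined"

print(truth_value("p", {"p"}, {"q"}))  # true
print(truth_value("r", {"p"}, {"q"}))  # undefined
```

An atom absent from both components is undefined; this third value is exactly what distinguishes the WFM from a classical two-valued model.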
Article
Full-text available
When we want to compute the probability of a query from a probabilistic answer set program, some parts of the program may not influence the probability of the query, but they do impact the size of the grounding. Identifying and removing them is crucial to speed up the computation. Algorithms for SLG resolution offer the possibility of returning the residual program, which can be used for computing answer sets for normal programs that do have a total well-founded model. The residual program does not contain the parts of the program that do not influence the probability. In this paper, we propose to exploit the residual program for performing inference. Empirical results on graph datasets show that the approach leads to significantly faster inference. The paper has been accepted at the ICLP 2024 conference and is under consideration in Theory and Practice of Logic Programming (TPLP).
... a_n, b ∈ T_ω. The previous theorem states that the set of complete extensions of any BAF Δ coincides with the set of PSMs of the derived logic program P_Δ. Consequently, the set of stable and preferred extensions (resp., the grounded extension) coincides with the set of total stable and maximal-stable models (resp., the well-founded model) of P_Δ (Van Gelder et al. 1991; Saccà 1997). ...
Article
Full-text available
Dung’s abstract Argumentation Framework (AF) has emerged as a key formalism for argumentation in artificial intelligence. It has been extended in several directions, including the possibility to express supports, leading to the development of the Bipolar Argumentation Framework (BAF), and recursive attacks and supports, resulting in the Recursive BAF (Rec-BAF). Different interpretations of supports have been proposed, whereas for Rec-BAF (where the target of attacks and supports may also be attacks and supports) even different semantics for attacks have been defined. However, the semantics of these frameworks have either not been defined in the presence of support cycles or are often quite intricate in terms of the involved definitions. We encompass this limitation and present classical semantics for general BAF and Rec-BAF and show that the semantics for specific BAF and Rec-BAF frameworks can be defined by very simple and intuitive modifications of that defined for the case of AF. This is achieved by providing a modular definition of the sets of defeated and acceptable elements for each AF-based framework. We also characterize, in an elegant and uniform way, the semantics of general BAF and Rec-BAF in terms of logic programming and partial stable model semantics.
... It has been shown that every approximation operator admits a unique ≤_i-minimal stable fixpoint (Denecker, Marek, and Truszczyński, 2000). Pelov, Denecker, and Bruynooghe (2007) show that for normal logic programs, the fixpoints based on the four-valued immediate consequence operator IC_P (recall Definition 1) for a logic program P give rise to the following correspondences: the three-valued stable fixpoints of IC_P coincide with the three-valued stable semantics as defined by Przymusinski (1990), the well-founded fixpoint of IC_P coincides with the homonymous semantics (Przymusinski, 1990; Van Gelder, Ross, and Schlipf, 1991), and the two-valued stable fixpoints of IC_P coincide with the two-valued (or total) stable models. ...
Preprint
Full-text available
Conditional independence is a crucial concept supporting adequate modelling and efficient reasoning in probabilistic models. In knowledge representation, the idea of conditional independence has also been introduced for specific formalisms, such as propositional logic and belief revision. In this paper, the notion of conditional independence is studied in the algebraic framework of approximation fixpoint theory. This gives a language-independent account of conditional independence that can be straightforwardly applied to any logic with fixpoint semantics. It is shown how this notion allows global reasoning to be reduced to parallel instances of local reasoning, leading to fixed-parameter tractability results. Furthermore, relations to existing notions of conditional independence are discussed and the framework is applied to normal logic programming.
... In this context, "minimum" means that the set of positive literals in M is minimized or, equivalently, that the positive literals of this model are contained in every other model. Since Π is stratifiable, this minimum model exists and is unique (Gelder et al. 1991). According to Tena Cucala's notation (Tena Cucala et al. 2021), we say that a stratifiable program Π and a set of facts D entail a fact P(τ)@ϱ, written as (Π, D) |= P(τ)@ϱ, if C_Π,D |= P(τ)@ϱ. ...
Preprint
Full-text available
In the wake of the recent resurgence of the Datalog language of databases, together with its extensions for ontological reasoning settings, this work aims to bridge the gap between the theoretical studies of DatalogMTL (Datalog extended with metric temporal logic) and the development of production-ready reasoning systems. In particular, we lay out the functional and architectural desiderata of a modern reasoner and propose our system, Temporal Vadalog. Leveraging the vast amount of experience from the database community, we go beyond the typical chase-based implementations of reasoners, and propose a set of novel techniques and a system that adopts a modern data pipeline architecture. We discuss crucial architectural choices, such as how to guarantee termination when infinitely many time intervals are possibly generated, how to merge intervals, and how to sustain a limited memory footprint. We discuss advanced features of the system, such as the support for time series, and present an extensive experimental evaluation. This paper is a substantially extended version of "The Temporal Vadalog System" as presented at RuleML+RR '22. Under consideration in Theory and Practice of Logic Programming (TPLP).
... Simply adding the rules of U to KB does not give a satisfactory solution in practice, even in simple cases. For example, if KB contains the rules a ← b and b ← , and U consists of the rule ¬a ← stating that a is false, then the union KB ∪ U is not consistent under predominant semantics such as the answer set semantics (Gelfond & Lifschitz, 1991) or the well-founded semantics (Van Gelder et al., 1991). However, by attributing higher priority to the update ¬a ← , a result is intuitively expected which has a consistent semantics, where the emerging conflict between old and new information is resolved. ...
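The conflict in this example is mechanical enough to trace in code. The sketch below is a hedged illustration of the causal rejection idea described in the snippet, not an implementation of any particular update semantics: KB = {a <- b, b <-} derives a, the update U = {not a <-} demands a be false, so the plain union is inconsistent, while giving U priority rejects the offending old rule.

```python
# Least model of a set of definite rules (head, body), computed by
# repeatedly firing rules whose bodies are already derived.

def positive_closure(rules):
    derived, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

kb = [("a", ["b"]), ("b", [])]   # a <- b,  b <-
constraints = {"a"}              # the update U: a must be false

union_model = positive_closure(kb)
conflict = union_model & constraints   # {'a'}: KB together with U is inconsistent

# Causal rejection, sketched: drop the older KB rules whose head the
# newer update contradicts, then recompute.
updated = [(h, b) for h, b in kb if h not in constraints]
print(positive_closure(updated))       # {'b'}: the intuitively expected result
```

The surviving model makes b true and a false, which is exactly the resolution the snippet says one intuitively expects when the update is given higher priority.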
Preprint
We consider an approach to update nonmonotonic knowledge bases represented as extended logic programs under answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle enforcing that, in case of conflicts, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to update logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of properties of our approach, to get a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and some new properties are considered as well. We then consider refinements of our semantics which incorporate a notion of minimality of change. As well, we investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al. and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of the causal rejection principle apply to these approaches as well. Finally, we deal with the computational complexity of our approach, and outline how the update semantics and its refinements can be implemented on top of existing logic programming engines.
... Paracoherent Semantics. Many non-monotonic semantics for logic programs with negation have been proposed that can be considered paracoherent semantics [33,36,15,35,7,32,1,18,30]. However, [4] have shown that only the semi-stable semantics [34] and the semi-equilibrium semantics [4] satisfy the following desiderata: (i) every consistent answer set of a program corresponds to a paracoherent answer set (answer set coverage); (ii) if a program has some (consistent) answer set, then its paracoherent answer sets correspond to answer sets (congruence); (iii) if a program has a classical model, then it has a paracoherent answer set (classical coherence); (iv) a minimal set of atoms should be undefined (minimal undefinedness); (v) every true atom must be derived from the program (justifiability). ...
Preprint
Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. An ASP program can have no answer set due to cyclic default negation. In this case, it is not possible to draw any conclusion, even if this is not intended. Recently, several paracoherent semantics have been proposed that address this issue, and several potential applications for these semantics have been identified. However, paracoherent semantics have essentially been inapplicable in practice, due to the lack of efficient algorithms and implementations. In this paper, this lack is addressed, and several different algorithms to compute semi-stable and semi-equilibrium models are proposed and implemented into an answer set solving framework. An empirical performance comparison among the new algorithms on benchmarks from ASP competitions is given as well.
Article
Logic programming, as exemplified by datalog, defines the meaning of a program as its unique smallest model: the deductive closure of its inference rules. However, many problems call for an enumeration of models that vary along some set of choices while maintaining structural and logical constraints—there is no single canonical model. The notion of stable models for logic programs with negation has successfully captured programmer intuition about the set of valid solutions for such problems, giving rise to a family of programming languages and associated solvers known as answer set programming. Unfortunately, the definition of a stable model is frustratingly indirect, especially in the presence of rules containing free variables. We propose a new formalism, finite-choice logic programming, that uses choice, not negation, to admit multiple solutions. Finite-choice logic programming contains all the expressive power of the stable model semantics, gives meaning to a new and useful class of programs, and enjoys a least-fixed-point interpretation over a novel domain. We present an algorithm for exploring the solution space and prove it correct with respect to our semantics. Our implementation, the Dusa logic programming language, has performance that compares favorably with state-of-the-art answer set solvers and exhibits more predictable scaling with problem size.
Conference Paper
A survey of treatments of negation in logic programming. The following aspects are discussed: elimination of negation by renaming, definite Horn programs and queries, the relation between the closed world assumption and the completed data base, and their relation to negation as failure, negation as failure for definite Horn programs, special classes of program for which negation as failure coincides with classical negation applied to the completed data base or closed world assumption, semantics for negation in terms of special classes of models, semantics for negation as failure in terms of 3-valued logic.
Article
The use of the negation as failure rule in logic programming is often considered to be tantamount to reasoning from Clark's “completed data base” [2]. Continuing the investigations of Clark and Shepherdson [2,7], we show that this is not fully equivalent to negation as failure either using classical logic or the more appropriate intuitionistic logic. We doubt whether there is any simple and useful logical meaning of negation as failure in the general case, and study in detail some special kinds of data base where the relationship of the completed data base to negation as failure is closer, e.g. where the data base is definite Horn or hierarchic.
Article
A logic program can be viewed as a set of predicate formulas, and its declarative meaning can be defined by specifying a certain Herbrand model of that set. For programs without negation, this model is defined either as the Herbrand model with the minimal set of positive ground atoms, or, equivalently, as the minimal fixed point of a certain operator associated with the program (van Emden and Kowalski). These solutions do not apply to general logic programs, because a program with negation may have many minimal Herbrand models, and the corresponding operator may have many minimal fixed points. Apt, Blair, and Walker and, independently, Van Gelder, introduced a class of stratified programs which disallows certain combinations of recursion and negation and showed how to use the fixed point approach to define a declarative semantics for such programs. Using the concept of circumscription, we extend the minimal model approach to stratified programs and show that it leads to the same semantics.
Article
We introduce and describe the alternating fixpoint of a logic program with negation. The underlying idea is to monotonically build up a set of negative conclusions until the least fixpoint is reached, using a transformation ...
Article
The study of negation in logic programming has been the topic of substantial research activity during the past several years, starting with the negation as failure semantics in Clark (1978), and Apt and van Emden (1982). More recently, a major direction of research has focused on the class of stratified logic programs, in which no predicate is defined recursively in terms of its own negation and which can be given natural semantics in terms of iterated fixpoints. Stratified logic programs were introduced and studied first by Chandra and Harel (1985), but soon attracted the interest of researchers from both database theory and artificial intelligence. Recent research work on stratified logic programs and their generalizations includes the papers by Apt, Blair, and Walker (1988), Van Gelder (1986), Lifschitz (1988), Przymusinski (1988), Apt and Pugin (1987), and others. At the same time, stratified logic programs became the choice for the treatment of negation in the NAIL! system developed at Stanford University by Ullman and his co-workers (cf. Morris