The Journal of Logic Programming

Published by Elsevier
Print ISSN: 0743-1066
Publications
Given a set of clauses in propositional logic that have been found satisfiable, we wish to check whether satisfiability is preserved when the clause set is incremented with a new clause. We describe an efficient implementation of the Davis-Putnam-Loveland algorithm for checking the satisfiability of the original set. We then show how to modify the algorithm for efficient solution of the incremental problem, which is NP-complete. We also report computational results.
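For reference, the following is a minimal DPLL-style satisfiability checker; it is an illustrative sketch only (the clause representation, the naive branching rule, and the function names are ours), not the efficient implementation the paper describes.

    def dpll(clauses, assignment=frozenset()):
        """Return a satisfying set of literals, or None if unsatisfiable.
        Clauses are frozensets of nonzero ints; -n is the negation of n."""
        clauses = set(clauses)
        while True:                                   # unit propagation
            units = {next(iter(c)) for c in clauses if len(c) == 1}
            if not units:
                break
            if any(-u in units for u in units):
                return None                           # contradictory unit clauses
            assignment = assignment | units
            new = set()
            for c in clauses:
                if c & units:
                    continue                          # clause already satisfied
                reduced = c - {-u for u in units}     # drop falsified literals
                if not reduced:
                    return None                       # empty clause: conflict
                new.add(frozenset(reduced))
            clauses = new
        if not clauses:
            return assignment
        lit = next(iter(next(iter(clauses))))         # branch on some literal
        for choice in (lit, -lit):
            result = dpll(clauses | {frozenset([choice])}, assignment)
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x2) is satisfiable, e.g. with x2 true:
    print(dpll({frozenset({1, 2}), frozenset({-1, 2})}))

The incremental problem then amounts to rerunning this check on the old clause set plus the new clause; the paper's contribution is reusing the work done on the original set rather than starting from scratch.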
 
This article surveys the major developments in sequential Prolog implementation during the period 1983–1993. In this decade, implementation technology has matured to such a degree that Prolog has left the university and become useful in industry. The survey is divided into four parts. The first part gives an overview of the important technical developments starting with the Warren abstract machine. The second part presents the history and the contributions of the major software and hardware systems. The third part charts the evolution of Prolog performance since Warren's DEC-10 compiler. The fourth part extrapolates current trends regarding the evolution of sequential logic languages, their implementation, and their role in the marketplace.
 
The independent choice logic (ICL) is part of a project to combine logic and decision/game theory into a coherent framework. The ICL has a simple possible-worlds semantics characterised by independent choices and an acyclic logic program that specifies the consequences of these choices. This paper gives an abductive characterization of the ICL. The ICL is defined model-theoretically, but we show that it is naturally abductive: the set of explanations of a proposition g is a concise description of the worlds in which g is true. We give an algorithm for computing explanations and show it is sound and complete with respect to the possible-worlds semantics. What is unique about this approach is that the explanations of the negation of g can be derived from the explanations of g. The use of probabilities over choices in this framework and going beyond acyclic logic programs are also discussed.
 
In this paper, we propose a semantics for logic programs with negation as failure, the Finite Failure Stable Model semantics (FF-SM semantics), which is a three-valued extension of Gelfond and Lifschitz's Stable Model semantics. FF-SM semantics is defined in the style of Gelfond and Lifschitz's Stable Model semantics, but it builds on an underlying Kripke/Kleene semantics, in which loops causing nonterminating computations are modeled by means of the truth-value undefined. It is different from the eXtended Stable Model (XSM) semantics defined by Przymusinski, since it does not capture infinite failure. We also introduce an abductive proof procedure which is an abductive extension of SLDNF-resolution based on the ideas underlying Eshghi and Kowalski's abductive procedure. We prove that our procedure is sound and complete with respect to FF-SM semantics. We compare the FF-SM semantics with the XSM semantics, and provide a reconstruction for it within the bilattice-based framework proposed by Fitting. Throughout the paper, we deal with the propositional case.
 
We present a method to compute abduction in logic programming. We translate an abductive framework into a normal logic program with integrity constraints and show the correspondence between generalized stable models and stable models for the translation of the abductive framework. Abductive explanations for an observation can be found from the stable models for the translated program by adding a special kind of integrity constraint for the observation. Then, we show a bottom-up procedure to compute stable models for a normal logic program with integrity constraints. The proposed procedure excludes the unnecessary construction of stable models on early stages of the procedure by checking integrity constraints during the construction and by deriving some facts from integrity constraints. Although a bottom-up procedure has the disadvantage of constructing stable models not related to an observation for computing abductive explanations in general, our procedure avoids the disadvantage by expecting which rule should be used for satisfaction of integrity constraints and starting bottom-up computation based on the expectation. This expectation is not only a technique to scope rule selection but also an indispensable part of our stable model construction because the expectation is done for dynamically generated constraints as well as the constraint for the observation.
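To fix intuitions about the semantics involved, here is a brute-force stable model checker built on the Gelfond-Lifschitz reduct; the exhaustive enumeration over atom subsets is our illustration of exactly the blind construction that the paper's constraint-guided bottom-up procedure is designed to avoid.

    from itertools import chain, combinations

    # A ground rule is (head, positive_body, negative_body) over string atoms.

    def reduct(program, candidate):
        """GL reduct: drop rules blocked by the candidate, strip negation."""
        return [(h, pos) for (h, pos, neg) in program if not (neg & candidate)]

    def least_model(definite):
        """Least model of a definite program, by naive forward chaining."""
        model, changed = set(), True
        while changed:
            changed = False
            for h, pos in definite:
                if pos <= model and h not in model:
                    model.add(h)
                    changed = True
        return model

    def stable_models(program, atoms):
        subsets = chain.from_iterable(
            combinations(atoms, r) for r in range(len(atoms) + 1))
        return [set(s) for s in subsets
                if least_model(reduct(program, set(s))) == set(s)]

    # p :- not q.   q :- not p.   Two stable models: {p} and {q}.
    prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
    print(stable_models(prog, ["p", "q"]))            # [{'p'}, {'q'}]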
 
We propose a framework that supports the recognition of plans and intentions behind speech acts through abductive inferences over discourse sentences. These inferences allow each agent to take an active and intelligent part in dialogues, namely in cooperative information-seeking dialogues. In our framework, the possible actions, events, states, and world knowledge are represented by extended logic programs (LP with explicit negation), and the abductive inference process is modeled by the framework proposed by Pereira et al. [13], which is based on the Well Founded Semantics augmented with explicit negation (WFSX) and contradiction removal semantics (CRSX). It is shown how this framework supports abductive planning with the Event Calculus [5], and some examples [10, 14] are given in the domain of information-seeking dialogues. Finally, some open problems are pointed out.
 
In this paper, we propose an argumentation-based semantic framework, called DAS, for disjunctive logic programming. The basic idea is to translate a disjunctive logic program into an argumentation-theoretic framework. One unique feature of the proposed framework is that it treats disjunctions of negative literals as possible assumptions, so as to represent incomplete information. In our framework, three semantics, the preferred disjunctive hypothesis (PDH), the complete disjunctive hypothesis (CDH), and the well-founded disjunctive hypothesis (WFDH), are defined by three kinds of acceptable hypotheses, representing credulous, moderate, and skeptical reasoning in artificial intelligence (AI), respectively. Furthermore, our semantic framework can be extended to a class wider than that of disjunctive programs (called bi-disjunctive logic programs). In addition to being a first serious attempt at establishing an argumentation-theoretic framework for disjunctive logic programming, DAS integrates and naturally extends many key semantics, such as the minimal models, the extended generalized closed world assumption (EGCWA), the well-founded model, and the disjunctive stable models. In particular, novel and interesting argumentation-theoretic characterizations of the EGCWA and the disjunctive stable semantics are shown. Thus the framework presented in this paper not only provides a new way of performing argumentation (abduction) in disjunctive deductive databases, but is also a simple, intuitive, and unifying semantic framework for disjunctive logic programming.
 
Nonmonotonic reasoning has been explored as a form of abductive reasoning where default assumptions are treated as abductive hypotheses. While the semantics and proof theories under this approach have been studied extensively, the question of how disjunctive programs may be used to reason abductively has rarely been investigated. At the center of the question is how to embed disjunctive reasoning into that of negation-as-failure. A more concrete question is whether the elegant abductive proof procedure of Eshghi and Kowalski can be extended to answer queries for disjunctive programs, and if so, what semantics such an extended procedure computes. In this paper we answer these questions by formulating a semantics, the regular extension semantics, for disjunctive programs, and by presenting a sound and complete extension of the Eshghi-Kowalski procedure, called the disjunctive EK procedure, for query answering with respect to ground disjunctive programs under this semantics.
 
In 1969 Cordell Green presented his seminal description of planning as theorem proving with the situation calculus. The most pleasing feature of Green's account was the negligible gap between high-level logical specification and practical implementation. This paper attempts to reinstate the ideal of planning via theorem proving in a modern guise. In particular, the paper shows that if we adopt the event calculus as our logical formalism and employ abductive logic programming as our theorem proving technique, then the computation performed mirrors closely that of a hand-coded partial-order planning algorithm. Soundness and completeness results for this logic programming implementation are given. Finally, the paper shows that, if we extend the event calculus in a natural way to accommodate compound actions, then using the same abductive theorem proving techniques we can obtain a hierarchical planner.
 
This paper presents the framework of Abductive Constraint Logic Programming (ACLP), which integrates Abductive Logic Programming (ALP) and Constraint Logic Programming (CLP). In ACLP, the task of abduction is supported and enhanced by its non-trivial integration with constraint solving. This integration of constraint solving into abductive reasoning facilitates a general form of constructive abduction and enables the application of abduction to computationally demanding problems. The paper studies the formal declarative and operational semantics of the ACLP framework together with its application to various problems. The general characteristics of the computation of ACLP and of its application to problems are also discussed. Empirical results based on an implementation of the ACLP framework on top of the CLP language of ECLiPSe show that ACLP is computationally viable, with performance comparable to the underlying CLP framework on which it is built. In addition, our experiments show ACLP's natural ability to accommodate, easily and robustly, new or changing requirements of the original problem. ACLP thus combines the modularity and flexibility of the high-level representation afforded by abduction with the computational effectiveness of low-level specialised constraint solving.
 
A new fixpoint semantics for abductive logic programs is provided, in which the belief models of an abductive program are characterized as the fixpoint of a disjunctive program obtained by a suitable program transformation. In the transformation, both negative hypotheses through negation as failure and positive hypotheses from the abducibles are dealt with uniformly. The result is further generalized to a fixpoint semantics for abductive extended disjunctive programs. These characterizations allow us to have a parallel bottom-up model generation procedure for computing abductive explanations from any (range-restricted and function-free) normal, extended, and disjunctive programs with integrity constraints.
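The bottom-up model generation mentioned above can be pictured with a toy SATCHMO-style generator for ground disjunctive rules (the encoding is ours and omits the paper's transformation and parallelism): forward chaining fires rules whose bodies hold, splitting the search into one branch per disjunct in the head.

    # A rule is (body, heads): if all body atoms hold, some head atom must.
    # An empty head encodes an integrity constraint and prunes the branch.

    def generate_models(rules, model=frozenset()):
        """Yield models closed under the rules, splitting on disjunctions."""
        for body, heads in rules:
            if body <= model and not (heads & model):
                if not heads:                 # violated constraint: prune
                    return
                for h in heads:               # one branch per disjunct
                    yield from generate_models(rules, model | {h})
                return
        yield model                           # no rule fires: model is closed

    # p ∨ q.   r :- p.   :- q.
    rules = [(frozenset(), frozenset({"p", "q"})),
             (frozenset({"p"}), frozenset({"r"})),
             (frozenset({"q"}), frozenset())]
    print(list(generate_models(rules)))       # [frozenset({'p', 'r'})]

(In general the same model can be reached on several branches; a practical model generation procedure must work to avoid such redundancy.)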
 
We present SLDNFA, an extension of SLDNF resolution for abductive reasoning on abductive logic programs. SLDNFA solves the floundering abduction problem: nonground abductive atoms can be selected. SLDNFA also provides a partial solution for the floundering negation problem. Different abductive answers can be derived from an SLDNFA refutation; these answers provide different compromises between generality and comprehensibility. Two extensions of SLDNFA are proposed that satisfy stronger completeness results. The soundness of SLDNFA and its extensions is proved. Their completeness for minimal solutions with respect to implication, cardinality, and set inclusion is investigated. The formalization of SLDNFA presented here is an update of an older version and does not rely on skolemization of abductive atoms.
 
Abductive logic programming (ALP) and disjunctive logic programming (DLP) are two different extensions of logic programming. This paper investigates the relationship between ALP and DLP from the program transformation viewpoint. It is shown that the belief set semantics of an abductive program is expressed by the answer set semantics and the possible model semantics of a disjunctive program. Conversely, the possible model semantics of a disjunctive program is equivalently expressed by the belief set semantics of an abductive program, while such a transformation is generally impossible for the answer set semantics. Moreover, it is shown that abductive disjunctive programs are always reducible to disjunctive programs under both the answer set semantics and the possible model semantics. These transformations are verified from the complexity viewpoint. The results of this paper show that ALP and DLP are just different ways of looking at the same problem if we choose an appropriate semantics.
 
We show how declarative diagnosis techniques can be extended to cope with verification of operational properties, such as computed and correct answers, and of abstract properties, such as depth(k) answers and groundness dependencies. The extension is achieved by using a simple semantic framework, based on abstract interpretation. The resulting technique (abstract diagnosis) leads to elegant bottom-up and top-down verification methods, which do not require the symptoms to be determined in advance, and which are effective in the case of abstract properties described by finite domains.
 
We present simple and powerful generalized algebraic semantics for constraint logic programs that are parameterized with respect to the underlying constraint system. The idea is to abstract away from standard semantic objects by focusing on the general properties of any—possibly nonstandard—semantic definition. In constraint logic programming, this corresponds to a suitable definition of the constraint system supporting the semantic definition. An algebraic structure is introduced to formalize the notion of a constraint system, thus making classical mathematical results applicable. Both top-down and bottom-up semantics are considered. Nonstandard semantics for constraint logic programs can then be formally specified using the same techniques used to define standard semantics. Different nonstandard semantics for constraint logic languages can be specified in this framework. In particular, abstract interpretation of constraint logic programs can be viewed as an instance of the constraint logic programming framework itself.
 
PEPSys (Parallel ECRC PROLOG System) is a research project started in 1984 in the Computer Architecture Group of the European Computer-Industry Research Centre (ECRC). Its general goals are to study and evaluate new and practicable solutions to the problems of parallel logic programming. The PEPSys Abstract Machine described in this paper was designed to allow an efficient implementation of the PEPSys computational model. Based on the WAM, it incorporates a number of novel features to support the management of the logical variable and the control of the search space of the PEPSys computational model. Both a parallel implementation on a multiprocessor and a simulation system of scalable multiprocessor architectures implement the PEPSys Abstract Machine and yield effective speedups in parallel computations.
 
This paper describes a uniprocessor implementation of Flat Concurrent Prolog, based on an abstract machine and a compiler for it. The machine instruction set includes the functionality necessary to implement efficiently the parallel semantics of the language. In addition, the design includes a novel approach to the integration of a module system into a language. Both the compiler and the emulator for the abstract machine have been implemented and form the basis of Logix, a practical programming environment for Flat Concurrent Prolog. Its performance suggests that a process-oriented language need not be less efficient than a procedure-oriented language, even on a uniprocessor. In particular, it shows that a process queue and process spawning can be implemented as effectively as an activation stack and procedure calling, and thus debunks the “expensive-process-spawn myth”.
 
Traditional schemes for abstract interpretation-based global analysis of logic programs generally focus on obtaining procedure-argument mode and type information. Variable-sharing information is often given only the attention needed to preserve the correctness of the analysis. However, such sharing information can be very useful. In particular, it can be used for predicting run-time goal independence, which can eliminate costly run-time checks in AND-parallel execution. In this paper, a new algorithm for doing abstract interpretation in logic programs is described which concentrates on inferring the dependencies of the terms bound to program variables with increased precision and at all points in the execution of the program, rather than just at a procedure level. Algorithms are presented for computing abstract entry and success substitutions which extensively keep track of variable-aliasing and term-dependence information. In addition, a new, abstract domain-independent fixpoint algorithm is presented and described in detail. The algorithms are illustrated with examples. Finally, results from an implementation of the abstract interpreter are presented.
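For a flavour of what sharing information records, here is a deliberately crude aliasing approximation (ours, much simpler than the paper's domains): each variable maps to the set of variables it may share with, and abstractly binding a variable to a term merges the classes involved.

    def abstract_unify(sharing, var, term_vars):
        """Merge the sharing classes of var and of the term's variables."""
        group = sharing.setdefault(var, {var})
        for v in term_vars:
            group = group | sharing.setdefault(v, {v})
        for v in group:
            sharing[v] = group                # all members may now alias
        return sharing

    # After X = f(Y, Z), the three variables may alias; W stays independent.
    s = {}
    abstract_unify(s, "X", {"Y", "Z"})
    print(s["X"])                             # {'X', 'Y', 'Z'}
    print("W" not in s)                       # True: W shares with nothing

Under such an abstraction, goal independence amounts to disjointness of the argument variables' sharing sets, which is what the analysis tries to establish at compile time instead of checking at run time.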
 
We investigate the notion of “semicomputability,” intended to generalize the notion of recursive enumerability of relations to abstract structures. Two characterizations are considered and shown to be equivalent: one in terms of “partial computable functions” (for a suitable notion of computability over abstract structures) and one in terms of definability by means of Horn programs over such structures. This leads to the formulation of a “Generalized Church-Turing Thesis” for definability of relations on abstract structures.
 
The type concept of the logic programming language PROTOS-L supports sorts, subsort relationships, and parametric polymorphism. Due to the order-sortedness, types are also present at run time, replacing parts of the deduction process required in an unsorted version by efficient type computations. Together with the polymorphism, most of the flexibility of untyped logic programming carries over to the order-sorted approach. The operational semantics of PROTOS-L is based on polymorphic order-sorted resolution. Starting from an abstract specification, we show how this operational semantics can be implemented efficiently by an extension of the Warren Abstract Machine, and give a detailed description of all instructions and low-level procedures responsible for type handling. Since the extension leaves the WAM's AND/OR structure unchanged, it allows for all WAM optimizations like last call optimization, environment trimming, etc. Moreover, the extension is orthogonal in the sense that any program part not exploiting the facilities of computing with subtypes is executed with almost the same efficiency as on the original WAM.
 
A PROLOG compiler specializes the code for unification between calls and clause heads as they appear in the program. This code could be further specialized, yielding more efficient code, if more precise information about possible values for actual arguments were available. This paper addresses the problem of gathering such information. It develops a method for obtaining descriptions of possible values of program variables. The method is based upon a framework for abstract interpretation. The descriptions can be regarded as extended modes or a kind of type information. An important issue in the method is the treatment of free variables and the sharing of free variables between different values of program variables.
 
We extend the theory of Prolog to provide a framework for the study of Prolog compilation technology. For this purpose, we first demonstrate the semantic equivalence of two Prolog interpreters: a conventional SLD-refutation procedure and one that employs Warren's “last call” optimization. Next, we formally define the Warren Abstract Machine (WAM) and its instruction set and present a Prolog compiler for the WAM. Finally, we prove that the WAM execution of a compiled Prolog program produces the same result as the interpretation of its source.
 
The well-founded semantics has gained wide acceptance partly because it is a skeptical semantics. That is, the well-founded model posits as unknown atoms which are deemed true or false in other formalisms such as stable models. This skepticism makes the well-founded model not only useful in itself, but also suitable as a basis for other forms of non-monotonic reasoning. For instance, since algorithms to compute stable models are intractable, the atoms relevant to such algorithms can be limited to those undefined in the well-founded model. Thus, an engine that efficiently evaluates programs according to the well-founded semantics can be seen as a prerequisite to practical systems for non-monotonic reasoning. This paper describes the architecture of the Warren Abstract Machine (WAM)-based abstract machine underlying the XSB system. This abstract machine, called the SLG-WAM, uses tabling to efficiently compute the well-founded semantics of non-ground normal logic programs in a goal-directed way. To do so, the SLG-WAM requires sophisticated extensions to its core tabling engine for fixed-order stratified programs: a mechanism must be implemented to represent answers that are neither true nor false, and the delay and simplification operations, which serve to break and to resolve cycles through negation, must be implemented. We describe fully these extensions to our tabling engine, and demonstrate the efficiency of our implementation in two ways. First, we present a theorem that bounds the need for delay to those literals which are not dynamically stratified for a fixed-order computation. Second, we present performance results indicating that the overhead of delay and simplification to Prolog (or tabled) evaluations is minimal.
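As a purely semantic reference point (this is our sketch of Van Gelder's alternating fixpoint for ground programs, not the SLG-WAM machinery described above), the well-founded model can be computed as follows.

    def gamma(program, interp):
        """Least model of the GL reduct of the program w.r.t. interp.
        A rule is (head, positive_body, negative_body) over string atoms."""
        rules = [(h, pos) for (h, pos, neg) in program if not (neg & interp)]
        model, changed = set(), True
        while changed:
            changed = False
            for h, pos in rules:
                if pos <= model and h not in model:
                    model.add(h)
                    changed = True
        return model

    def well_founded(program, atoms):
        """Return (true, false, undefined) atoms of the well-founded model."""
        true = set()
        while True:                            # iterate the monotone gamma^2
            new_true = gamma(program, gamma(program, true))
            if new_true == true:
                break
            true = new_true
        false = set(atoms) - gamma(program, true)
        return true, false, set(atoms) - true - false

    # p :- not q.   q :- not p.   r :- not s.
    prog = [("p", set(), {"q"}), ("q", set(), {"p"}), ("r", set(), {"s"})]
    print(well_founded(prog, {"p", "q", "r", "s"}))
    # ({'r'}, {'s'}, {'p', 'q'}): p and q stay undefined, which is the
    # situation the engine's delay operation handles; r is true, s is false.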
 
In the research literature, logic programming, as a procedural interpretation of SLD resolution, has largely been associated with developments arising from the interaction of Colmerauer and Kowalski and their colleagues in the early seventies. Around 1967 the Group for Computing Research at the University of Aberdeen designed and implemented a programming system called Absys. It should be interesting to the logic programming community that Absys was a logic programming language in the full current sense of that descriptor, and the first such programming language. This claim is not intended to be aggressive or territorial (indeed, the current PROLOG “phenomenon” is certainly not of our causing and not something to which we would lay claim). Rather, it is hoped that logic programmers might be interested to hear how subsequent developments in what is now called equational programming, and alternative presentations of the unification algorithm, allow Absys to be recognized for what it was.
 
The use of tabling in logic programming allows bottom-up evaluation to be incorporated in a top-down framework, combining advantages of both. At the engine level, tabling also introduces issues not present in pure top-down evaluation, due to the need for subgoals and answers to access tables during resolution. This article describes the design, implementation, and experimental evaluation of data structures and algorithms for high-performance table access. Our approach uses tries as the basis for tables. Tries, a variant of discrimination nets, provide complete discrimination for terms, and permit a lookup and possible insertion to be performed in a single pass through a term. In addition, a novel technique of substitution factoring is proposed. When substitution factoring is used, the access cost for answers is proportional to the size of the answer substitution, rather than to the size of the answer itself. Answer tries can be implemented both as interpreted structures and as compiled WAM-like code. When they are compiled, the speed of computing substitutions through answer tries is competitive with the speed of unit facts compiled or asserted as WAM code. Because answer tries can also be created an order of magnitude more quickly than asserted code, they form a promising alternative for representing certain types of dynamic code, even in Prolog systems without tabling.
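A minimal sketch of the idea (ours; it ignores variable standardization and the compiled WAM-code representation): terms are flattened to a token stream in one preorder pass, and a combined lookup/insert walks the trie once over that stream.

    def flatten(term):
        """Preorder tokens: (functor, arity) for compounds, leaves as-is."""
        if isinstance(term, tuple):           # compound: (functor, arg, ...)
            yield (term[0], len(term) - 1)
            for arg in term[1:]:
                yield from flatten(arg)
        else:
            yield term

    class TermTrie:
        def __init__(self):
            self.root = {}

        def insert(self, term):
            """Single pass: walk or extend the trie; report if term is new."""
            node = self.root
            for token in flatten(term):
                node = node.setdefault(token, {})
            new = "$end" not in node
            node["$end"] = True
            return new

    t = TermTrie()
    print(t.insert(("p", ("f", "X"), "a")))   # True: new entry
    print(t.insert(("p", ("f", "X"), "a")))   # False: duplicate detected
    print(t.insert(("p", ("f", "X"), "b")))   # True: shares the p/2, f/1 prefix

This also hints at where substitution factoring pays off: answers with a common prefix share trie nodes, so only the part that differs between answers is traversed per answer.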
 
We represent properties of actions in a logic programming language that uses both classical negation and negation as failure. The method is applicable to temporal projection problems with incomplete information, as well as to reasoning about the past. It is proved to be sound relative to a semantics of action based on states and transition functions. The paper extends the work of Eshghi and Kowalski [6], Evans [7], and Apt and Bezem [1] on representing properties of actions in logic programming languages with negation as failure, with the goal of overcoming some of the limitations of the earlier work: the existing formalizations of action in logic programming are adequate for only the simplest kind of temporal reasoning, "temporal projection," in which we are given a description of the initial state of the world and use properties of actions to determine what the world will look like after a series of actions is performed.
 
We address the problem of representing common sense knowledge about action domains in the formalisms of logic programming and default logic. We employ a methodology proposed by Gelfond and Lifschitz which involves first defining a high-level language for representing knowledge about actions, and then specifying a translation from the high-level action language into a general-purpose formalism, such as logic programming. Accordingly, we define a high-level action language AE, and specify sound and complete translations of portions of AE into logic programming and default logic. The language AE includes propositions that represent "static causal laws" of the following kind: a fluent formula ψ can be made true by making a fluent formula φ true (or, more precisely, ψ is caused whenever φ is caused). Such propositions are more expressive than the state constraints traditionally used to represent background knowledge. Our translations of AE domain descriptions into logic programming and default logic are simple, in part because the noncontrapositive nature of causal laws is easily reflected in such rule-based formalisms.
 
We describe a simple declarative language E for describing the effects of a series of action occurrences within a narrative. E is analogous to Gelfond and Lifschitz's Language A and its extensions, but is based on a different ontology. The semantics of E is based on a simple characterisation of persistence which facilitates a modular approach to extending the expressivity of the language. Domain descriptions in A can be translated to equivalent theories in E. We show how, in the context of reasoning about actions, E's narrative-based ontology may be exploited in order to characterise and synthesise two complementary notions of explanation. According to the first notion, explanation may be partly modelled as the process of suitably extending an apparently inconsistent theory written in E so as to establish consistency, thus providing a natural method, in many cases, to account for conflicting sets of information about the domain. According to the second notion, observations made at later times can sometimes be explained in terms of what is true at earlier times. This enables domains to be given an alternative characterisation in which knowledge arising from observations is appropriately separated from other aspects of the domain. We also describe how E domains may be implemented as Event Calculus style logic programs, which facilitate automated reasoning both backwards and forwards in time, and which behave correctly even when the knowledge entailed by the domain description is incomplete.
 
Gelfond and Lifschitz introduce a declarative language A for describing effects of actions and describe translations of theories in this language into extended logic programs. In this paper we extend the language A and its translation to allow reasoning about the effects of concurrent actions. The logic programming formalization of situation calculus with concurrent actions presented in the paper is of independent interest and may serve as a test bed for the investigation of various transformations and logic programming inference mechanisms.
 
We study Shapiro's method of bug diagnosis in the theoretical framework of Horn clause logic programming. Working with Clark's semantics (the Herbrand universe with variables, which is more general than the usual variable-free semantics), we extend the scope of the fixpoint and declarative semantics of logic programming.
 
There are numerous applications where an agent needs to reason about the beliefs of another agent, as well as about the actions that other agents may take. In [T. Eiter, V.S. Subrahmanian, G. Pick, Heterogeneous Active Agents, I: Semantics, Artificial Intelligence 108(1–2) (1999) 179–255] the concept of an agent program is introduced, and a language within which the operating principles of an agent can be declaratively encoded on top of imperative data structures is defined. In this paper we first introduce certain belief data structures that an agent needs to maintain. Then we introduce the concept of a Meta Agent Program (map), which extends the framework of Refs. [T. Eiter, V.S. Subrahmanian, Heterogeneous Active Agents, II: Algorithms and Complexity, Artificial Intelligence 108(1–2) (1999) 257–307; loc. cit.] so as to allow agents to perform metareasoning. We build a formal semantics for maps, and show how this semantics supports not just the beliefs one agent may have about another agent's state, but also beliefs about that agent's beliefs about the first agent's actions, beliefs about its beliefs about the first agent's state, and so on. Finally, we provide a translation that takes any map as input and converts it into an agent program such that there is a one-to-one correspondence between the semantics of the map and the semantics of the resulting agent program. This correspondence allows an implementation of maps to be built on top of an implementation of agent programs.
 
Strategies used in deductive data bases try as far as possible to replace deduction in Horn clause theories TS by evaluation of relational algebra formulas in a set of ground atoms. In this paper we extend the relational algebra in order to take into account incomplete databases where incompleteness is represented by Skolem constants. We first define the notion of the extended model EM, similar to the Herbrand model, which is associated to a given theory TS. Specific satisfiability conditions applied to EM define the link between provability in TS and satisfiability in EM. Then we define an extended relational algebra to compute every ground instance of a given formula. It is shown that this algebra is always sound, and complete for a particular class of formulas which is not too restrictive.
 
This paper describes an algebraic approach to the sharing analysis of logic programs based on an abstract domain of set logic programs. Set logic programs are logic programs in which the terms are sets of variables and unification is based on an associative, commutative, and idempotent equality theory. All of the basic operations required for sharing analyses, as well as their formal justification, are based on simple algebraic properties of set substitutions and set-based atoms. An ordering on set-based syntactic objects, similar to “less general” on concrete syntactic objects, is shown to reflect the notion of “less sharing” information. The (abstract) unification of a pair of set-based terms corresponds to finding their most general ACI1 unifier with respect to this ordering. The unification of a set of equations between set-based terms is defined exactly as in the concrete case, by solving the equations one by one and repeatedly applying their solutions to the remaining equations. We demonstrate that all of the operations in a sharing analysis have natural definitions which are both correct and optimal.
 
Algebraic specifications are generalized to the case of nondeterministic operations by admitting models with set-valued functions (multialgebras). General (in particular, nonconfluent) term-rewriting systems are studied as a specification language for this semantic framework. A calculus for nondeterministic specifications is given which is similar to term rewriting but which employs an additional determinacy predicate. Soundness, ground completeness, and initiality results are given. Small examples illustrate the range of possible applications.
 
The aim of our work is the definition of compositional semantics for modular units over the class of normal logic programs. To this end, we propose a declarative semantics for normal logic programs in terms of model classes that is monotonic, in the sense that, for any programs P and P′, every model of P ∪ P′ is also a model of P. We show that in the model class associated to every program there is a least model that can be seen as the semantics of the program, and that this least model may be built upwards as the least fixpoint of a continuous immediate consequence operator. In addition, it is proved that this least model is "typical" for the class of models of Clark-Kunen's completion of the program, which means that our semantics is equivalent to Clark-Kunen's completion. Moreover, following the approach defined in a previous paper, it is shown that our semantics constitutes a "specification frame" equipped with the adequate categorical constructions needed to define compositional and fully abstract (categorical) semantics for a number of program units. In particular, we provide a categorical semantics of arbitrary normal logic program fragments which is compositional and fully abstract with respect to the (standard) union.
 
The Silences of the Archives, the Renown of the Story. The Martin Guerre affair has been told many times since Jean de Coras and Guillaume Lesueur published their accounts in 1561. It is in many ways a perfect intrigue, with uncanny resemblance, persuasive deception, and a surprising end when the two Martins stood face to face, memory to memory, before captivated judges and a guilty-feeling Bertrande de Rols. The historian wanted to go beyond the known story in order to discover the world of the heroes. This research led to disappointments and surprises as documents were discovered concerning the environment of Artigat's inhabitants and bearing directly on the main characters, thanks to notarial contracts. Along the way, the study of the works of Coras and Lesueur took a new direction. Coming back to the affair a quarter century later did not result in finding new documents (some are perhaps still buried in Spanish archives), but by going back over her tracks, the historian could only be struck by the silences of the archives that refuse to reveal their secrets and, at the same time, by the possible openings they suggest, by the intuition that almost invisible threads link here and there characters and events.
 
Extended unification algorithms are considered for the integration of a functional language into a logic programming language. The extended language is a particular case of a logic programming language with equality. A comprehensive survey is given, structured according to the procedural semantics adopted for the functional language. This survey covers past work based on evaluation and derivation (as procedural semantics of the functional language) and new algorithms based on surderivation. These algorithms are compared especially with regard to their completeness. We also discuss issues arising in practice when different surderivation strategies are used, as these directly influence efficiency and especially termination. This leads us to propose an algorithm based on lazy surderivation which compares favorably with the others and endows logic programming with two advanced features of functional programming: automatic coroutining and handling of infinite data structures without extra control.
 
Integrating knowledge from multiple sources is an important aspect of automated reasoning systems. In the first part of this series of papers, we presented a uniform declarative framework, based on annotated logics, for amalgamating multiple knowledge bases when these knowledge bases (possibly) contain inconsistencies, uncertainties, and nonmonotonic modes of negation. We showed that annotated logics may be used, with some modifications, to mediate between different knowledge bases. The multiple knowledge bases are amalgamated by embedding the individual knowledge bases into a lattice. In this paper, we briefly describe an SLD-resolution-based proof procedure that is sound and complete w.r.t. our declarative semantics. We then develop an OLDT-resolution-based query processing procedure (QPP), MULTI_OLDT, that satisfies two important properties: (1) efficient reuse of previous computations is achieved by maintaining a table; we describe the structure of this table and show that table operations can be executed efficiently; and (2) approximate, interruptible query answering is achieved, i.e., it is possible to obtain an "intermediate, approximate" answer from the QPP by interrupting it at any point during its execution. The design of the MULTI_OLDT procedure includes the development of run-time algorithms to incrementally and efficiently update the table.
 
Interval narrowing techniques are a key issue for handling constraints over real numbers in the logic programming framework. However, the standard fixpoint algorithm used for computing an approximation of arc consistency may give rise to cyclic phenomena and hence to problems of slow convergence. Analysis of these cyclic phenomena shows: (1) that a large number of operations carried out during a cycle are unnecessary; (2) that many others could be removed from cycles and performed only once when these cycles have been processed. What is proposed here is a revised interval narrowing algorithm for identifying and simplifying such cyclic phenomena dynamically. These techniques are of particular interest for computing stronger consistencies which are often required for a substantial pruning. Experimental results show that such dynamic optimizations improve performance significantly.
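For a single constraint the narrowing step is simple; the sketch below (ours) narrows interval boxes under x + y = z to a local fixpoint. The cyclic slow convergence the paper targets arises when many such projections feed each other across a constraint network.

    def narrow_sum(x, y, z):
        """Narrow intervals (lo, hi) under x + y = z; None if inconsistent."""
        while True:
            nx = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
            ny = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
            nz = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
            if (nx, ny, nz) == (x, y, z):
                return x, y, z                 # local fixpoint reached
            if any(lo > hi for lo, hi in (nx, ny, nz)):
                return None                    # empty interval: inconsistent
            x, y, z = nx, ny, nz

    print(narrow_sum((0.0, 10.0), (0.0, 10.0), (12.0, 15.0)))
    # ((2.0, 10.0), (2.0, 10.0), (12.0, 15.0)): x and y are each at least 2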
 
We investigate the complexity of derivations from logic programs, and find it closely related to the complexity of computations of alternating Turing machines. In particular, we define three complexity measures over logic programs—goal-size, length, and depth—and show that goal-size is linearly related to alternating space, the product of length and goal-size is linearly related to alternating tree-size, and the product of depth and goal-size is linearly related to alternating time. The bounds obtained are simultaneous. As an application, we obtain a syntactic characterization of Nondeterministic Linear Space and Alternating Linear Space via logic programs.
 
The NRL Protocol Analyzer is a prototype special-purpose verification tool, written in Prolog, that has been developed for the analysis of cryptographic protocols that are used to authenticate principals and services and distribute keys in a network. In this paper we give an overview of how the Analyzer works and describe its achievements so far. We also show how our use of the Prolog language benefited us in the design and implementation of the Analyzer.
 
This paper illustrates the role of a class of “prop”-ositional logic programs in the analysis of complex properties of logic programs. Analyses are performed by abstracting Prolog programs to corresponding “prop”-ositional logic programs which approximate the original programs and have finite meanings. We focus on a groundness analysis which is equivalent to that obtained by abstract interpretation using the domain Prop. The main contribution is in the ease in which a highly efficient implementation of the analysis is obtained. The implementation is bottom-up and provides approximations of a program's success patterns. Goal-dependent information such as call patterns is obtained using a magic-set transformation. A novel compositional approach is applied so that call patterns for arbitrary goals are derived in a precise and efficient way.
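As a small worked example of the Prop idea (ours, using brute-force truth tables where the paper uses an efficient bottom-up implementation): the binding X = f(Y, Z) contributes the Boolean formula x <-> (y & z), read "X is ground iff Y and Z are", and a variable is definitely ground when every model of the accumulated formula makes it true.

    from itertools import product

    def models(formula, variables):
        """All satisfying assignments of `formula` over `variables`."""
        return [dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))
                if formula(dict(zip(variables, bits)))]

    names = ["x", "y", "z"]
    # X = f(Y, Z) together with Y = a:  (x <-> (y & z)) & y
    phi = lambda a: (a["x"] == (a["y"] and a["z"])) and a["y"]

    ground = {v for v in names if all(m[v] for m in models(phi, names))}
    print(ground)                  # {'y'}: X is ground exactly when Z is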
 
Annotated logics were introduced in [43] and later studied in [5, 7, 31, 32]. In [32], annotations were extended to allow variables and functions, and it was argued that such logics can be used to provide a formal semantics for rule-based expert systems with uncertainty. In this paper, we continue to investigate the power of this approach. First, we introduce a new semantics for such programs based on ideals of lattices. Subsequently, some proposals for multivalued logic programming [5, 7, 18, 32, 40, 47] as well as some formalisms for temporal reasoning [1, 3, 41] are shown to fit into this framework. As an interesting byproduct of the investigation, we obtain a new result concerning multivalued logic programming: a model theory for Fitting's bilattice-based logic programming, which until now has not been characterized model-theoretically. This is accompanied by a corresponding proof theory.
 
The treatment of negation and negative information in a logic programming environment has turned out to be a major problem. We introduce a relativized version of Reiter's closed world assumption and study it from a logical point of view. In particular, we look at the questions of consistency and conservative extension.
 
Some transformation operations for logic programs, basic for partial deduction, program specialization and transformation, and program synthesis from specifications, are studied with respect to the minimal S-model semantics defined in [31, 15–17]. Such a semantics is, in our opinion, more interesting than the usual least Herbrand model semantics, since it captures the program's behavior with respect to computed answers. The S-semantics is also the strongest semantics which is maintained by unrestricted unfolding [31]. For these operations, we single out general applicability conditions and prove that they guarantee that the minimal S-model semantics of a program is not modified by the transformation. Some sufficient conditions, which are common in practice and easy to verify since they are mostly syntactic, are also supplied, together with simple examples.
 
We present in this paper a unified processing for real, integer, and Boolean constraints based on a general narrowing algorithm which applies to any n-ary relation on R. The basic idea is to define, for every such relation ρ, a narrowing function based on the approximation of ρ by a Cartesian product of intervals whose bounds are floating-point numbers. We then focus on nonconvex relations and establish several properties. The most important of these properties is applied to justify the computation of usual relations defined in terms of intersections of simpler relations. We extend the scope of the narrowing algorithm used in the language BNR-Prolog to integer and disequality constraints, to Boolean constraints, and to relations mixing numerical and Boolean values. As a result, we propose a new Constraint Logic Programming language called CLP(BNR), where BNR stands for Booleans, Naturals, and Reals. In this language, constraints are expressed in a unique structure, allowing the mixing of real numbers, integers, and Booleans. We end with the presentation of several examples showing the advantages of such an approach from the point of view of expressiveness, and give some preliminary computational results from a prototype.
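Booleans as 0/1 intervals can be pictured with the same narrowing style used for arithmetic relations; the sketch below (ours, not CLP(BNR) code) narrows under z = x AND y, using the sound linear bounds z >= x + y - 1 and z <= min(x, y).

    def narrow_and(x, y, z):
        """Narrow 0/1 intervals (lo, hi) under z = x AND y."""
        while True:
            nz = (max(z[0], x[0] + y[0] - 1), min(z[1], x[1], y[1]))
            nx = (max(x[0], nz[0]), min(x[1], nz[1] - y[0] + 1))
            ny = (max(y[0], nz[0]), min(y[1], nz[1] - x[0] + 1))
            if (nx, ny, nz) == (x, y, z):
                return x, y, z
            if any(lo > hi for lo, hi in (nx, ny, nz)):
                return None                    # inconsistent
            x, y, z = nx, ny, nz

    print(narrow_and((0, 1), (0, 1), (1, 1)))  # ((1, 1), (1, 1), (1, 1))
    print(narrow_and((0, 1), (1, 1), (0, 0)))  # ((0, 0), (1, 1), (0, 0))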
 
Logic programs are considered as abductive programs with negative literals as abductive hypotheses. A simple framework for semantics of logic programming is introduced based on the notion of acceptable hypotheses. We show that our framework captures, generalizes, and unifies different semantic concepts (e.g., well-founded models, stable models, stationary semantics, etc.) in logic programming. We demonstrate that our framework accommodates in a natural way both the minimalism and maximalism intuitions to semantics of logic programming. Further, we show that Eshghi and Kowalski's procedure is a proof procedure for the abductive semantics. We also give sufficient conditions for the coincidence between different semantics.
 
A general evaluation method for logic programs is presented based on the use of hash or associative (CA: content-addressable) memories for the variable environment and the database. The bindings are stored in the hash or CA memory, and accessed by the variable names and their “contexts.” Another hash or CA memory stores the subterms without variables in the form of “monocopy lists.” The method is an extension of that employed in the H-PROLOG system. Applications of the method are discussed both for serial depth-first evaluation and for heuristic (best-first) concurrent evaluation. In the heuristic evaluation, the processes share the common memories for the environments and the database. Systems employing this method dynamically distinguish local variables from global variables to economize the memory usage, and require no garbage collection cycle for the working memories and the databases.
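A toy rendition of the hashed-environment idea (the representation and names here are ours): bindings are keyed by (variable name, context identifier) in one shared table, so no per-call frame is allocated and a lookup dereferences through the table.

    class HashedEnv:
        """Bindings for many environments live in a single hash table."""

        def __init__(self):
            self.table = {}                    # (var, context) -> value

        def bind(self, var, context, value):
            self.table[(var, context)] = value

        def lookup(self, var, context):
            """Dereference a variable in its context, following ref chains."""
            value = self.table.get((var, context))
            while isinstance(value, tuple) and value[0] == "ref":
                value = self.table.get((value[1], value[2]))
            return value

    env = HashedEnv()
    env.bind("X", 1, ("ref", "Y", 2))          # X in context 1 -> Y in context 2
    env.bind("Y", 2, "a")
    print(env.lookup("X", 1))                  # a

With a content-addressable memory, the dictionary access above becomes an associative hardware lookup; the (variable, context) key plays the role of the "context" qualifier described in the abstract.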
 
This paper shows that logic programs and attribute grammars are closely related. Constructions are given which transform logic programs into semantically equivalent attribute grammars, and vice versa. This opens the way for applying in logic programming some methods developed for attribute grammars. These results are used to find a sufficient condition under which no infinite term can be created during a computation of a logic program, and to define a nontrivial class of logic programs which can be run without employing unification in its general form.
 
In this paper, it is shown that a three-valued autoepistemic logic provides an elegant unifying framework for some of the major semantics of normal and disjunctive logic programs and logic programs with classical negation, namely, the stable semantics, the well-founded semantics, supported models, Fitting's semantics, Kunen's semantics, the stationary semantics, and answer sets. For the first time, so many semantics are embedded into one logic. The framework extends previous results by Gelfond, Lifschitz, Marek, Subrahmanian, and Truszczynski on the relationships between logic programming and Moore's autoepistemic logic. The framework suggests several new semantics for negation-as-failure. In particular, we will introduce the epistemic semantics for disjunctive logic programs. In order to motivate the epistemic semantics, an interesting class of applications called ignorance tests will be formalized; it will be proved that ignorance tests can be defined by means of the epistemic semantics, but not by means of the old semantics for disjunctive programs. The autoepistemic framework provides a formal foundation for an environment that integrates different forms of negation. The role of classical negation and various forms of negation-by-failure in logic programming will be briefly discussed.
 
Top-cited authors
Michael J. Maher
  • UNSW Sydney
Joxan Jaffar
  • National University of Singapore
Stephen Muggleton
  • Imperial College London
Luc De Raedt
  • KU Leuven
Melvin Fitting
  • CUNY Graduate Center