The Journal of Logic Programming

Published by Elsevier
Print ISSN: 0743-1066
Publications
We propose an approach to declarative programming which integrates the functional and relational paradigms by taking possibly non-deterministic lazy functions as the fundamental notion. Classical equational logic does not supply a suitable semantics in a natural way. Therefore, we suggest viewing programs as theories in a constructor-based conditional rewriting logic. We present proof calculi and a model theory for this logic, and we prove the existence of free term models which provide an adequate intended semantics for programs. We develop a sound and strongly complete lazy narrowing calculus, which is able to support sharing without the technical overhead of graph rewriting and to identify safe cases for eager variable elimination. Moreover, we give some illustrative programming examples, and we discuss the implementability of our approach.
 
An interactive theorem prover, Isabelle, is under development. In LCF, each inference rule is represented by one function for forwards proof and another (a tactic) for backwards proof. In Isabelle, each inference rule is represented by a Horn clause. Resolution gives both forwards and backwards proof, supporting a large class of logics. Isabelle has been used to prove theorems in Martin-Löf's constructive type theory. Quantifiers pose several difficulties: substitution, bound variables, Skolemization. Isabelle's representation of logical syntax is the typed λ-calculus, requiring higher-order unification. It may have potential for logic programming. Depth-first subgoaling along inference rules constitutes a higher-order PROLOG.
 
This paper presents a partial deduction method in disjunctive logic programming. Partial deduction in normal logic programs is based on unfolding between normal clauses, hence it is not applicable to disjunctive logic programs in general. We therefore introduce a new partial deduction technique, called disjunctive partial deduction, which preserves the minimal model semantics of positive disjunctive programs and the stable model semantics of normal disjunctive programs. From the procedural side, disjunctive partial deduction is combined with a bottom-up proof procedure of disjunctive logic programs, and top-down partial deduction is introduced for query optimization. Disjunctive partial deduction is also applied to optimizing abductive logic programs and compiling propositional disjunctive programs.
 
Logic programming provides a model for rule-based reasoning in expert systems. The advantage of this formal model is that it makes available many results from the semantics and proof theory of first-order predicate logic. A disadvantage is that in expert systems one often wants to use, instead of the usual two truth values, an entire continuum of “uncertainties” in between. That is, instead of the usual “qualitative” deduction, a form of “quantitative” deduction is required. We present an approach to generalizing the Tarskian semantics of Horn clause rules to justify a form of quantitative deduction. Each clause receives a numerical attenuation factor. Herbrand interpretations, which are subsets of the Herbrand base, are generalized to subsets which are fuzzy in the sense of Zadeh. We show that, as a result, the fixpoint method in the semantics of Horn clause rules can be developed in much the same way for the quantitative case. As for proof theory, the interesting phenomenon is that a proof should be viewed as a two-person game. The value of the game turns out to be the truth value of the atomic formula to be proved, evaluated in the minimal fixpoint of the rule set. The analog of the PROLOG interpreter for quantitative deduction becomes a search of the game tree (= proof tree) using the alpha-beta heuristic well known in game theory.
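A rough picture of the quantitative machinery, as a minimal sketch rather than the paper's game-theoretic procedure: each rule carries a hypothetical attenuation factor, a conjunction takes the minimum certainty of its conjuncts, a rule multiplies its body's certainty by its factor, and alternatives are combined by taking the maximum over proofs. The rule/3 facts below are invented examples.

    % Hypothetical attenuated rules: rule(Head, BodyGoals, AttenuationFactor).
    rule(fallible(X), [human(X)], 0.9).
    rule(human(socrates), [], 1.0).

    % prove(+Goal, -Certainty): certainty contributed by one proof of Goal,
    % i.e. the rule's attenuation factor times the minimum certainty of its body.
    prove(Goal, Certainty) :-
        rule(Goal, Body, Factor),
        prove_body(Body, BodyCertainty),
        Certainty is Factor * BodyCertainty.

    prove_body([], 1.0).
    prove_body([G|Gs], C) :-
        prove(G, CG),
        prove_body(Gs, Rest),
        C is min(CG, Rest).

    % best_certainty(+Goal, -Best): maximum over alternative proofs (the "or" case).
    best_certainty(Goal, Best) :-
        findall(C, prove(Goal, C), Cs),
        max_list(Cs, Best).

    % ?- best_certainty(fallible(socrates), C).   % C = 0.9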
 
Deductive database query languages for recursively typed complex objects based on the set and tuple constructs are studied. A fundamental characteristic of such complex objects is that in them, sets may contain members with arbitrarily deep nesting of tuple and set constructs. Relative to mappings from flat relations to flat relations, two extensions of COL in this context (with stratified semantics and inflationary semantics, respectively) are shown to have the expressive power of computable queries. Although the deductive calculus of Bancilhon and Khoshafian has the ability to simulate Turing machines, when restricted to flat input and output its expressive power is characterized by a weak variant of the conjunctive queries.
 
We present a system for generating parsers based directly on the metaphor of parsing as deduction. Parsing algorithms can be represented directly as deduction systems, and a single deduction engine can interpret such deduction systems so as to implement the corresponding parser. The method generalizes easily to parsers for augmented phrase structure formalisms, such as definite-clause grammars and other logic grammar formalisms, and has been used for rapid prototyping of parsing algorithms for a variety of formalisms including variants of tree-adjoining grammars, categorial grammars, and lexicalized context-free grammars.
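To illustrate the "parsing as deduction" metaphor, the inference rules of a CYK-style recognizer can be written directly as clauses over items item(Category, From, To). The toy grammar and the item/3 encoding below are our own, and the paper's engine would instead run such rules under an agenda-driven deduction engine with tabling of items.

    % Toy lexicon and grammar (invented for illustration).
    word(1, the).  word(2, dog).  word(3, barks).
    lex(det, the). lex(n, dog).   lex(v, barks).
    rule(np, det, n).  rule(s, np, vp).
    unary(vp, v).

    % Deduction rules for items item(Cat, From, To): "Cat spans positions From..To".
    item(Cat, I, J) :- word(I, W), lex(Cat, W), J is I + 1.
    item(Cat, I, J) :- unary(Cat, B), item(B, I, J).
    item(Cat, I, K) :- rule(Cat, B, C), item(B, I, J), item(C, J, K).

    % ?- item(s, 1, 4).   % the sentence "the dog barks" is recognized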
 
Recently there has been increased interest in logic programming-based default reasoning approaches which do not use negation-as-failure in their object language. Instead, default reasoning is modelled by rules and a priority relation among them. In this paper we compare the expressive power of two approaches in this family of logics: Defeasible Logic, and sceptical Logic Programming without Negation as Failure (LPwNF). Our results show that the former has a strictly stronger expressive power. The difference is caused by the latter logic's failure to capture the idea of teams of rules supporting a specific conclusion.
 
Three orthogonal memoing techniques for deterministic logic programs are introduced and evaluated on the basis of their empirical performance. They share the same basic idea: efficient memoing is achieved by losing information gracefully, i.e., memoing benefits from a form of abstraction. Abstract answers are most general computed answers of deterministic logic programs obtained through repeated applications of a simple clause composition operator. After describing a meta-interpreter returning abstract answers, we derive a class of program transformations that compute abstract answers more efficiently: they are ideal lemmas due to their goal-independent nature. For this reason, their “hit rate” is usually higher than in the case of conventional memoing. Indexing by structural properties of terms is an effective way to speed up the retrieval of lemmas, especially in the case of simple programs using linear recursion. Delphi lemmas add a self-adjusting control mechanism on the amount of memoing. Answers are memoized only by acquiescence of an oracle. We show that random oracles perform surprisingly well as Delphi lemmas tend naturally to cover the “hot spots” of the program.
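For orientation, the plainest form of memoing in a Prolog meta-interpreter looks roughly like the sketch below. It is ours, not the paper's: it records only ground answers, may return duplicate solutions, assumes program clauses are visible to clause/2, and has none of the abstraction, indexing, or Delphi control that the paper develops.

    :- dynamic lemma/1.

    % solve/1: a vanilla meta-interpreter that records successful ground
    % goals as lemmas and reuses them before falling back to the program.
    solve(true) :- !.
    solve((A, B)) :- !, solve(A), solve(B).
    solve(G) :- lemma(G).
    solve(G) :-
        clause(G, Body),
        solve(Body),
        ( ground(G), \+ lemma(G) -> assertz(lemma(G)) ; true ).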
 
The semantics of PROLOG programs is usually given in terms of the model theory of first-order logic. However, this does not adequately characterize the computational behavior of PROLOG programs. PROLOG implementations typically use a sequential evaluation strategy based on the textual order of clauses and literals in a program, as well as nonlogical features like cut. In this work we develop a denotational semantics that captures the computational behavior of PROLOG. We present a semantics for “cut-free” PROLOG, which is then extended to PROLOG with cut. For each case we develop a congruence proof that relates the semantics to a standard operational interpreter. As an application of our denotational semantics, we show the correctness of some standard “folk” theorems regarding transformations on PROLOG programs.
 
Logic programming with negation has been given a declarative semantics by Clark's completed database (CDB), and one can consider the consequences of the CDB in either two-valued or three-valued logic. Logic programming also has a proof theory given by SLDNF derivations. Assuming the data-dependency condition of strictness, we prove that the two-valued and three-valued semantics are equivalent. Assuming allowedness (a condition on occurrences of variables), we prove that SLDNF is complete for the three-valued semantics. Putting these two results together, we have completeness of SLDNF deductions for strict and allowed databases and queries under the standard two-valued semantics. This improves a theorem of Cavedon and Lloyd, who obtained the same result under the additional assumption of stratifiability.
 
The differential (or seminaive) approach to query evaluation in function free, recursively defined, Horn clauses was recently proposed as an improvement to the naive bottom-up evaluation strategy. In this paper, we extend the approach to efficiently accommodate n recursively defined predicates in the body of a Horn clause.
 
Program analysis based on abstract interpretation has proven very useful in compilation of constraint and logic programming languages. Unfortunately, the traditional goal-dependent framework is inherently imprecise. This is because it handles call and return in such a way that dataflow information may be re-asserted unnecessarily, leading to a loss of precision for many description domains. For a few specific domains, the literature contains proposals to overcome the problem, and some implementations use various unpublished tricks that sometimes avoid the precision loss. The purpose of this paper is to map the landscape of goal-dependent, goal-independent, and combined approaches to generic analysis of logic programs. This includes formalising existing methods and tricks in a way that is independent of specific description domains. Moreover, we suggest new methods for overcoming the loss of precision — altogether eight different semantics are considered and compared. We provide theoretical results determining the relative accuracy of the approaches. These show that two of our new semantics are uniformly more accurate than existing approaches. Experiments that we have performed (for two description domains) with implementations of the eight different approaches enable a discussion of their relative runtime performances. We discuss the expected effect on other domains as well and conclude that our new methods can be trusted to yield significantly more accurate analysis for a small extra implementation effort, without compromising the efficiency of analysis.
 
We present a new and general approach for defining, understanding, and computing logic programming semantics. We consider disjunctive programs for generality, but our results are still interesting if specialized to normal programs. Our framework consists of two parts: (a) a semantical part, where semantics are defined in an abstract way as the weakest semantics satisfying certain properties, and (b) a procedural part, namely a bottom-up query evaluation method based on operators working on conditional facts. As to (a), we concentrate in this paper on a particular set of abstract properties (the most important being the unfolding or partial evaluation property GPPE) and define a new semantics D-WFS, which extends WFS and GCWA. We also mention that various other semantics, like Fitting's comp3, Schlipf's WFSc, Gelfond and Lifschitz's STABLE and Ross and Topor's WGCWA (also introduced independently by Rajasekar et al. (A. Rajasekar, J. Lobo, J. Minker, Journal of Automated Reasoning 5 (1989) 293–307)), can be captured in our framework. In (b) we compute for any program P a residual program res(P), and show that res(P) is equivalent to the original program under very general conditions on the semantics (which are satisfied, e.g., by the well-founded, stable, stationary, and static semantics). Many queries with respect to these semantics can already be answered on the basis of the residual program. In fact, res(P) is complete for D-WFS, WFS and GCWA.
 
This paper investigates two fixpoint approaches for minimal model reasoning with disjunctive logic programs P. The first one, called model generation, is based on an operator T_P^INT defined on sets of Herbrand interpretations whose least fixpoint is logically equivalent to the set of minimal Herbrand models of the program. The second approach, called state generation, uses a fixpoint operator T_P^s based on hyperresolution. It operates on disjunctive Herbrand states, and its least fixpoint is the set of logical consequences of P, the so-called minimal model state of the program. We establish a useful relationship between hyperresolution by T_P^s and model generation by T_P^INT. Then we investigate the problem of continuity of the two operators T_P^s and T_P^INT. It is known that the operator T_P^s is continuous, and so it reaches its least fixpoint in at most ω iterations. On the other hand, the question of whether T_P^INT is continuous has been open. We show by a counterexample that T_P^INT is not continuous. Nevertheless, we prove that it converges towards its least fixpoint in at most ω iterations, too, as follows from the relationship that we show exists between hyperresolution and model generation. We define an iterative version of T_P^INT that computes the perfect model semantics of stratified disjunctive logic programs. On each stratum of the program, this operator converges in at most ω iterations. Model generation for the stable semantics and for the partial stable semantics is achieved by using this iterative operator together with the evidential transformation and the 3-S transformation, respectively.
 
This paper presents a parallel execution system (PDP: Prolog Distributed Processor) for efficiently supporting both independent-AND and OR parallelism on distributed-memory multiprocessors. The system is composed of a set of workers with a hierarchically structured scheduler. Each worker operates on its own private memory and interprocessor communication is performed only by the passing of messages. The execution model follows a multisequential approach in order to maintain the sequential optimizations. Independent-AND parallelism is exploited following a fork-join approach, and OR parallelism following a recomputation approach. PDP deals with OR-under-AND parallelism by producing the solutions of a set of parallel goals in a distributed way, that is, by creating a new task for each element of the cross product. This approach has the advantage of avoiding both storing partial solutions and synchronizing workers, resulting in a largely increased performance. Different scheduling policies have been studied, and granularity controls have been introduced for each kind of parallelism. PDP has been implemented on a network of transputers and performance results show that PDP introduces very little overhead into sequential programs, and provides a high speedup for coarse-grain parallel programs.
 
We develop a natural technique for defining functions in logic, i.e. PROLOG, which directly yields lazy evaluation. Its use does not require any change to the PROLOG interpreter. Function definitions run as PROLOG programs and so run very efficiently. It is possible to combine lazy evaluation with nondeterminism and simulate coroutining. It is also possible to handle infinite data structures and implement networks of communicating processes. We analyze this technique and develop a precise definition of lazy evaluation for lists. For further efficiency we show how to preprocess programs and ensure, using logical variables, that values of expressions once generated are remembered for future access. Finally, we show how to translate programs in a simple functional language into programs using this technique.
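The flavor of the technique can be conveyed by one common way of encoding lazy lists in plain Prolog, where the tail of a stream is an unevaluated term that is forced only on demand. This is a generic sketch (the predicate names ints_from/2, force/2 and take/3 are ours), not the paper's exact definitions or its scheme for remembering values once generated.

    % ints_from(N, Stream): Stream starts with N; its tail is a suspension,
    % an unevaluated term that names the computation producing the rest.
    ints_from(N, [N | ints_from(M)]) :- M is N + 1.

    % force(+Suspension, -List): evaluate a suspension by one step.
    force(ints_from(N), L) :- ints_from(N, L).

    % take(+K, +Stream, -Prefix): demand the first K elements, forcing
    % suspensions only as needed.
    take(0, _, []) :- !.
    take(K, [X | Susp], [X | Xs]) :-
        K > 0,
        K1 is K - 1,
        force(Susp, Rest),
        take(K1, Rest, Xs).

    % ?- ints_from(0, S), take(5, S, P).   % P = [0,1,2,3,4]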
 
In “Linear time algorithms for testing the satisfiability of propositional Horn formulae” (J. Logic Programming, 1984), Dowling and Gallier have presented two linear-time algorithms for checking the satisfiability of a propositional Horn formula. In this note we show that one of these algorithms, the top-down one, may under particular circumstances not give the correct answer, and we propose a correct version of the algorithm which also runs in linear time.
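For reference, the specification that any such algorithm must meet can be stated as a naive bottom-up fixpoint: a propositional Horn formula is satisfiable iff the atom false is not in the least model of its clauses. The sketch below is our own (and quadratic rather than linear-time); it encodes a formula as a list of clause(Head, Body) terms, with Head an atom or false and Body a list of atoms.

    % horn_sat(+Clauses): succeeds iff the Horn formula is satisfiable,
    % i.e. the least model does not contain false.
    horn_sat(Clauses) :-
        least_model(Clauses, [], Model),
        \+ memberchk(false, Model).

    % least_model(+Clauses, +Known, -Model): repeatedly add the head of any
    % clause whose body is already satisfied, until nothing new can be added.
    least_model(Clauses, Known, Model) :-
        member(clause(H, Body), Clauses),
        \+ memberchk(H, Known),
        forall(member(B, Body), memberchk(B, Known)),
        !,
        least_model(Clauses, [H | Known], Model).
    least_model(_, Model, Model).

    % ?- horn_sat([clause(p, []), clause(q, [p]), clause(false, [q, r])]).  % true
    % ?- horn_sat([clause(p, []), clause(false, [p])]).                     % false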
 
In many modern high-level programming languages, the exact low-level representation of data objects cannot always be predicted at compile-time. Implementations usually get around this problem using descriptors (“tags”) and/or indirect (“boxed”) representations. However, the flexibility so gained can come at the cost of significant performance overheads. The problem is especially acute in dynamically typed languages, where both tagging and boxing are necessary in general. This paper discusses a straightforward approach to using untagged and unboxed values in dynamically typed languages. An implementation of our algorithms allows a dynamically typed language to attain performance close to that of highly optimized C code on a variety of benchmarks (including many floating-point intensive computations) and dramatically reduces heap usage.
 
Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is more portable from one Prolog system to another, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The main argument usually put forward in favor of compiled code instrumentation is its supposed efficiency. We have compared the performances of two operational low-level Prolog tracers with source instrumentation. We have executed classical Prolog benchmark programs, collecting trace information without displaying it. On average, collecting trace information by program instrumentation is about as fast as using a low-level tracer in one case, and only about twice as slow in the other. This is a minor penalty to pay, compared to the advantages of the approach. To our knowledge, this is the first time that a quantitative comparison of both approaches has been made for any programming language.
 
A well-known problem with PROLOG-style interpreters that perform goal reduction is the possibility of entering an infinite recursion, due to a subgoal being “essentially the same” as one of its ancestors. This is informally called a “loop”. We describe the tortoise-and-hare technique for detecting such loops. This technique has low overhead: a constant amount of time and space per goal reduction step. Therefore it should be practical to incorporate into high-performance interpreters. We discuss the special considerations needed for correct implementation in an interpreter that uses tail-recursion optimization. The issue of what to do when a loop or potential loop has been detected has been investigated elsewhere. We review these results, and conclude that loop detection is probably more useful as a debugging tool than as an extension to the power of the language.
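To make the problem concrete, the sketch below shows the naive loop check that the paper's technique improves upon: a vanilla meta-interpreter that carries the list of ancestor goals and fails any subgoal that is a variant of one of its ancestors. This is our own contrast example, not the constant-overhead tortoise-and-hare scheme itself, which avoids scanning the ancestor chain on every step; it assumes program clauses are visible to clause/2 and uses SWI-Prolog's =@= variant test.

    % solve(+Goal, +Ancestors): SLD-style interpretation with a naive loop check.
    solve(true, _) :- !.
    solve((A, B), Anc) :- !, solve(A, Anc), solve(B, Anc).
    solve(G, Anc) :-
        \+ ( member(A, Anc), G =@= A ),   % "essentially the same" = variant of an ancestor
        clause(G, Body),
        solve(Body, [G | Anc]).

    % ?- solve(p(X), []).   % run a goal with an initially empty ancestor list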
 
We discuss procedural semantics and the inference of negated ground atoms in elementary formal systems (EFSs). An EFS can be viewed as a logic program with associative unification. There are two problems with SLD-resolution when we infer negated atoms. One is the existence of infinitely many maximally general unifiers for two atoms. This prevents us from finding proper completed definitions of EFSs corresponding to the negation as failure rule. The other problem is the existence of infinite derivations, which makes it difficult to reject atoms not in the least Herbrand model. In this note, we give solutions for these problems. When we use SLD-resolution to accept formal languages defined by EFSs, we assume that every refutation begins from a ground goal. Under this assumption we solve the first problem by introducing the variable-bounded EFS, which is powerful enough to define languages. The solution for the second problem is to bound the length of every SLD-derivation. We present it as an algorithm to decide whether a ground atom is in the least Herbrand model or not. We introduce the weakly reducing EFS as a class of EFSs for which our algorithm is a complete realization of the closed world assumption.
 
An operational and a minimal model semantics for logic programming modules are introduced. It is shown that this semantics corresponds to the recursion-theoretic notion of an enumeration operator. Basic operations on modules, such as composition and recursion, are discussed. The adequacy of these operations is established by showing that all logic programming can be done, in principle, by combining certain elementary modules using these basic operations.
 
OBJ is a declarative language, with mathematical semantics given by order-sorted equational logic and an operational semantics based on order-sorted term rewriting. OBJ also has user-definable abstract data types with mixfix syntax and a flexible type system that supports overloading and subtypes. In addition, OBJ has a powerful generic module mechanism, including nonexecutable “theories” as well as executable “objects”, plus “module expressions” that construct whole subsystems. Design and implementation choices for the OBJ interpreter are described here in detail.
 
This paper discusses issues in a sequential implementation of a subset-equational language, an extension of the equational programming paradigm for efficient treatment of set-valued functions. Subset assertions have the form f(terms)⊇expression, and in general, multiple subset assertions may be used to define a set-valued function f. They incorporate a collect-all capability, so that the meaning of a set-valued function f applied to argument terms is equal to the union of the respective sets defined by the different subset assertions for f. The universe of terms also includes set-valued terms; hence the matching operation between terms is set matching. The multiple matches arising from set matching effectively serve to iterate over the elements of sets, thus permitting many useful set operations to be stated nonrecursively. The main features of this implementation are: (1) compiling the commonly occurring forms of set patterns using instructions similar to the WAM instructions for PROLOG; (2) avoiding checks for duplicates and construction of intermediate sets in argument positions of functions when they distribute over union in these arguments; and (3) performing last-call optimization for both equational and subset assertions. An implementation of these ideas has been completed, and compiled code for typical program fragments is presented, as well as performance figures for the key optimizations.
 
The problem of unifying pairs of terms with respect to an equational theory (as well as detecting the unsatisfiability of a system of equations) is, in general, undecidable. In this work, we define a framework based on abstract interpretation for the (static) analysis of the unsatisfiability of equation sets. The main idea behind the method is to abstract the process of semantic unification of equation sets based on narrowing. The method consists of building an abstract narrower for equational theories, and executing the sets of equations to be detected for unsatisfiability in the approximated narrower. As an instance of our framework, we define a new analysis whose accuracy is enhanced by some simple loop-checking technique. This analysis can also be actively used for pruning the search tree of an incremental equational constraint solver, and can be integrated with other methods in the literature. Standard methods are shown to be an instance of our framework. To the best of our knowledge, this is the first framework proposed for approximating equational unification.
 
This paper is a contribution to the amalgamation of logic programming (as embodied in PROLOG) and functional programming (as embodied in various functional languages and their dialects). We investigate how equational rewriting, which we assume is an adequate model for functional programming, can be performed within the context of logic programming. The equational program plus the standard equality axioms (reflexivity, symmetry, transitivity, and substitutivity) is our standard of correctness: we regard it as a logic specification from which the result of any evaluation must be a logical consequence. Although the standard equality axioms plus the equations formally qualify as a PROLOG program, their use as such is computationally infeasible because the SLD-resolution search space contains many refutations yielding useless answers and many infinite branches. To obtain feasible evaluations conforming to our standard of correctness, we investigate two approaches: the interpretational one and the compilational one. In the interpretational approach we use as logic program the equations themselves, but replace the standard axioms of equality by suitably chosen logical consequences having the property that the PROLOG interpreter mimics equational rewriting without search. In the compilational approach we obtain an efficient PROLOG program by translating the equations to a set of Horn clauses not involving equality and discarding the equality axioms altogether. We prove correctness for both approaches.
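The compilational approach can be illustrated on a single example (our own flattening of the usual append equations, not the paper's exact translation scheme): each n-ary function becomes an (n+1)-ary predicate whose extra argument holds the result, and nested applications are flattened into conjunctions.

    % The equations
    %   append([], Ys) = Ys
    %   append([X|Xs], Ys) = [X | append(Xs, Ys)]
    % become Horn clauses with an extra result argument, so Prolog's
    % resolution performs the rewriting.
    append_eq([], Ys, Ys).
    append_eq([X | Xs], Ys, [X | Zs]) :- append_eq(Xs, Ys, Zs).

    % A nested functional term, double_append(Xs, Ys, Zs) = append(append(Xs, Ys), Zs),
    % is flattened into a conjunction of calls: the inner application first.
    double_append(Xs, Ys, Zs, R) :-
        append_eq(Xs, Ys, T),
        append_eq(T, Zs, R).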
 
We show that the well-founded semantics and the stable semantics are equivalent on the class of order-consistent programs, which is a strict superclass of both the class of locally stratified programs and the class of call-consistent programs.
 
Despite the frequent comment that there is no general agreement on the semantics of logic programs, this paper shows that a number of independently proposed extensions to the stable model semantics coincide: the regular model semantics proposed by You and Yuan, the partial stable model semantics by Saccà and Zaniolo, the preferential semantics by Dung, and a stronger version of the stable class semantics by Baral and Subrahmanian. We show that these equivalent semantics can be characterized simply as selecting a particular kind of stable classes, called normal alternating fixpoints. In addition, we indicate that almost all of the previously proposed semantic frameworks coincide with that of normal alternating fixpoints. Due to its simplicity and naturalness, the framework of normal alternating fixpoints offers great potential in the study of the semantics for various nonmonotonic systems.
 
The fundamental relation between a program P and its specification S is correctness: P satisfies S if and only if P is correct with respect to S. In logic programming, this relationship can be particularly close, since logic can be used to express both specifications and programs. Indeed logic programs are often regarded and used as (executable) specifications themselves. In this paper, we argue that the relation between S and P should be firmly set in the context of the underlying problem domain, which we call a framework F, and we give a model-theoretic view of the correctness relation between specifications and programs in F. We show that the correctness relation between S and P is always well-defined. It thus provides a basis for properly distinguishing between S and P. We use the subset example throughout to illustrate our model-theoretic approach.
 
Resolution has been used as a specialisation operator in several approaches to top-down induction of logic programs. This operator allows the overly general hypothesis to be used as a declarative bias that restricts not only what predicate symbols can be used in produced hypotheses, but also how the predicates can be invoked. The two main strategies for top-down induction of logic programs, Covering and Divide-and-Conquer, are formalised using resolution as a specialisation operator, resulting in two strategies for performing example-guided unfolding. These strategies are compared both theoretically and experimentally. It is shown that the computational cost grows quadratically in the size of the example set for Covering, while it grows linearly for Divide-and-Conquer. This is also demonstrated by experiments, in which the amount of work performed by Covering is up to 30 times the amount of work performed by Divide-and-Conquer. The theoretical analysis shows that the hypothesis space is larger for Covering, and thus more compact hypotheses may be found by this technique than by Divide-and-Conquer. However, it is shown that for each non-recursive hypothesis that can be produced by Covering, there is an equivalent hypothesis (w.r.t. the background predicates) that can be produced by Divide-and-Conquer. A major drawback of Divide-and-Conquer, in contrast to Covering, is that it is not applicable to learning recursive definitions.
 
This paper presents the implementation and performance results of an and-parallel execution model of logic programs on a shared-memory multiprocessor. The execution model is meant for logic programs with “don't-know nondeterminism”, and handles binding conflicts by dynamically detecting dependencies among literals. The model also incorporates intelligent backtracking at the clause level. Our implementation of this model is based upon the Warren Abstract Machine (WAM); hence it retains most of the efficiency of the WAM for sequential segments of logic programs. Performance results on Sequent Balance 21000 show that on suitable programs, our parallel implementation can achieve linear speedup on dozens of processors. We also present an analysis of different overheads encountered in the implementation of the execution model.
 
A method for parallel execution of logic programs is presented. It uses REDUCE-OR trees instead of AND-OR or SLD trees. The REDUCE-OR trees represent logic-program computations in a manner suitable for parallel interpretation. The REDUCE-OR process model is derived from the tree representation by providing a process interpretation of tree development, and devising efficient bookkeeping mechanisms and algorithms. The process model is complete—it produces any particular solution eventually—and extracts full OR parallelism. This is in contrast to most other schemes that extract AND parallelism. It does this by solving the problem of interaction between AND and OR parallelism effectively. An important optimization that effectively controls the apparent overhead in the process model is given. Techniques that trade parallelism for reducing overhead are also described.
 
This paper illustrates the use of a top-down framework to obtain goal independent analyses of logic programs, a task which is usually associated with the bottom-up approach. While it is well known that the bottom-up approach can be used, through the magic set transformation, for goal dependent analysis, it is less known that the top-down approach can be used for goal independent analysis. The paper describes two ways of doing the latter. We show how the results of a goal independent analysis can be used to speed up subsequent goal dependent analyses. However this speed-up may result in a loss of precision. The influence of domain characteristics on this precision is discussed and an experimental evaluation using a generic top-down analyzer is described. Our results provide intuition regarding the cases where a two phase analysis might be worthwhile.
 
This paper introduces extended programs and extended goals for logic programming. A clause in an extended program can have an arbitrary first-order formula as its body. Similarly, an extended goal can have an arbitrary first-order formula as its body. The main results of the paper are the soundness of the negation as failure rule and SLDNF-resolution for extended programs and goals. We show how the increased expressibility of extended programs and goals can be easily implemented in any PROLOG system which has a sound implementation of the negation as failure rule. We also show how these ideas can be used to implement first-order logic as a query language in a deductive database system. An application to integrity constraints in deductive database systems is also given.
 
LogiMOO is a BinProlog-based Virtual World running under Netscape and Internet Explorer for distributed group-work over the Internet and user-crafted virtual places, virtual objects and agents. LogiMOO is implemented on top of a multi-threaded blackboard-based logic programming system (BinProlog) featuring Linda-style coordination. Remote and local blackboards support transparent distribution of data and processing over TCP/IP links, while threads ensure high-performance local client-server dynamics. Embedding in Netscape provides advanced VRML and HTML frame-based navigation and multi-media support, while LogiMOO handles virtual presence and acts as a very high-level multi-media object broker. User-friendliness is achieved through a controlled English interface written in terms of Assumption Grammars. Its language coverage is extensible in that the user can incorporate new nouns, verbs and adjectives as needed by changes in the world. Immediate evaluation of world knowledge by the parser yields representations which minimize the unknowns, allowing us to deal with advanced natural language constructs like anaphora and relativization efficiently. We take advantage of the simplicity of our controlled language to provide an easy adaptation to natural languages other than English as well, with English-like representations as a universal interlingua.
 
An extension of PROLOG called N-PROLOG is presented. N-PROLOG allows hypothetical implications in the clauses. For clauses without implication, N-PROLOG acts like PROLOG. Examples are given to show the need for N-PROLOG. N-PROLOG is a self-reflecting language; it is equal to its own metalanguage. N-PROLOG is more suitable for expressing temporal behavior (change in time). Ordinary PROLOG is conceptually weaker than N-PROLOG.
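The idea of hypothetical implication can be sketched with a small meta-interpreter (our own illustration; N-PROLOG itself is considerably richer): to prove A => G, temporarily assume A and prove G. Only atomic hypotheses are handled here, and the example predicates wet/0 and slippery/0 are invented and declared dynamic so that clause/2 can inspect them portably.

    :- op(1050, xfy, =>).           % hypothetical implication for goals
    :- dynamic wet/0, slippery/0.
    slippery :- wet.

    % prove(+Hypotheses, +Goal): proving (A => G) proves G after temporarily
    % assuming the atom A.
    prove(_, true) :- !.
    prove(Hyp, (A, B)) :- !, prove(Hyp, A), prove(Hyp, B).
    prove(Hyp, (A => G)) :- !, prove([A | Hyp], G).
    prove(Hyp, G) :- member(G, Hyp).
    prove(Hyp, G) :- clause(G, Body), prove(Hyp, Body).

    % ?- prove([], (wet => slippery)).   % succeeds, although wet is not a fact
    % ?- prove([], slippery).            % fails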
 
We introduce the foundational issues involved in incorporating the negation as failure (NAF) rule into the framework of first-order hereditary Harrop formulae of Miller et al. This is a larger class of formulae than Horn clauses, and so the technicalities are more intricate than in the Horn clause case. As programs may grow during execution in this framework, the role of NAF and the closed world assumption (CWA) need some modification, and for this reason we introduce the notion of a completely defined predicate, which may be thought of as a localisation of the CWA. We also show how this notion may be used to define a notion of NAF for a more general class of goals than literals alone. We also show how an extensional notion of universal quantification may be incorporated. This makes our framework somewhat different from that of Miller et al., but not essentially so. We also show how to construct a Kripke-like model for the extended class of programs. This is essentially a denotational semantics for logic programs, in that it provides a mapping from the program to a pair of sets of atoms that denote the success and (finite) failure sets. This is inspired by the work of Miller on the semantics of first-order hereditary Harrop formulae. Note that no restriction on the class of programs is needed in this approach, and that our construction needs no more than ω iterations. This necessitates a slight departure from the standard methods, but the important properties of the construction still hold.
 
The class of logic programs with negation as failure in the head is a subset of the logic of MBNF introduced by Lifschitz and is an extension of the class of extended disjunctive programs. An interesting feature of such programs is that the minimality of answer sets does not hold. This paper considers the class of general extended disjunctive programs (GEDPs) as logic programs with negation as failure in the head. First, we argue that the class of GEDPs is useful for representing knowledge in various domains in which the principle of minimality is too strong. In particular, the class of abductive programs is properly included in the class of GEDPs. Other applications include the representation of inclusive disjunctions and circumscription with fixed predicates. Secondly, the semantic nature of GEDPs is analyzed by the syntax of programs. In acyclic programs, negation as failure in the head can be shifted to the body without changing the answer sets of the program. On the other hand, supported sets of any program are always preserved by the same transformation. Thirdly, the computational complexity of the class of GEDPs is shown to remain in the same complexity class as normal disjunctive programs. Through the simulation of negation as failure in the head, computation of answer sets and supported sets is realized using any proof procedure for extended or positive disjunctive programs. Finally, a simple translation of GEDPs into autoepistemic logic is presented.
 
Order-sorted feature (OSF) terms provide an adequate representation for objects as flexible records. They are sorted, attributed, possibly nested structures, ordered thanks to a subsort ordering. Sort definitions offer the functionality of classes, imposing structural constraints on objects. These constraints involve variable sorting and equations among feature paths, including self-reference. Formally, sort definitions may be seen as axioms forming an OSF theory. OSF theory unification is the process of normalizing an OSF term taking into account sort definitions, enforcing structural constraints imposed by an OSF theory. It allows objects to inherit, and thus abide by, constraints from their classes. We propose a formal system that logically models record objects with (possibly recursive) class definitions accommodating multiple inheritance. We show that OSF theory unification is undecidable in general. However, we give a set of confluent normalization rules which is complete for detecting the inconsistency of an object with respect to an OSF theory. Furthermore, a subset consisting of all rules but one is confluent and terminating. This yields a practical complete normalization strategy, as well as an effective compilation scheme.
 
An extended logic programming language is presented, that embodies the fundamental form of set designation based on the (nesting) element insertion operator. The kind of sets to be handled is characterized both by adaptation of a suitable Herbrand universe and via axioms. Predicates ∈ and = designating set membership and equality are included in the base language, along with their negative counterparts ∉ and ≠. A unification algorithm that can cope with set terms is developed and proved correct and terminating. It is proved that by incorporating this new algorithm into SLD resolution and providing suitable treatment of ∈, ≠, and ∉ as constraints, one obtains a correct management of the distinguished set predicates. Restricted universal quantifiers are shown to be programmable directly in the extended language and thus are added to the language as a convenient syntactic extension. A similar solution is shown to be applicable to intensional set-formers, provided either a built-in set collection mechanism or some form of negation in goals and clause bodies is made available.
 
Definite-clause grammars (DCGs) generalize context-free grammars in such a way that Prolog can be used as a parser in the presence of context-sensitive information. Prolog's proof procedure, however, is based on backtracking, which may be a source of inefficiency. Parsers for context-free grammars that use backtracking, for instance, were soon replaced by more efficient methods, such as LR parsers. This suggests incorporating the principles underlying LR parsing into a parser for grammars with context-sensitive information. We present a technique that applies a transformation to the program/grammar by adding leaves to the proof/parse trees and placing the contextual information in such leaves. An inference system is then easily obtained from an LR parser, since only the parts dealing with terminals (which appear at the leaves) must be modified. Although our method is restricted to programs with fixed modes, it may be preferable to DCGs under Prolog for some programs.
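To make concrete what "context-sensitive information" means in a DCG, here is a tiny grammar with number agreement threaded through nonterminal arguments. It is our own toy example, unrelated to the paper's LR-based transformation, and runs directly under Prolog's standard DCG support.

    % A sentence requires its noun phrase and verb phrase to agree in number.
    sentence       --> noun_phrase(Num), verb_phrase(Num).
    noun_phrase(N) --> determiner(N), noun(N).
    verb_phrase(N) --> verb(N).

    determiner(sing) --> [a].      determiner(_)  --> [the].
    noun(sing)       --> [dog].    noun(plur)     --> [dogs].
    verb(sing)       --> [barks].  verb(plur)     --> [bark].

    % ?- phrase(sentence, [the, dogs, bark]).   % succeeds
    % ?- phrase(sentence, [a, dogs, bark]).     % fails: agreement violated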
 
We suggested in Part I of this study a general logical formalism for Logic Programming based on a four-valued inference. In this paper we give a uniform representation of various semantics for logic programs based on this formalism. The main conclusion from this representation is that the distinction between these semantics can be largely attributed to the difference in their underlying (monotonic) logical systems. Moreover, in most cases the difference can even be reduced to that of the language, that is, to the difference in the logical connectives allowed for representing derivable information.
 
We describe a novel logic, called HiLog, and show that it provides a more suitable basis for logic programming than does traditional predicate logic. HiLog has a higher-order syntax and allows arbitrary terms to appear in places where predicates, functions, and atomic formulas occur in predicate calculus. But its semantics is first-order and admits a sound and complete proof procedure. Applications of HiLog are discussed, including DCG grammars, higher-order and modular logic programming, and deductive databases.
 
This article contains the theoretical foundations of LPTP, a logic program theorem prover that has been implemented in Prolog by the author. LPTP is an interactive theorem prover in which one can prove correctness properties of pure Prolog programs that contain negation and built-in predicates. The largest example program that has been verified using LPTP is 635 lines long including its specification. The full formal correctness proof is 13,128 lines long (133 pages). The formal theory underlying LPTP is the inductive extension of pure Prolog programs. This is a first-order theory that contains induction principles corresponding to the definition of the predicates in the program plus appropriate axioms for built-in predicates. The inductive extension allows one to express modes and types of predicates. These can then be used to prove termination and correctness properties of programs. The main result of this article is that the inductive extension is an adequate axiomatization of the operational semantics of pure Prolog with built-in predicates.
 
We introduce global SLS-resolution, a procedural semantics for well-founded negation as defined by Van Gelder, Ross, and Schlipf. Global SLS-resolution extends Przymusinski's SLS-resolution and may be applied to all programs, whether locally stratified or not. Global SLS-resolution is defined in terms of global trees, a new data structure representing the dependence of goals on derived negative subgoals. We prove that global SLS-resolution is sound with respect to the well-founded semantics and complete for nonfloundering queries. Although not effective in general, global SLS-resolution is effective for classes of “acyclic” programs and can be augmented with a memoing device to be effective for all function-free programs.
 
Based on the search forest for positive programs as defined by Bol and Degerstedt, we define a tabulation-based framework that is sound and complete (when floundering does not occur) w.r.t. the well-founded semantics. In contrast to SLS-resolution as proposed by Przymusinski and by Ross, a positivistic computation rule is not required. Moreover, unlike SLG-resolution due to Chen and Warren, our proposal relies on tabulation for both positive and negative recursion without losing the clear separation of the search space from search strategies. In particular, the newly proposed search forest is finite for nonfloundering functor-free programs.
 
There are numerous papers concerned with the compile-time derivation of certain run-time properties of logic programs, e.g. mode inferencing, type checking, type synthesis, and properties relevant for and-parallel execution. Most approaches have little in common; they are developed in an ad hoc way, and their correctness is not always obvious. We develop a general framework which is suited to develop complex applications and to prove their correctness. All states which are possible at run time can be represented by an infinite set of proof trees (AND trees, SLD derivations). The core idea of our approach is to represent this infinite set of AND trees by a finite abstract AND-OR graph. We present a generic abstract interpretation procedure for the construction of such an abstract AND-OR graph and formulate conditions which allow us to construct a correct one in finite time.
 
This paper discusses the relationship between tabulation and goal-oriented bottom-up evaluation of logic programs. Some differences emerge when one tries to identify features of one evaluation method in the other. We show that to obtain the same effect as tabulation in top-down evaluation, one has to perform a careful adornment of programs to be evaluated bottom-up. Furthermore, we propose an efficient algorithm to perform subsumption checking over adorned magic facts. Soundness and completeness of the subsumption algorithm are proved. With the aim of substantiating the improvements claimed for this proposal, several program evaluations are presented.
 
It is well known that propositional formulas form a useful and computationally efficient abstract interpretation for different data-flow analyses of logic programs and, in particular, for groundness analysis. This article gives a complete and precise description of an abstract interpretation, called Prop, composed of a domain of positive, propositional formulas and three operations: abstract unification, least upper bound, and abstract projection. All three abstract operations are known to be correct. They are shown to be optimal in the classical sense. Two alternative stronger notions of optimality of abstract operations are introduced, which characterize very precise analyses. We determine whether the operations of Prop also satisfy these stronger forms of optimality.
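As a concrete instance of the domain (the standard textbook groundness example, not taken from this article): in Prop, a predicate's success pattern is described by a positive propositional formula over its argument positions, where a propositional variable is read as "the corresponding argument is bound to a ground term".

    append(X1, X2, X3)  is described by  X3 ↔ (X1 ∧ X2)
    abstract unification of the binding  X = f(Y, Z)  contributes  X ↔ (Y ∧ Z)

So the analysis infers, for example, that the third argument of append/3 is ground exactly when the first two are.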
 
Most Prolog machines have been based on specialized architectures. Our goal is to start with a general-purpose architecture and determine a minimal set of extensions for high-performance Prolog execution. We have developed both the architecture and optimizing compiler simultaneously, drawing on results of previous implementations. We find that most Prolog-specific operations can be done satisfactorily in software; however, there is a crucial set of features that the architecture must support to achieve the best Prolog performance. In this paper, the costs and benefits of special architectural features and instructions are analyzed. In addition, we study the relationship between the strength of compiler optimization and the benefit of specialized hardware. We demonstrate that our base architecture can be extended to include explicit support for Prolog with modest increase in chip area (13%), and yet attain a significant performance benefit (60–70%). Experiments using optimized code that approximates the output of future optimizing compilers indicate that special hardware support can still provide a performance benefit of 30–35%. The microprocessor described here, the VLSI-BAM, has been fabricated and incorporated into a working test system.
 
Top-cited authors
Michael J. Maher
  • UNSW Sydney
Joxan Jaffar
  • National University of Singapore
Stephen Muggleton
  • Imperial College London
Luc De Raedt
  • KU Leuven
Melvin Fitting
  • CUNY Graduate Center