Article

A Basis for Deductive Database Systems.

Authors: J. W. Lloyd, R. W. Topor

Abstract

This paper provides a theoretical basis for deductive database systems. A deductive database consists of closed typed first order logic formulas of the form A ← W, where A is an atom and W is a typed first order formula. A typed first order formula can be used as a query, and a closed typed first order formula can be used as an integrity constraint. Functions are allowed to appear in formulas. Such a deductive database system can be implemented using a PROLOG system. The main results are the soundness of the query evaluation process, the soundness of the implementation of integrity constraints, and a simplification theorem for implementing integrity constraints. A short list of open problems is also presented.
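As an illustrative sketch (the department example and predicate names are invented here, not taken from the paper), a rule A ← W whose body W is a genuine first-order formula can be run on a Prolog system by expressing the quantified body through negation as failure:

    % Extensional facts
    dept(sales).
    works_in(ann, sales).
    works_in(bob, sales).
    has_terminal(ann).
    has_terminal(bob).

    % Rule of the form A <- W, where W is the typed first-order formula
    %   (forall E)(works_in(E, D) -> has_terminal(E));
    % the universal implication is coded with double negation as failure:
    fully_equipped(D) :-
        dept(D),
        \+ ( works_in(E, D), \+ has_terminal(E) ).

    % Query:  ?- fully_equipped(D).   gives D = sales.
    % Integrity constraint: the closed formula "every employee has a terminal"
    % is checked by requiring the goal  works_in(E, _), \+ has_terminal(E)
    % to fail in every database state.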


... Theoretical investigations of the completion semantics continued, and were highlighted in John Lloyd's [1985, 1987] influential Foundations of Logic Programming book, which included results from Keith Clark's [1980] unpublished PhD thesis. Especially important among the later results were the three-valued completion semantics of Fitting [1985] and Kunen [1987], which gives, for example, the truth value undefined to p in the program p ← not p, whose completion is inconsistent in two-valued logic. ...
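A minimal sketch of the program mentioned in the excerpt above, and why its two-valued completion fails:

    % The single-clause program p <- not p, written with negation as failure:
    p :- \+ p.
    % Its Clark completion is the biconditional p <-> ~p, which has no
    % two-valued model; under the three-valued completion semantics of
    % Fitting and Kunen, p receives the truth value undefined. Operationally,
    % the query ?- p. does not terminate under SLDNF.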
... The requirement that P ∪ ∆ satisfies I is even more problematic. The problem was already the subject of extensive debate in the 1980s in the context of a deductive database D. The alternatives included the interpretation that the constraints I are consistent with D [Kowalski, 1978], that they are consistent with comp(D) [Sadri and Kowalski, 1988], that they are logically implied by comp(D) [Reiter, 1984; Lloyd and Topor, 1985], and that they are epistemic sentences that are true in D [Reiter, 1988]. ...
Chapter
Full-text available
Volume 9, Computational Logic (Joerg Siekmann, editor). In the History of Logic series, edited by Dov Gabbay and John Woods, Elsevier, 2014, pp 523-569.
... Deductive databases can be regarded as an outcome of the marriage between logic programming and relational databases [Llo85,CGT90]. Over the last decade efforts have been made to extend logic programming. ...
... • Syntactically a first order language will be used for the rules and queries as proposed for deductive databases in [Llo85]. The language will contain normal logical connectives and special application dependent combinators. ...
... We adopt the rule based approach from [Llo85] which means that a deductive database becomes a set of rules. Rules are defined to have the form P ← A where P is an atom and A is a formula. ...
Article
Full-text available
There are many areas where the reasoning capabilities of pure DATALOG¬ are not sufficient. In the following a framework is presented that extends DATALOG¬ in a natural way by employing lattice-valued logic. The framework consists of a model-theoretic semantics for lattice-valued deductive databases, and it is shown how the notion of allowedness can be generalized. The framework is conservative in the sense that the semantics and the notion of allowedness coincide with those of DATALOG¬ in the boolean case. At the end of the paper, applications of the framework to temporal, quantitative and paraconsistent databases are given.
... An extensive amount of research addresses the task of improving integrity checking such that it takes advantage of the fact that a database was consistent prior to any particular update and only verifies the relevant parts of a database. Some references to this line of work during the 80's are [9,17,19,53,55,71], with an overview offered in [11]. A clear exposition of the main issues in update propagation can be found in [38], while [14] compares the efficiency of some major strategies on a range of examples. ...
... Also, from now on, mgu(A, B) represents an idempotent, most general unifier of the set {A, B′}, where B′ is obtained from B by standardising apart. This small technical point was overlooked in [53,55], meaning that the results therein are incorrect and one should use this mgu, with standardising apart, instead of the plain mgu. A small discussion of this point is provided below in the proof of Lemma 2.7. ...
... Note that this is not the case if we use just the mgu without standardising apart. As already pointed out, this has been overlooked in [53,55]. Take for instance L′ = p(a, Y, b, X) and the substitution {X/Y, Y/X}. ...
Article
Integrity constraints are useful for the specification of deductive databases, as well as for inductive and abductive logic programs. Verifying integrity constraints upon updates is a major efficiency bottleneck, and specialised methods have been developed to speed up this task. They can, however, still incur a considerable overhead. In this paper we propose a solution to this problem by using partial evaluation to pre-compile the integrity checking for certain update patterns. The idea being that a lot of the integrity checking can already be performed given an update pattern without knowing the actual, concrete update. In order to achieve the pre-compilation, we write the specialised integrity checking as a meta-interpreter in logic programming. This meta-interpreter incorporates the knowledge that the integrity constraints were not violated prior to a given update. By partially evaluating this meta-interpreter for certain transaction patterns, using a partial evaluation technique presented in earlier work, we are able to automatically obtain very efficient specialised update procedures, executing faster than other integrity checking procedures proposed in the literature.
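As a rough sketch of the kind of meta-interpreter such pre-compilation starts from (this is only the textbook "vanilla" interpreter; the paper's actual interpreter additionally encodes the update pattern and the assumption that the constraints held before the update):

    % Vanilla meta-interpreter for normal programs; specialised integrity
    % checking is typically written as an elaboration of this scheme.
    % (Assumes the object program is loaded as dynamic clauses so that
    % clause/2 may inspect it.)
    solve(true) :- !.
    solve((A, B)) :- !, solve(A), solve(B).
    solve(\+ A) :- !, \+ solve(A).
    solve(A) :- clause(A, Body), solve(Body).

Partially evaluating solve/1 with respect to a transaction pattern then yields a specialised update procedure for that pattern.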
... Consistency and entailment approaches were applied to closed databases (see [40,33] for consistency and [27] for entailment), where ICs are satisfied if Clark's completion [14] of A ∪ S is consistent with (resp. entails) S. Such approaches, however, are applicable only to Prolog-like databases for which Clark's completion is defined; that is, they are not applicable to databases containing disjunction or existential quantification [38] and are therefore not applicable to most description logics. ...
... ICs were given formal semantics in first-order databases using the consistency [24] and the entailment [37] approaches, and these approaches were also applied to closed Prolog-like databases [40,33,27]. Reiter conducted a comprehensive study of the nature of ICs [38], in which he argued against all these approaches. ...
Article
Despite similarities between the Web Ontology Language (OWL) and schema languages traditionally used in relational databases, systems based on these languages exhibit quite different behavior in practice. The schema statements in relational databases are usually interpreted as integrity constraints and are used to check whether the data is structured according to the schema. OWL allows for axioms that resemble integrity constraints; however, these axioms are interpreted under the standard first-order semantics and not as checks. This often leads to confusion and is inappropriate in certain data-centric applications. To explain the source of this confusion, in this paper we compare OWL and relational databases w.r.t. their schema languages and basic computational problems. Based on this comparison, we extend OWL with integrity constraints that capture the intuition behind similar statements in relational databases. We show that, if the integrity constraints are satisfied, they need not be considered while answering a broad range of positive queries. Finally, we discuss several algorithms for checking integrity constraint satisfaction, each of which is suitable to different types of OWL knowledge bases.
... For such a risk analysis to work fully automatically, existing knowledge needs to be stored in a machine-processable manner. To this end we use logic programming [56], which provides the necessary formalisms for storing such information as facts in a deductive database system [57]. Further, using rules, such knowledge can be matched against an input model to identify potential flaws (for a detailed discussion see our related work [46]). ...
Conference Paper
Full-text available
Today's ongoing trend towards intense usage of web-service-based applications in daily business and everybody's daily life poses new challenges for security testing. Additionally, such applications mostly do not execute in their own runtime environment but instead are deployed in some data center, run alongside multiple other applications, and serve different purposes for sundry user domains with diverging security requirements. As a consequence, security testing also has to adapt to be able to meet the necessary requirements for each application in its domain and its specific security requirements. In addition, security testing needs to be feasible for both service providers and consumers. In our paper we identify drawbacks of existing security testing approaches and provide directions for meeting emerging challenges in future security testing approaches. We also introduce and describe the idea of language-oriented security testing, a novel testing approach building upon domain-specific languages and domain knowledge to meet future requirements in security testing.
... Note that the process of integrity checking involving a post-test consists in executing the update, checking the post-test and, if it fails to hold, correcting the database by performing a repair action, i.e., a rollback and optionally a modification of the update that will not violate integrity. Well-known post-test-based approaches are described in [76], [57], [46], [77], [55]. The second kind of approach to deal with integrity checking incrementally is to determine an integrity theory Σ to be evaluated in the old state, i.e., to predict without actually executing the update whether the new, updated state will be consistent with the integrity constraints. ...
Article
Integrity checking is a crucial issue, as databases change their instance all the time and therefore need to be checked continuously and rapidly. Decades of research have produced a plethora of methods for checking integrity constraints of a database in an incremental manner. However, not much has been said about when to check integrity. In this paper, we study the differences and similarities between checking integrity before an update (a.k.a. pre-test) or after (a.k.a. post-test) in order to assess the respective convenience and properties.
... These facts, rules and constraints are expressed in a logical language, that of first-order logic. A lot of formal work in this area has been done by Nicolas, Lloyd and others (see [17], [18], [19], [22], [23]). ...
... This extension is motivated by database applications of logic programming. For these applications, a number of studies [5,17,21,22] have investigated the nature of integrity constraints in logic programming and the development of efficient integrity checking methods. In all of these approaches integrity constraints are viewed as properties which a database or program must satisfy as it changes over the course of time. ...
Chapter
Full-text available
The linguistic style in which legislation is normally written has many similarities with the language of logic programming. However, examples of legal language taken from the British Nationality Act 1981, the University of Michigan lease termination clause, and the London Underground emergency notice suggest several ways in which the basic model of logic programming could usefully be extended. These extensions include the introduction of types, relative clauses, both ordinary negation and negation by failure, integrity constraints, metalevel reasoning and procedural notation. In addition to the resemblance between legislation and programs, the law has other important similarities with computing. It needs for example to validate legislation against social and political specifications, and it needs to organise, develop, maintain and reuse large and complex bodies of legal codes and procedures. Such parallels between computing and law suggest that it might be possible to transfer useful results and techniques in both directions between these different fields. One possibility explored in this paper is that the linguistic structures of an appropriately extended logic programming language might indicate ways in which the language of legislation itself could be made simpler and clearer.
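For illustration, a commonly quoted paraphrase (not the authors' exact formalisation) of Section 1-(1) of the British Nationality Act 1981 as a logic-program clause reads:

    % x acquires British citizenship by section 1.1 if
    %   x was born in the U.K. after commencement, and
    %   a parent of x was a British citizen on the date of x's birth.
    acquires_citizenship_by_1_1(X) :-
        born_in_uk(X, Date),
        after_commencement(Date),
        parent_of(Y, X),
        british_citizen_on(Y, Date).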
... Here, we focus on the case for the algebra of finite trees (or Herbrand domain) and finite number of function symbols. The Free Equality Theory is axiomatized by the usual equality axioms, the set of axioms E * (see [3]) and the Domain Closure Axiom (DCA), which was first defined in [14] for finite Herbrand domains and extended in [9] for infinite ones. FET L was shown to be decidable in [10] (see also [6]) and, besides, it is also a well-known result that FET L is non-elementary (see [7,16]). ...
Article
Full-text available
Most well-known algorithms for equational solving are based on quantifier elimination. This technique iteratively eliminates the innermost block of existential/universal quantifiers from prenex formulas whose matrices are in some normal form (mostly DNF). Traditionally used notions of normal form satisfy that every constraint (in normal form) different from false is trivially satisfiable. Hence, they are called solved forms. However, the manipulation of such constraints requires hard transformations, especially due to the use of the distributive and the explosion rules, which increase the number of constraints at intermediate stages of the solving process. On the contrary, quasi-solved forms allow for simpler transformations by means of a more compact representation of solutions, but their satisfiability test is not so trivial. Nevertheless, the total cost of checking satisfiability and manipulating constraints using quasi-solved forms is cheaper than using simpler solved forms. Therefore, they are suitable for improving the efficiency of constraint solving procedures. In this paper, we present a notion of quasi-solved form that provides a good trade-off between the cost of checking satisfiability and the effort required to manipulate constraints. In particular, our new quasi-solved form has been carefully designed for efficiently handling conjunction and negation, which are the main Boolean operations necessary to keep matrices of formulas in normal form.
... In particular, first order logic has a well-developed theory and can be used as a uniform language for data, programs, queries, views, and integrity constraints. One promising approach to implementing deductive database systems is to use a PROLOG system as the query evaluator [2,7,9,10,11,15,16]. This approach places some restrictions on the class of formulas that can be used in the database. ...
Article
This paper is the third in a series providing a theoretical basis for deductive database systems. A deductive database consists of closed typed first order logic formulas of the form A ← W, where A is an atom and W is a typed first order formula. A typed first order formula can be used as a query, and a closed typed first order formula can be used as an integrity constraint. Functions are allowed to appear in formulas. Such a deductive database system can be implemented using a PROLOG system. The main results of this paper are concerned with the nonfloundering and completeness of query evaluation. We also introduce an alternative query evaluation process and show that corresponding versions of the earlier results can be obtained. Finally, we summarize the results of the three papers and discuss the attractive properties of the deductive database system approach based on first order logic.
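A small illustration of the floundering problem that these nonfloundering results address (the database and query are invented here):

    q(a).
    r(b).
    % If the negative literal is selected while X is still unbound,
    % SLDNF flounders on the query ?- p(X).
    p(X) :- \+ q(X), r(X).

    % Reordering the body so that X is ground before negation is applied
    % avoids floundering and yields the ground answer X = b:
    p2(X) :- r(X), \+ q(X).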
... For other forms of integrity constraints, that cannot be brought into the desired denial form by classical equivalence transformations, we can apply transformations analogous to the Lloyd-Topor transformations [39], using new auxiliary predicates in the program of our theory. These constraints would be transformed to define a new (arity zero) predicate false, and we would add at the end of any goal the extra condition of not false, i.e., of false*. ...
Article
Full-text available
This paper presents the framework of Abductive Constraint Logic Programming (ACLP), which integrates Abductive Logic Programming (ALP) and Constraint Logic Programming (CLP). In ACLP, the task of abduction is supported and enhanced by its non-trivial integration with constraint solving. This integration of constraint solving into abductive reasoning facilitates a general form of constructive abduction and enables the application of abduction to computationally demanding problems. The paper studies the formal declarative and operational semantics of the ACLP framework together with its application to various problems. The general characteristics of the computation of ACLP and of its application to problems are also discussed. Empirical results based on an implementation of the ACLP framework on top of the CLP language of ECLiPSe show that ACLP is computationally viable, with performance comparable to the underlying CLP framework on which it is built. In addition, our experiments show the natural ability for ACLP to accommodate easily and in a robust way new or changing requirements of the original problem. ACLP thus combines the advantages of modularity and flexibility of the high-level representation afforded by abduction together with the computational effectiveness of low-level specialised constraint solving.
... It requires the input formula to be in clausal form, i.e., a conjunction of disjunctions of unquantified literals. For general first-order formulae, a transformation to clausal form (e.g., [69]) includes Skolemization, which eliminates quantifiers and possibly introduces new constant symbols and new function symbols. ...
Article
In this paper we show how tree decomposition can be applied to reasoning with first-order and propositional logic theories. Our motivation is two-fold. First, we are concerned with how to reason effectively with multiple knowledge bases that have overlap in content. Second, we are concerned with improving the efficiency of reasoning over a set of logical axioms by partitioning the set with respect to some detectable structure, and reasoning over individual partitions either locally or in a distributed fashion. To this end, we provide algorithms for partitioning and reasoning with related logical axioms in propositional and first-order logic. Many of the reasoning algorithms we present are based on the idea of passing messages between partitions. We present algorithms for both forward (data-driven) and backward (query-driven) message passing. Different partitions may have different associated reasoning procedures. We characterize a class of reasoning procedures that ensures completeness and soundness of our message-passing algorithms. We further provide a specialized algorithm for propositional satisfiability checking with partitions. Craig's interpolation theorem serves as a key to proving soundness and completeness of all of these algorithms. An analysis of these algorithms emphasizes parameters of the partitionings that influence the efficiency of computation. We provide a greedy algorithm that automatically decomposes a set of logical axioms into partitions, following this analysis.
Article
This dissertation investigates a new approach to query languages inspired by structural recursion and by the categorical notion of a monad. A language based on these principles has been designed and studied. It is found to have the strength of several widely known relational languages but without their weaknesses. This language and its various extensions are shown to exhibit a conservative extension property, which indicates that the depth of nesting of collections in intermediate data has no effect on their expressive power. These languages also exhibit the finite-cofiniteness property on many classes of queries. These two properties provide easy answers to several hitherto unresolved conjectures on query languages that are more realistic than the flat relational algebra. A useful rewrite system has been derived from the equational theory of monads. It forms the core of a source-to-source optimizer capable of performing filter promotion, code motion, and loop fusion. Scanning routines and printing routines are considered as part of optimization process. An operational semantics that is a blending of eager evaluation and lazy evaluation is suggested in conjunction with these input-output routines. This strategy leads to a reduction in space consumption and a faster response time while preserving good total time performance. Additional optimization rules have been systematically introduced to cache and index small relations, to map monad operations to several classical join operators, to cache large intermediate relations, and to push monad operations to external servers. A query system Kleisli and a high-level query language CPL for it have been built on top of the functional language ML. Many of my theoretical and practical contributions have been physically realized in Kleisli and CPL. In addition, I have explored the idea of open system in my implementation. Dynamic extension of the system with new primitives, cost functions, optimization rules, scanners, and writers are fully supported. As a consequence, my system can be easily connected to external data sources. In particular, it has been successfully applied to integrate several genetic data sources which include relational databases, structured files, as well as data generated by special application programs.
Chapter
A meta-logic statement over a language is characterised by a formula in typed ω-order λ-calculus. The head of the λ-term corresponding to a meta-logic statement is always a constant when it is in normal form. It is shown that both deductive database statements and integrity constraints, particularly those involving aggregate operations, can be represented conveniently in meta-logic statements. A special case of Huet's unification algorithm is considered on the domain of meta-logic terms and a proof procedure based on this unification is presented to answer database queries and to verify integrity constraints. Although the main focus is on handling deductive database problems, the results of this paper deal indirectly with a metalevel extension of first-order logic programming.
Article
In order to extend the ability to handle incomplete information in a definite deductive database, a Horn clause-based system representing incomplete information as incomplete constants is proposed. By using the notion of incomplete constants the deductive database system handles incomplete information in the form of sets of possible values, thereby giving more information than null values. The resulting system extends Horn logic to express a restricted form of indefiniteness. Although a deductive database with this kind of incomplete information is, in fact, a subset of an indefinite deductive database system, it represents indefiniteness in terms of value incompleteness, and therefore it can make use of the existing Horn logic computation rules. The inference rules for such a system are presented, its model theory discussed, and a model theory of indefiniteness proposed. The theory is consistent with minimal model theory and extends its expressive power.
Article
Full-text available
The present prolegomena consist, as all indeed do, in a critical discussion serving to introduce and interpret the extended works that follow in this book. As a result, the book is not a mere collection of excellent papers in their own specialty, but provides also the basics of the motivation, background history, important themes, bridges to other areas, and a common technical platform of the principal formalisms and approaches, augmented with examples. In the introduction we whet the reader's interest in the field of logic programming and non--monotonic reasoning with the promises it offers and with its outstanding problems too. There follows a brief historical background to logic programming, from its inception to actuality, and its relationship to non--monotonic formalisms, stressing its semantical and procedural aspects. The next couple of sections provide motivating examples and an overview of the main semantics paradigms for normal programs (stable models and wel...
Article
Arithmetic circuits arise in the context of weighted logic programming languages, such as Datalog with aggregation, or Dyna. A weighted logic program defines a generalized arithmetic circuit – the weighted version of a proof forest, with nodes having arbitrary rather than boolean values. In this paper, we focus on finite circuits. We present a flexible algorithm for efficiently querying node values as they change under updates to the circuit’s inputs. Unlike traditional algorithms, ours is agnostic about which nodes are tabled (materialized), and can vary smoothly between the traditional strategies of forward and backward chaining. Our algorithm is designed to admit future generalizations, including cyclic and infinite circuits and propagation of delta updates.
Article
The field of deductive databases is considered to have started at a workshop in Toulouse, France. At that workshop, Gallaire, Minker and Nicolas stated that logic and databases was a field in its own right (see [174]). This was the first time that this designation was made. The impetus for this started approximately twenty-three years ago in 1976 when I visited Gallaire and Nicolas in Toulouse, France, which culminated in the Toulouse workshop in 1977. Ray Reiter was an attendee at the workshop and contributed two seminal articles to the book that resulted from the workshop. In this article I provide an assessment as to what has been achieved since the field started as a distinct discipline. I review developments in the field, assess contributions, consider the status of implementations of deductive databases and discuss future work needed in deductive databases. As noted in [298], the use of logic and deduction in databases started in the late 1960s. Prominent among developments was work by Levien and Maron, and by Kuhns. Green and Raphael [199] were the first to realize the importance of the Robinson Resolution Principle [369] for databases. Early uses of logic databases are reported upon in [299] and are not covered here. Detailed descriptions of many of the accomplishments made in the 1960s can be found in [311].
Article
A general game player automatically learns to play arbitrary new games solely by being told their rules. For this purpose games are specified in the general Game Description Language (GDL), a variant of Datalog with function symbols that uses a few game-specific keywords. A recent extension of basic GDL allows the description of nondeterministic games with any number of players who may have incomplete, asymmetric information. In this paper, we analyse the epistemic structure and expressiveness of this language in terms of modal epistemic logic and prove two main results: (1) The operational semantics of GDL entails that the situation at any stage of a game can be characterised by a multi-agent epistemic (i.e., S5-) model; (2) GDL is sufficiently expressive to model any situation that can be described by a (finite) multi-agent epistemic model.
Conference Paper
The paper proposes a new approach to verification of integrity constraints in relational databases. According to the relational database paradigm, integrity constraints express certain conditions that should be preserved by all instances of a given database. Usually these conditions are checked dynamically, when the database is updated. A static verification of integrity constraints, based on a technique of elimination of the second-order quantifiers is proposed and investigated in the current paper. The static approach allows one to verify whether given constraints have been preserved already during the database design phase. This results in better system performance, because no runtime checking is required when committing a statically verified transaction to the database.
Article
A general game player is a system that can play previously unknown games given nothing but their rules. Many of the existing successful approaches to general game playing require to generate some form of game-specific knowledge, but when current systems establish knowledge they rely on the approximate method of playing random sample matches rather than formally proving knowledge. In this paper, we present a theoretically founded and practically viable method for automatically verifying properties of games whose rules are given in the general Game Description Language (GDL). We introduce a simple formal language to describe game-specific knowledge as state sequence invariants, and we provide a proof theory for verifying these invariants with the help of Answer Set Programming. We prove the correctness of this method against the formal semantics for GDL, and we report on extensive experiments with a practical implementation of this proof system, which show that our method of formally proving knowledge is viable for the practice of general game playing.
Article
This paper studies the class of deductive databases for which the result of query evaluation is independent of the domains of variables in database clauses. We first describe the class of domain-independent formulas in proof-theoretic terms. A new recursive subclass of ‘allowed’ formulas is defined and its properties studied. It is shown that every allowed formula is domain-independent and that every domain-independent formula has an (almost) equivalent allowed formula. We then motivate and define the new concept of a domain-independent (deductive) database. We define a recursive class of ‘allowed’ databases, prove that two important subclasses of the allowed databases are domain-independent and that every domain-independent database has an equivalent allowed database (in a certain sense), describe when domain-closure axioms are unnecessary, and show that a natural query evaluation process for allowed formulas and databases has desirable properties.
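A small example of the distinction (the facts and predicate names are invented here):

    person(ann).
    person(bob).
    employee(ann).

    % Not allowed, and not domain-independent: in the query
    %   ?- \+ employee(X).
    % the variable X occurs only under negation, so its answers depend on the
    % assumed domain of X rather than on the stored facts alone (and the goal
    % flounders under SLDNF).
    %
    % Allowed, hence domain-independent:
    %   ?- person(X), \+ employee(X).      gives X = bob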
Article
Can the various logic-based theories (or "formal semantics") developed for the description of natural languages be identified with knowledge representation systems? The paper discusses critical problems facing any automatic language understanding system and gives an introduction to Discourse Representation Theory.
Article
In this paper, we propose an efficient technique to statically manage integrity constraints in object-oriented database programming languages. We place ourselves in the context of a simplified database programming language, close to O2, in which we assume that updates are undertaken by means of methods. An important issue when dealing with constraints is that of efficiency. A naïve management of such constraints can cause a severe floundering of the overall system. Our basic assumption is that the run-time checking of constraints is too costly to be undertaken systematically. Therefore, methods that are always safe with respect to integrity constraints should be proven so at compile time. The run-time checks should only concern the remaining methods. To that purpose, we propose a new approach, based on the use of predicate transformers combined with automatic theorem proving techniques, to prove the invariance of integrity constraints under complex methods. We then describe the current implementation of our prototype, and report some experiments that have been performed with it on non-trivial examples. The counterpart of the problem of program verification is that of program correction. Static analysis techniques can also be applied to solve that problem. We present a systematic approach to undertake the automatic correction of potentially unsafe methods. However, the advantages of the latter technique are not as clear as those of program verification. We will therefore discuss some arguments for and against the use of method correction. Though our work is developed in the context of object-oriented database programming languages, it can easily be applied to the problem of static verification and correction of object-oriented languages providing pre- and post-conditions such as Eiffel.
Article
The by now standard perspective on databases, especially deductive databases, is that they can be specified by sets of first order sentences. As such, they can be said to be claims about the truths of some external world; the database is a representation of that world. Virtually all approaches to database query evaluation treat queries as first order formulas, usually with free variables whose bindings resulting from the evaluation phase define the answers to the query. Following Levesque [8, 9], we argue that, for greater expressiveness, queries should be formulas in an epistemic modal logic. Queries, in other words, should be permitted to address aspects of the external world as represented by the database, as well as aspects of the database itself, i.e. aspects of what the database knows about that external world. We shall also argue that integrity constraints are best viewed as sentences about what the database knows, not, as is usually the case, as first order sentences about the external world. On this view, integrity constraints are modal sentences and hence are formally identical to a strict subset of the permissible database queries. Integrity maintenance then becomes formally identical to query evaluation for a certain class of database queries. We formalize these notions in Levesque’s language KFOPCE, and define the concepts of an answer to a query and of a database satisfying its integrity constraints. We also show that Levesque’s axiomatization of KFOPCE provides a suitable logic for reasoning about queries and integrity constraints. Next, we show how to do query evaluation and integrity maintenance for a restricted, but sizable class of queries/constraints. An interesting feature of this class of queries/constraints is that Prolog’s negation as failure mechanism serves to reduce query evaluation to first order theorem proving. This provides independent confirmation that negation as failure is really an epistemic operator in disguise. Finally, we provide sufficient conditions for the completeness of this query evaluator.
Article
This report is the result of the third phase of the technology project Quality of Expert Systems, carried out under the commission of the Ministry of Defence, Director Defence Research and Development. Participants in the project are TNO Physics and Electronics Laboratory (FEL-TNO), University of Limburg (RL) and the Research Institute for Knowledge Systems (RIKS). This report contains the results of an investigation into algorithms for integrity control of knowledgebases, with specific interest in computational efficiency. Algorithms for preserving consistency and integrity of the knowledgebase are compared. Another subject in this report is the way in which a mapping can be made between specifications made with E(xtended)NIAM and an executable Prolog program, in order to establish a formal determination of the consistency and integrity of knowledgebase specifications. Keywords: Netherlands, Translations.
Article
This particular study investigates the application of Prolog and the associated technology of deductive databases to the realm of modern algebra. A relatively small, yet diverse, collection of information was chosen for the feasibility study. The 56 non-abelian simple groups of order less than one million have been studied in depth. In print are several tables of information such as minimal generating pairs, presentations, character tables, and maximal subgroups. The information is very heterogeneous in nature, involving formulae, tables, lists, arbitrary precision integers, character strings, irrational numbers, and rules for deducing information from the given facts. While very much a feasibility study, the work to date demonstrates that the Prolog deductive database technology is appropriate. More primitive data types, such as irrational numbers, infinite precision numbers, and tables would improve the efficiency of Prolog in this domain. More work categorizing user queries and incorporating the necessary facts and rules to answer them is required.
Article
Full-text available
Classical treatment of consistency violations is to back out a database operation or transaction. In applications with large numbers of fairly complex consistency constraints this clearly is an unsatisfactory solution. Instead, if a violation is detected the user should be given a diagnosis of the constraints that failed, a line of reasoning on the cause that could have led to the violation, and suggestions for a repair. The problem is particularly complicated in a deductive database system where failures may be due to an inferred condition rather than simply a stored fact, but the repair can only be applied to the underlying facts. The paper presents a system which provides automated support in such situations. It concentrates on the concepts and ideas underlying the approach and an appropriate system architecture and user guidance, and sketches some of the heuristics used to gain in performance.
Article
Logic programming emerged in the early 1970s from a convergence of work in the fields of automated theorem proving (q.v.), artificial intelligence (q.v.), and formal languages (q.v.).
Article
Full-text available
This paper surveys a variety of deductive database theories. Such theories differ from one another in the set of axioms and metarules that they allow and use. The following theories are discussed: relational, Horn, and stratified in the text; protected, disjunctive, typed, extended Horn, and normal in the appendix. Connections with programming in terms of the declarative, fixpoint, and procedural semantics are explained. Negation is treated in several different ways: closed world, completed database, and negation as failure. For each theory examples are given and implementation issues are considered.
Article
Full-text available
One way to optimize integrity constraint checking is to detect, at compile time, the constraints which will not be violated after the execution of a transaction, in order to reduce the number of constraints to check. A first method, using the type information of the objects manipulated by a transaction combined with simple compilation techniques, makes it possible to reduce the number of constraints to be checked. However, with this method, some transactions are detected as potentially violating a constraint while in fact they will not. Although typing information is useful, it is not sufficient to reduce the set of constraints to be checked as much as possible. In this paper, we present a method based on abstract interpretation techniques, which detects more precisely the constraints to be checked after the execution of a transaction. In addition to type information, this method takes into account the state of the objects manipulated by a transaction, and in particular captures a partial semantics of the transaction (describing the way the objects are modified). Keywords: object-oriented databases, database programming languages, integrity constraints, optimization, test sets, program analysis, abstract interpretation.
Article
This paper is concerned with dynamic aspects of knowledge representation. A model for information processing is introduced which is based on acyclic directed graphs with starting points. This approach is combined with ideas from non-monotonic reasoning.
Chapter
In this paper a theoretical framework for efficiently checking the consistency of deductive databases is provided and proven to be correct. Our method is based on focussing on the relevant parts of the database by reasoning forwards from the updates of a transaction, and using this knowledge about real or just possible implicit updates for simplifying the consistency constraints in question. In contrast to the algorithms by Kowalski/Sadri and Lloyd/Topor, we are committed neither to determining the exact set of implicit updates nor to determining a fairly large superset of it by considering only the head literals of deductive rule clauses. Rather, our algorithm unifies these two approaches by allowing any of the above or even intermediate strategies to be chosen for any step of reasoning forwards. This flexibility makes it possible to integrate statistical data and knowledge about access paths into the checking process. Second, deductive rules are organized into a graph to avoid searching for applicable rules in the proof procedure. This graph resembles a connection graph; however, a new method of interpreting it avoids the introduction of new clauses and links.
Conference Paper
This paper describes the role of logic programming in the Fifth Generation Computer Project. We started the project with the conjecture that logic programming is the bridge connecting knowledge information processing and parallel computer architecture. Four years have passed since we started the project, and now we can say that this conjecture has been substantially confirmed. The paper gives an overview of the developments which have reinforced this foundational conjecture and how our bridge is being realized.
Article
An extension of the class of logic programs for which the Negation as Failure rule is equivalent to the Closed World Assumption is presented. This extension includes programs on infinite domains and with recursive definitions. For this class of programs with ground (positive or negative) queries the Query Evaluation Process is complete. A characterization of this class of programs is given, and a decision algorithm, which recognizes a subclass of these programs, is presented. The algorithm is based on showing the correspondence between a program and a primitive recursive function.
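A standard example of such a program, on an infinite domain and with a recursive definition (the example is mine, not taken from the paper):

    % Even numerals over the infinite Herbrand domain {0, s(0), s(s(0)), ...}
    even(0).
    even(s(s(X))) :- even(X).

    % For ground queries, negation as failure agrees here with the closed
    % world assumption:
    %   ?- even(s(s(0))).     succeeds
    %   ?- \+ even(s(0)).     succeeds, matching the CWA: even(s(0)) is false.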
Chapter
Full-text available
The paper gives a completeness result of SLDNF-resolution for a large class of logic programs. The characteristics of this class (structured programs) are mostly related to the possibility of always deciding whether a ground atom is or is not a logical consequence of a program (i.e. they are related to total functions). Another characteristic of structured programs is that they allow only ground substitutions to be computed. Most of the known completeness results (for example those for hierarchical programs) are special cases of this result. The class of structured programs is large enough to allow general programs with recursive definitions to be written.
Chapter
The functional programming language LML (for Logical Meta-Language) is presented. Like most trendy representatives of its category, LML is a higher-order, pattern-matched, polymorphically-typed, non-strict functional language. Its distinctive feature is the presence of a data type of theories, whose objects represent logic programs. Theories are equipped with operators for querying them, to obtain sets of answers, and combining them together in various different ways, to build more complex ones. Thus, from one perspective, LML may be viewed as yet another approach to the integration of functional and logic programming, aiming at amalgamating within a single, coherent framework the expressive power of both paradigms. From another perspective, however, LML may be viewed as a programming language for the construction of knowledge-based systems, in which chunks of knowledge are represented as logic theories. According to this perspective, the functional layer acts as a meta-level framework for the underlying logic programming component: theories are ordinary data values, which can be manipulated by suitable operators. The operators map theories into theories by acting upon their representations according to given formal semantics. This is the most novel aspect of the language, and provides mechanisms for a) the description of dynamic evolution of theories, and b) a modular approach to knowledge-based systems development. The paper presents the basic ideas underlying the design of the language, together with an example of its use.
Article
A problem with current database systems is the limitation placed on the type of data which may be represented and manipulated within such systems. In an attempt to broaden this to a wider class of data (i.e. rules as well as facts) and a more powerful set of manipulations, the concept of a deductive database was introduced. However, for the sake of efficiency the type of rule which is allowed in a deductive database is restricted in form. This paper surveys a number of attempts to move towards less restrictive forms of rules in deductive databases which allow indefinite and negative data to be handled.
Article
This paper introduces a new class of deductive databases (connected databases) for which SLDNF-resolution never flounders and always computes ground answers. The class of connected databases properly includes that of allowed databases. Moreover the definition of connected databases enables evaluable predicates to be included in a uniform way. An algorithm is described which, for each predicate defined in a normal database, derives a propositional formula (groundness formula) describing dependencies between the arguments of that predicate. Groundness formulae are used to determine whether a database is connected. They are also used to identify goals for which SLDNF-resolution will never flounder and will always compute ground answers on a connected database.
Article
The inadequacy of the Closed World Assumption in dealing with indefinite information has led to the Generalised Closed World Assumption and the Extended Generalised Closed World Assumption. However, these approaches have serious shortcomings. In this paper, the Indefinite Closed World Assumption is proposed to overcome the shortcomings. It provides a desirable and simple logical interpretation of a database containing indefinite information.
Article
A transformation technique is introduced which, given the Horn-clause definition of a set of predicates pi, synthesizes the definitions of new predicate p̃i which can be used, under a suitable refutation procedure, to compute the finite failure set of pi. This technique exhibits some computational advantages, such as the possibility of computing nonground negative goals still preserving the capability of producing answers. The refutation procedure, named SLDN refutation, is proved sound and complete with respect to the completed program.
Article
We prove the completeness of SLDNF resolution and negation as failure for stratified, normal programs and normal goals, under the conditions of strictness and allowedness. In particular, this result settles positively a conjecture of Apt, Blair, and Walker.
Article
This paper investigates the class of acyclic programs, programs with the usual hierarchical condition imposed on ground instances of atoms rather than predicate symbols. The acyclic condition seems to naturally capture much of the recursion that occurs in practice and many programs that arise in practical programming satisfy this condition. We prove completeness of SLDNF-resolution for the acyclic programs and discuss several other desirable properties exhibited by programs belonging to this class.
Article
We prove the correctness of a simplification method for checking static integrity constraints in stratified deductive databases.
Article
A method is presented for checking integrity constraints in a deductive database in which verification of the integrity constraints in the updated database is reduced to the process of constructing paths from update literals to the heads of integrity constraints. In constructing such paths, the method takes advantage of the assumption that the database satisfies the integrity constraints prior to the update. If such a path can be constructed, the integrity of the database has been violated. Correctness of the method has been proved for checking integrity constraints in stratified deductive databases. An explanation of how this method may be realised efficiently in Prolog is given.
Article
We consider a class of constraint logic programs including negation that can be executed bottom up without constraint solving, by replacing constraints with tests and assignments. We show how to optimize the bottom-up evaluation of queries for such programs using transformations based on analysis obtained using abstract interpretation. Although the paper concentrates on a class of efficiently executable programs, the optimizations we describe are correct and applicable for arbitrary constraint logic programs. Our approach generalizes earlier work on constraint propagation.
Article
An update of a consistent database can influence the integrity of the database. The available integrity checking methods in deductive databases are often criticized for their lack of efficiency. The main goal of this paper is to present a new integrity checking method which does not have some of the disadvantages of existing methods. The main advantage of the proposed method is that the integrity check is performed primarily at compile time. In order to demonstrate the improvement in efficiency of the proposed method it was compared both fundamentally and experimentally with existing methods.
Conference Paper
A query language for heterogeneous information resource management systems is presented and discussed. The language focusses on query formation, assertion expression and function construction required for a generalized information resource base. The SYNTHESIS approach concentrates on the development of query languages for object-oriented databases enriched by deductive capabilities. The SYNTHESIS project aims at supporting a generalized homogeneous representation and integrated use of different computerized forms of data, programs and other sources of knowledge that constitute an organization's information resource.
Conference Paper
Full-text available
A query evaluation process for a logic data base comprising a set of clauses is described. It is essentially a Horn clause theorem prover augmented with a special inference rule for dealing with negation. This is the negation as failure inference rule whereby ~P can be inferred if every possible proof of P fails. The chief advantage of the query evaluator described is the efficiency with which it can be implemented. Moreover, we show that the negation as failure rule only allows us to conclude negated facts that could be inferred from the axioms of the completed data base, a data base of relation definitions and equality schemas that we consider is implicitly given by the data base of clauses. We also show that when the clause data base and the queries satisfy certain constraints, which still leaves us with a data base more general than a conventional relational data base, the query evaluation process will find every answer that is a logical consequence of the completed data base.
Article
This paper introduces extended programs and extended goals for logic programming. A clause in an extended program can have an arbitrary first-order formula as its body. Similarly, an extended goal can have an arbitrary first-order formula as its body. The main results of the paper are the soundness of the negation as failure rule and SLDNF-resolution for extended programs and goals. We show how the increased expressibility of extended programs and goals can be easily implemented in any PROLOG system which has a sound implementation of the negation as failure rule. We also show how these ideas can be used to implement first-order logic as a query language in a deductive database system. An application to integrity constraints in deductive database systems is also given.
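A sketch of the idea (the subset example is mine, assuming the usual member/2 list predicate):

    % Extended clause whose body is an arbitrary first-order formula:
    %   subset(X, Y) <- (forall Z)(member(Z, X) -> member(Z, Y))
    % Implemented on a Prolog system with a sound negation-as-failure rule:
    subset(X, Y) :- \+ ( member(Z, X), \+ member(Z, Y) ).

    % ?- subset([a,b], [b,c,a]).     succeeds
    % ?- subset([a,d], [b,c,a]).     fails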
Conference Paper
The principal concern of this paper is the design of a retrieval system which combines current techniques for query evaluation on relational data bases with a deductive component in such a way that the interface between the two is both clean and natural. The result is an approach to deductive retrieval which appears to be feasible for data bases with very large extensions (i.e. specific facts) and comparatively small intensions (i.e. general facts). More specifically, a suitably designed theorem prover “sweeps through” the intensional data base, extracting all information relevant to a given query. This theorem prover never looks at the extensional data base. The end result of this sweep is a set of queries, each of which is extensionally evaluated. The union of answers returned from each of these queries is the set of answers to the original query. One consequence of this decomposition into an intensional and extensional processor is that the latter may be realized by a conventional data base management system. Another is that the intensional data base can be compiled using a theorem prover as a once-only compiler. This paper is essentially an impressionistic survey of some results which are rigorously treated elsewhere. As such, no proofs are given for the theorems stated, and the basic system design is illustrated by means of an extended example.
Conference Paper
Insofar as database theory can be said to owe a debt to logic, the currency on loan is model theoretic in the sense that a database can be viewed as a particular kind of first order interpretation, and query evaluation is a process of truth functional evaluation of first order formulae with respect to this interpretation. It is this model theoretic paradigm which leads, for example, to many valued propositional logics for databases with null values. In this chapter I argue that a proof theoretic view of databases is possible, and indeed much more fruitful. Specifically, I show how relational databases can be seen as special theories of first order logic, namely theories incorporating the following assumptions: 1. The domain closure assumption. The individuals occurring in the database are all and only the existing individuals.
Article
The purpose of this paper is to show that logic provides a convenient formalism for studying classical database problems. There are two main parts to the paper, devoted respectively to conventional databases and deductive databases. In the first part, we focus on query languages, integrity modeling and maintenance, query optimization, and data dependencies. The second part deals mainly with the representation and manipulation of deduced facts and incomplete information.
Article
The use of logic as a single tool for formalizing and implementing different aspects of database systems in a uniform manner is discussed. The discussion focuses on relational databases with deductive capabilities and very high-level querying and defining features. The computational interpretation of logic is briefly reviewed, and then several pros and cons concerning the description of data, programs, queries, and language parser in terms of logic programs are examined. The inadequacies are discussed, and it is shown that they can be overcome by the introduction of convenient extensions into logic programming. Finally, an experimental database query system with a natural language front end, implemented in PROLOG, is presented as an illustration of these concepts. A description of the latter from the user's point of view and a sample consultation session in Spanish are included.
Article
When an "updating" operation occurs on the current state of a data base, one has to ensure the new state obeys the integrity constraints. So, some of them have to be evaluated on this new state. The evaluation of an integrity constraint can be time consuming, but one can improve such an evaluation by taking advantage from the fact that the integrity constraint is satisfied in the current state. Indeed, it is then possible to derive a simplified form of this integrity constraint which is sufficient to evaluate in the new state in order to determine whether the initial constraint is still satisfied in this new state. The purpose of this paper is to present a simplification method yielding such simplified forms for integrity constraints. These simplified forms depend on the nature of the updating operation which is the cause of the state change. The operations of inserting, deleting, updating a tuple in a relation as well as transactions of such operations are considered. The proposed method is based on syntactical criteria and is validated through first order logic. Examples are treated and some aspects of the method application are discussed.
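A worked instance of the kind of simplification described (the relation names are invented here):

    Constraint W:   (forall X)(emp(X) -> (exists Y) works_in(X, Y))

    Inserting the tuple emp(tom) can only violate W at X = tom, so it suffices
    to evaluate the simplified, instantiated form

                    (exists Y) works_in(tom, Y)

    in the new state; W need not be re-evaluated for any other value of X.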
Article
Clark shows that a query evaluation procedure for data base deductions using a “negation as failure” inference rule can be regarded as making deductions from the “completed data base” (CDB) obtained by replacing the “if” clauses of the data base by “iff” clauses. We show they can also be regarded as deductions from the “closed world assumption” (CWA) of Reiter. Usually these deduction systems are incomplete and the CDB and CWA differ; one may be consistent and the other not, and when both are separately consistent they may be incompatible. However when the data base is Horn and definite they are compatible and when Clark's query evaluation procedure is complete for ground literals they are essentially equivalent. When the query evaluation procedure is not complete it may lack some basic closure properties. The conditions given by Clark for the completeness of his query evaluation procedure are not quite sufficient, and when made so are rather restrictive.
Book
This is a survey of the theory of logic programs with classical negation and negation as failure. The semantics of this class of programs is based on the notion of an answer set. The operation of Prolog is described in terms of proof search in the SLDNF calculus. In this survey, we view logic programming as a method for representing declarative knowledge. To put the subject in a proper perspective, we briefly discuss here two other approaches to knowledge representation and...
Article
This paper gives an overall account of a prototype natural language question answering system, called Chat-80. Chat-80 has been designed to be both efficient and easily adaptable to a variety of applications. The system is implemented entirely in Prolog, a programming language based on logic. With the aid of a logic-based grammar formalism called extraposition grammars, Chat-80 translates English questions into the Prolog subset of logic. The resulting logical expression is then transformed by a planning algorithm into efficient Prolog, cf. "query optimisation" in a relational database. Finally, the Prolog form is executed to yield the answer. On a domain of world geography, most questions within the English subset are answered in well under one second, including relatively complex queries
Lloyd, J. W., Foundations of Logic Programming, Symbolic Computation Series, Springer, 1984.
Reiter, R., Towards a Logical Reconstruction of Relational Database Theory, in: M. L. Brodie et al. (eds.), On Conceptual Modelling: Perspectives from Artificial Intelligence, Databases and Programming Languages, Springer, 1984, pp. 191-233.