Article

A New Method for Integrity Constraint Checking in Deductive Databases

Abstract

An update of a consistent database can influence the integrity of the database. The available integrity checking methods in deductive databases are often criticized for their lack of efficiency. The main goal of this paper is to present a new integrity checking method which does not have some of the disadvantages of existing methods. The main advantage of the proposed method is that the integrity check is performed primarily at compile time. In order to demonstrate the improvement in efficiency of the proposed method it was compared both fundamentally and experimentally with existing methods.
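The abstract does not reproduce the method itself. As a rough, hypothetical illustration of what "performing the integrity check primarily at compile time" can mean in practice, the following Python sketch derives, once per update pattern, a residual test that at update time only inspects the inserted value instead of re-evaluating the whole constraint. The relation names and the constraint are invented and are not taken from the paper.

```python
# Illustrative sketch only (not the method of the paper): a hand-rolled
# "compile-time" simplification of one integrity constraint for a known
# update pattern. Relation names and the constraint are hypothetical.

# Constraint in denial form: it is inconsistent for the same person to occur
# in both the 'employee' and the 'retired' relation.
#   ic :- employee(X), retired(X).

def compile_insert_test(constraint_relations, updated_relation):
    """Derive the residual test for the pattern 'insert a value into updated_relation'.

    Assuming the database satisfied the constraint before the update, only the
    conjuncts over the other relations still need to be evaluated, with the
    constraint variable bound to the inserted value.
    """
    rest = [r for r in constraint_relations if r != updated_relation]

    def residual_test(db, inserted_value):
        # Violation iff the inserted value also occurs in every remaining
        # relation of the denial (here: exactly one other relation).
        return all(inserted_value in db[r] for r in rest)

    return residual_test

# "Compile time": derive the test once for the update pattern.
test = compile_insert_test(["employee", "retired"], "employee")

# "Run time": apply it to a toy database instance and concrete insertions.
db = {"employee": {"ann", "bob"}, "retired": {"carl"}}
print(test(db, "dave"))   # False -> insertion keeps the database consistent
print(test(db, "carl"))   # True  -> insertion would violate the constraint
```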


... We first consider the tests presented in [153], which were initially proposed in [67]; the method of the so-called inconsistency indicators of [153] was shown to run more efficiently than previous methods, namely [147,122,69] and naive constraint checking (i.e., with no simplification). We show that, on their examples, we obtain a much better performance (all obtained simplifications are indeed ideal). ...
... The distribution of facts in the initial database considered in [153] is as follows: 177 father facts, 229 husband facts, 620 occup facts and 59 sponsor facts. We considered additions of tuples to the father and husband relations. ...
... A classification of redundant evaluations in existing methods is given. On the basis of this analysis an improvement of the method based on inconsistency rules, which I have developed at an earlier stage (see [26]), is proposed. Compared to existing methods it follows a completely different approach. ...
... This will be illustrated by an example. For more information about this way of integrity checking, see [26]. ...
... In this subsection two existing classes of methods for checking the consistency of databases are described. For a comparison of several of these methods ([1], [2], [4], [6], [8], [10], [12], [15], [17], [20], [24], [25], [26]) we refer to [5], [9], [26]. ...
... The symbolic simplifications shown here were obtained with an implementation of the simplification procedure [12]. We first consider the tests presented in [19] where the method of the so-called inconsistency indicators (II) was shown to run more efficiently than previous methods, namely [18, 11] and naive constraint checking (i.e., with no simplification). We show that, on their examples, we obtain better performance (indeed, ideal simplifications). ...
... The distribution of facts in the initial database considered in [19] is as follows: 177 father facts, 229 husband facts, 620 occup facts and 59 sponsor facts. We considered additions of tuples to the father and husband relations. ...
... In both tests, the performance worsens very quickly with the II method, whereas it basically remains constant with our approach. The last example of [19] refers to the following schema S2: ...
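The schema S2 referred to above is cut off in the excerpt and is not reconstructed here. Instead, as a loose, hypothetical reconstruction of the kind of comparison these excerpts report, the Python sketch below builds relations matching the quoted sizes (177 father, 229 husband, 620 occup, 59 sponsor facts), uses an invented denial constraint over father and sponsor, and contrasts how many facts a naive re-check touches with the single membership test of a simplified incremental check. It is not one of the constraints or measurements of [19]/[153].

```python
# Toy reconstruction only: relation sizes mirror the reported distribution,
# but the denial constraint below is invented.

father  = {(f"p{i}", f"c{i}") for i in range(177)}
husband = {(f"p{i}", f"w{i}") for i in range(229)}          # unused by the toy constraint
occup   = {(f"p{i % 400}", f"job{i}") for i in range(620)}  # unused by the toy constraint
sponsor = {(f"p{i + 300}", f"c{i}") for i in range(59)}

# Hypothetical denial: nobody may sponsor their own child.
#   ic :- father(X, Y), sponsor(X, Y).

def naive_check(father, sponsor):
    """Re-evaluate the whole denial after the update: scan every father fact."""
    scanned, violated = 0, False
    for pair in father:
        scanned += 1
        violated = violated or pair in sponsor
    return violated, scanned

def incremental_check(sponsor, inserted_pair):
    """Simplified test for 'insert into father': a single membership test."""
    return inserted_pair in sponsor, 1

new_fact = ("p300", "c0")                          # p300 already sponsors c0
print(naive_check(father | {new_fact}, sponsor))   # (True, 178)
print(incremental_check(sponsor, new_fact))        # (True, 1)
```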
Conference Paper
Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure allowing general parametric updates that, for given database classes, returns ideal simplifications and also applies to recursive databases. Finally, we point out the improvements with respect to previous methods based on an experimental evaluation.
... As pointed out in [BGL96], this "is a fundamental idea underlying many algorithms for the automatic maintenance of integrity constraints". Indeed, the idea of simplifying the test, by assuming that the database state is consistent before the update, is exploited by most of the existing techniques for integrity checking (e.g., [Qia88,Law95,BGL96,Sel95]). Having made this assumption, it is possible to significantly improve the efficiency of the test. ...
... [Nic82,HI85,Qia88]), deductive databases (e.g. [BDM93,Sel95]), or object-oriented databases [Law95]. As pointed out in [BGL96], a common idea underlying these methods is that efficiency improvement is obtained from the assumption that the constraint holds prior to the update. ...
Article
In this thesis we have developed a verification theory and a tool for the automated analysis of assertions about object-oriented database schemas. The approach is inspired by the work of [SS89] in which a theorem prover is used to automate the verification of invariants for transactions on a relational database. The work presented in this thesis deals with an object-oriented database and it discusses applications other than the analysis of database transaction safety. An important difference with the work of [SS89] is that we have used a general purpose higher-order logic (HOL) theorem prover, namely the Isabelle theorem prover [Pau94, Isa], rather than implementing our own specialized prover. Much previous research, including the work of [SS89], concerns fully automatic techniques (i.e., without the possibility of further interaction). These techniques are inherently limited in scope ([BGL96]). The presented approach combines automatic and interactive proof, where Isabelle’s automatic proof facilities are exploited to minimize the user’s effort to discharge proof obligations. The results demonstrate that today’s prover technology can indeed help in practical verification issues that arise in the design of databases.
... There is a long tradition of methods devoted to the problem of integrity constraint checking in the deductive database field [24][41][54] [55]. Some of the approaches do not focus on the problem of integrity checking itself but on the related problem of materialized view maintenance. ...
... Pre-test-based methods are, e.g., [48,50,78,54,55,25,37,66,24,64,65,59,60,22,61,58,23,26], including a few industrial attempts, e.g., [16,3]. Other methods provide simplifications that may require the availability of both the old and the new state, assuming that the database keeps track of the old state before committing an update, [82,83]. In [46], an adaptation of subsumption checking (called partial subsumption) is used to generate simplification as the "difference" (called residual ) between an integrity constraint and a clause representing an update. ...
Article
Integrity checking is a crucial issue, as databases change their instance all the time and therefore need to be checked continuously and rapidly. Decades of research have produced a plethora of methods for checking integrity constraints of a database in an incremental manner. However, not much has been said about when to check integrity. In this paper, we study the differences and similarities between checking integrity before an update (a.k.a. pre-test) or after (a.k.a. post-test) in order to assess the respective convenience and properties.
... A significant amount of work has been devoted to the area of integrity checking to define the order in which derived predicates should be evaluated to optimize the test of whether a transaction violates an integrity constraint [Ple93,Sel95]. In this sense, several graphs that define this order have been proposed. ...
Chapter
Full-text available
Two different approaches have been traditionally considered for dealing with the process of integrity constraints enforcement: integrity checking and integrity maintenance. However, while previous research in the first approach has mainly addressed efficiency issues, research in the second approach has been mainly concentrated in being able to generate all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In this sense, we propose a technique which improves efficiency of existing methods by defining the order in which maintenance of integrity constraints should be performed. Moreover, we use also this technique for being able to handle in an integrated way the integrity constraints enforcement approaches mentioned above.
... [16] compares the efficiency of some major strategies on a range of examples. Finally, recent contributions can be found in, among others, [15,23,45,84,46]. ...
Conference Paper
Full-text available
Integrity constraints are useful for the specification of deductive databases, as well as for inductive and abductive logic programs. Verifying integrity constraints upon updates is a major efficiency bottleneck and specialised methods have been developed to speed up this task. They can, however, still incur a considerable overhead. In this paper we propose a solution to this problem by using partial evaluation to pre-compile the integrity checking for certain update patterns. The idea is that a lot of the integrity checking can already be performed given an update pattern, without knowing the actual, concrete update. In order to achieve the pre-compilation, we write the specialised integrity checking as a meta-interpreter in logic programming. This meta-interpreter incorporates the knowledge that the integrity constraints were not violated prior to a given update. By partially evaluating this meta-interpreter for certain transaction patterns, using a partial evaluation technique presented in earlier work, we are able to automatically obtain very efficient specialised update procedures, executing faster than other integrity checking procedures proposed in the literature.
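The paper obtains its specialised update procedures by partially evaluating a meta-interpreter; that machinery is not reproduced here. The Python sketch below only imitates the intended end result under invented constraints and predicate names: for a given update pattern, a checking function is produced ahead of time containing just the residual conditions, so that at update time only those are evaluated against the database.

```python
# Rough imitation in plain Python of what a pre-compiled update procedure ends
# up doing; the meta-interpreter / partial-evaluation machinery of the paper is
# not reproduced, and all predicate names and constraints are invented.

# Denial constraints: a constraint is violated if all its literals are
# simultaneously satisfiable. Literals are (predicate, tuple-of-variables).
CONSTRAINTS = [
    [("father", ("X", "Y")), ("sponsor", ("X", "Y"))],    # nobody sponsors own child
    [("husband", ("X", "Y")), ("husband", ("X", "Z")),    # at most one wife
     ("neq", ("Y", "Z"))],
]

def _satisfiable(literals, binding, db):
    """Tiny backtracking evaluator for a conjunction of literals."""
    if not literals:
        return True
    (pred, vars_), rest = literals[0], literals[1:]
    if pred == "neq":                       # built-in disequality (assumes both args bound)
        return binding[vars_[0]] != binding[vars_[1]] and _satisfiable(rest, binding, db)
    for fact in db.get(pred, set()):
        new = dict(binding)
        if all(new.setdefault(v, c) == c for v, c in zip(vars_, fact)):
            if _satisfiable(rest, new, db):
                return True
    return False

def specialise(constraints, inserted_pred):
    """'Compile time': keep only constraints mentioning the updated predicate
    and remove the literal that the inserted fact will satisfy."""
    residuals = []
    for c in constraints:
        for i, (pred, vars_) in enumerate(c):
            if pred == inserted_pred:
                residuals.append((vars_, c[:i] + c[i + 1:]))
    def check(db, fact):                    # 'run time': True iff the insertion
        for vars_, rest in residuals:       # would violate some constraint
            binding = dict(zip(vars_, fact))
            if _satisfiable(rest, binding, db):
                return True
        return False
    return check

db = {"father": {("p1", "c1")}, "husband": {("p2", "w2")}, "sponsor": {("p9", "c1")}}
check_husband_insert = specialise(CONSTRAINTS, "husband")
print(check_husband_insert(db, ("p2", "w3")))   # True: p2 would have two wives
print(check_husband_insert(db, ("p3", "w9")))   # False: consistent insertion
```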
... A significant amount of work has been devoted to the area of integrity checking to define the order in which derived predicates should be evaluated to optimize the test of whether a transaction violates an integrity constraint [Ple93,Sel95]. In this sense, several graphs that define this order have been proposed. ...
Article
Full-text available
Two different approaches have been traditionally considered for dealing with the process of integrity constraints enforcement: integrity checking and integrity maintenance. However, while previous research in the first approach has mainly addressed efficiency issues, research in the second approach has been mainly concentrated in being able to generate all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In this sense, we propose a technique which improves efficiency of existing methods by defining the order in which maintenance of integrity constraints should be performed. Moreover, we use also this technique for being able to handle in an integrated way the integrity constraints enforcement approaches mentioned above. KEYWORDS: deductive database, updating, integrity checking, integrity maintenance
Article
Deductive databases intellectualize relational databases by providing complex inference ability and they are competitive with current commercial relational databases. The deductive system presented in this paper is a unified logic/database system where programs may be expressed declaratively, in a form close to first-order logic, and at the same time efficiently access very large data bases. The implementation is based on a logic-based language compilation. A Prolog-like program is translated into Warren Abstract Machine (WAM) instructions by a compiler, and the resulting WAM code is executed by an emulator. The inference engine and the relational database are tightly coupled, i.e. external records are retrieved from the underlying database and unified with Prolog-like terms as and when required. The tight coupling of a logic-based language and a relational DBMS achieved a satisfactory performance.
Article
Over the years, there has been a confluence of concepts, tools and techniques from two diverse areas: artificial intelligence (AI) and database (DB) systems. There are several ways to integrate expert systems (ES) and database systems. This paper surveys the related literature and classifies the integrated systems into four classes: enhanced DB, enhanced ES, coupling of existing ES and DB, and expert database system. A new loose coupling approach (Simple Coupler) based on predefined SQLs is then proposed. Its system architecture and system operations are described. The Simple Coupler is compared with the DIFEAD (Dictionary Interface for Expert Systems and Databases) approach and the commercial ES shell approach on the criteria of the independence of the DB, ES, the complexities and future expansibility. This approach has great practical values because of its simplicity. Finally, this paper reports on two prototypes coupling existing systems: accounting DB & financial ES and medical history database & medical diagnosis system, respectively, in the PC window environment.
Article
Known methods for checking integrity constraints in deductive databases do not eliminate all aspects of redundancy in integrity checking. By making the redundancy aspects of integrity constraint checking explicit, independently from any chosen method, it is possible to develop a new method that is optimal with respect to the classified redundancy aspects. We distinguish three types of redundancy and propose an integrity checking method based on revised inconsistency rules.
Article
Integrity constraints are useful for the specification of deductive databases, as well as for inductive and abductive logic programs. Verifying integrity constraints upon updates is a major efficiency bottleneck and specialised methods have been developed to speed up this task. They can, however, still incur a considerable overhead. In this paper we propose a solution to this problem by using partial evaluation to pre-compile the integrity checking for certain update patterns. The idea is that a lot of the integrity checking can already be performed given an update pattern, without knowing the actual, concrete update. In order to achieve the pre-compilation, we write the specialised integrity checking as a meta-interpreter in logic programming. This meta-interpreter incorporates the knowledge that the integrity constraints were not violated prior to a given update. By partially evaluating this meta-interpreter for certain transaction patterns, using a partial evaluation technique presented in earlier work, we are able to automatically obtain very efficient specialised update procedures, executing faster than other integrity checking procedures proposed in the literature.
Article
We deal with view updating and integrity constraint maintenance. View updating is concerned with translating a request to update derived facts into updates of the underlying base facts. Integrity constraint maintenance is aimed at performing the necessary repairs to guarantee that a set of base fact updates does not violate database consistency. We define a method that deals with these problems in an integrated way and we show that it is sound and complete. Soundness ensures that our method obtains only correct solutions while completeness guarantees that we obtain all valid minimal solutions. We also propose a set of techniques to provide an efficient implementation of our method.
Article
Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact, and the present paper is an attempt to fill this gap. On the theoretical side, a general characterization is introduced of the problem of simplification of integrity constraints and a natural definition is given of what it means for a simplification procedure to be ideal. We prove that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based on transformation operators that apply to integrity constraints written in a rich DATALOG-like language with negation. The resulting procedure produces, at design-time, simplified constraints for parametric transaction patterns, which can then be instantiated and checked for consistency at run-time. These tests take place before the execution of the update, so that only consistency-preserving updates are eventually given to the database. The extension to more expressive languages and the application of the framework to other contexts, such as data integration and concurrent database systems, are also discussed. Our experiments show that the simplifications obtained with our method may give rise to much better performance than with previous methods and that further improvements are achieved by checking consistency before executing the update.
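As a minimal sketch of the pre-test discipline described above (with an invented schema and constraint, not the transformation operators of the paper): a simplified test produced at design time for a parametric update pattern is instantiated with the actual parameters and evaluated before the update, so that only consistency-preserving updates reach the database.

```python
# Minimal sketch of pre-test checking: the simplified test is evaluated on the
# *current* state and the update is executed only if it passes. The schema and
# constraint are invented for illustration.

class Database:
    def __init__(self):
        self.occup = set()          # (person, job) facts

    def consistent_insert_occup(self, person, job, pretest):
        """Execute the insertion only if the instantiated pre-test passes."""
        if pretest(self, person, job):
            self.occup.add((person, job))
            return True
        return False                # update rejected, state untouched

# Design time: simplified test for the pattern "insert occup(P, J)" under the
# (hypothetical) constraint that a person holds at most two occupations.
def pretest_insert_occup(db, person, job):
    held = {j for (p, j) in db.occup if p == person and j != job}
    return len(held) < 2

db = Database()
for j in ("baker", "teacher"):
    db.consistent_insert_occup("ann", j, pretest_insert_occup)
print(db.consistent_insert_occup("ann", "pilot", pretest_insert_occup))  # False
print(sorted(db.occup))   # constraint still holds: ann keeps only two jobs
```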
Article
Full-text available
We propose an extension of the SLDNF proof procedure for checking integrity constraints in deductive databases. To achieve the effect of the simplification methods investigated by Nicolas [1982], Lloyd, Sonenberg, and Topor [1986], and Decker [1986], we use clauses corresponding to the updates as top clauses for the search space. This builds in the assumption that the database satisfied its integrity constraints prior to the transaction, and that, therefore, any violation of the constraints in the updated database must involve at least one of the updates in the transaction. Different simplification methods can be simulated by different strategies for literal selection and search.
Article
Full-text available
Classical treatment of consistency violations is to back out a database operation or transaction. In applications with large numbers of fairly complex consistency constraints this clearly is an unsatisfactory solution. Instead, if a violation is detected the user should be given a diagnosis of the constraints that failed, a line of reasoning on the cause that could have led to the violation, and suggestions for a repair. The problem is particularly complicated in a deductive database system where failures may be due to an inferred condition rather than simply a stored fact, but the repair can only be applied to the underlying facts. The paper presents a system which provides automated support in such situations. It concentrates on the concepts and ideas underlying the approach and an appropriate system architecture and user guidance, and sketches some of the heuristics used to gain in performance.
Conference Paper
Full-text available
Constraint validation has been difficult to implement efficiently. The major reason for this difficulty lies in the state-dependent nature of integrity constraints and the requirement of both high-level specification and efficient runtime enforcement. In this paper, we propose a constraint reformulation approach to efficient constraint validation. We also demonstrate how this knowledge-based constraint reformulation can be naturally accomplished in the general framework of problem reformulation with the technique of antecedent derivation. We formalize the reformulation of an integrity constraint as a tree-search process where the search space is the set of all semantically equivalent alternatives of the original constraint. We also develop control strategies and meta-level rules for carrying out the search efficiently. The major contribution of this work is a new promising approach to efficient constraint validation and a general framework to accomplish it.
Conference Paper
Full-text available
In this paper we describe the considerations that led us to the design of LDL and provide an overview of the features of this language. LDL is designed to combine the flexibility of logic programming with the high performance of the relational database technology. The design offers an improved mode of control over the existing logic programming languages together with an enriched repertoire of data objects and constructs, including sets, updates and negation. These advantages are realized by means of a compilation technology.
Conference Paper
Full-text available
We describe the theory and implementation of a general theorem-proving technique for checking integrity of deductive databases recently proposed by Sadri and Kowalski. The method uses an extension of the SLDNF proof procedure and achieves the effect of the simplification algorithms of Nicolas, Lloyd, Topor et al, and Decker by reasoning forwards from the update and thus focusing on the relevant parts of the database and the relevant constraints.
Conference Paper
Full-text available
In order to faithfully describe real-life applications, knowledge bases have to manage general integrity constraints. In this article, we analyse methods for an efficient verification of integrity constraints in updated knowledge bases. These methods rely on the satisfaction of the integrity constraints before the update for simplifying their evaluation in the updated knowledge base. During the last few years, an increasing amount of publications has been devoted to various aspects of this problem. Since they use distinct formalisms and different terminologies, they are difficult to compare. Moreover, it is often complex to recognize commonalities and to find out whether techniques described in different articles are in principle different. A first part of this report aims at giving a comprehensive state-of-the-art in integrity verification. It describes integrity constraint verification techniques in a common formalism. A second part of this report is devoted to comparing several proposals. The differences and similarities between various methods are investigated.
Article
Full-text available
Integrity maintenance methods have been defined for preventing updates from violating integrity constraints. Depending on the update, the full check for constraint satisfaction is reduced to checking certain instances of some relevant constraints only. In the first part of the paper new ideas are proposed for enhancing the efficiency of such a method. The second part is devoted to checking constraint satisfiability, i.e., whether a database exists in which all constraints are simultaneously satisfied. A satisfiability checking method is presented that employs integrity maintenance techniques. Simple Prolog programs are given that serve both as specifications as well as a basis for an efficient implementation.
Article
Datalog is extended to incorporate single-valued “data functions”, which correspond to attributes in semantic models, and which may be base (user-specified) or derived (computed). Both conventional and stratified datalog are considered. Under the extension, a datalog program may not be consistent, because a derived function symbol may evaluate to something which is not a function. Consistency is shown to be undecidable, and is decidable in a number of restricted cases. A syntactic restriction, panwise consistency, is shown to guarantee consistency. The framework developed here can also be used to incorporate single-valued data functions into the Complex Object Language (COL), which supports deductive capabilities, complex database objects, and set-valued data functions. There is a natural correspondence between the extended datalog introduced here, and the usual datalog with functional dependencies. For families Φ and Γ of dependencies and a family of datalog programs Λ, the Φ-Γ implication problem for Λ asks, given sets F ⊆ Φ and G ⊆ Γ and a program P in Λ, whether for all inputs I, I ⊨ F implies P(I) ⊨ G. The FD-FD implication problem is undecidable for datalog, and the TGD-EGD implication problem is decidable for stratified datalog. Also, the ∅-MVD problem is undecidable (and hence also the MVD-preservation problem).
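A small, invented example of the consistency notion discussed in this abstract: a derived data function is consistent only if it never yields two distinct values for the same argument, which the following Python sketch checks for one derived relation.

```python
# Illustration only: a derived "data function" must be single-valued in its
# first argument to be consistent. The example program is invented.

# Base facts; bonus is intended as a single-valued function of an employee.
works_in = {("ann", "sales"), ("ann", "support"), ("bob", "hr")}
dept_bonus = {("sales", 100), ("support", 150), ("hr", 80)}

# Derived function: bonus(E) = B  :-  works_in(E, D), dept_bonus(D, B).
bonus = {(e, b) for (e, d1) in works_in for (d2, b) in dept_bonus if d1 == d2}

def functional(pairs):
    """Check that the derived relation is single-valued in its first argument."""
    seen = {}
    for key, value in pairs:
        if seen.setdefault(key, value) != value:
            return False, key
    return True, None

print(functional(bonus))   # (False, 'ann'): two departments give ann two bonuses
```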
Conference Paper
For a deductive database we present an algorithm to efficiently compute the changes in virtual predicates induced by updates. We first define different classes of potential changes introduced by updates. These definitions are expressed as rules, and merged into the rules defining the database views. This enables the system to determine changes induced by an update with minimum redundancy. Moreover, the evaluation of the merged rules mirrors the evaluation of the rules defining the views: as a result no new evaluation machinery is needed, and any optimizations applied to the rules defining the views are inherited by the merged rules. The method is introduced by giving a detailed analysis of the difference between two states. We describe a mechanism compiling the original rules into a format amenable to a standard query evaluator. The algorithm is applied to the integrity checking problem. The integrity constraints are boolean views, defined by rules, and their validity after an update is checked by computing the induced changes on predicates defined by these rules.
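A very small sketch of the delta idea described above, with invented rules: for an insertion into a base relation, only the derived tuples that become newly true are computed, and a violation is reported if this change reaches the (boolean) constraint view. This is an illustration, not the compilation mechanism of the paper.

```python
# View:        grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Constraint:  ic :- grandparent(X, X).      (nobody is their own grandparent)

parent = {("ann", "bob"), ("bob", "carl")}

def delta_grandparent(parent, inserted):
    """Derived grandparent tuples that become true because of one inserted parent fact."""
    a, b = inserted
    new = {(a, z) for (y, z) in parent if y == b}     # inserted fact used as parent(X, Y)
    new |= {(w, b) for (w, x) in parent if x == a}    # inserted fact used as parent(Y, Z)
    if a == b:                                        # fact joined with itself
        new.add((a, a))
    return new

def violates_ic(delta):
    return any(x == z for (x, z) in delta)

ins = ("carl", "ann")
print(delta_grandparent(parent, ins))                # {('carl', 'bob'), ('bob', 'ann')}
print(violates_ic(delta_grandparent(parent, ins)))   # False

ins2 = ("carl", "carl")                              # inserting parent(carl, carl)
print(violates_ic(delta_grandparent(parent, ins2)))  # True: grandparent(carl, carl)
```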
Article
We consider the problem of updating a knowledge base, where a knowledge base is realised as a (logic) program. In a previous paper, we presented procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also proved various properties of the procedures including their correctness. Here we present mutually recursive versions of the update procedures and prove their correctness and other properties. We then generalise the procedures so that we can update an (arbitrary) program with an (arbitrary) formula. The correctness of the update procedures for programs is also proved.
Chapter
In this paper a theoretical framework for efficiently checking the consistency of deductive databases is provided and proven to be correct. Our method is based on focussing on the relevant parts of the database by reasoning forwards from the updates of a transaction, and using this knowledge about real or just possible implicit updates for simplifying the consistency constraints in question. Opposite to the algorithms by Kowalski/Sadri and Lloyd/Topor, we are neither committed to determine the exact set of implicit updates nor to determine a fairly large superset of it by only considering the head literals of deductive rule clauses. Rather, our algorithm unifies these two approaches by allowing to choose any of the above or even intermediate strategies for any step of reasoning forwards. This flexibility renders possible the integration of statistical data and knowledge about access paths into the checking process. Second, deductive rules are organized into a graph to avoid searching for applicable rules in the proof procedure. This graph resembles a connection graph, however, a new method of interpreting it avoids the introduction of new clauses and links.
Article
We consider the problem of updating a knowledge base, where a knowledge base is realised as a (logic) program. In a previous paper A. Guessoum and J. W. Lloyd [New Generation Comput. 8(1), 71-89 (1990; Zbl 0705.68096)] we presented procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particulary on the case when negative literals appear in the bodies of program clauses. We also proved various properties of the procedures including their correctness. Here we present mutually recursive versions of the update procedures and prove their correctness and other properties. We then generalise the procedures so that we can update an (arbitrary) program with an (arbitrary) formula. The correctness of the update procedures for programs is also proved.
Article
In a deductive (or relational) database, integrity constraints are data dependencies which database states are compelled to obey. Different ways of expressing integrity constraints were proposed in several papers, e.g. tuple calculus, closed first-order formula, clause, etc. In this paper, we propose a special form of closed first-order formula, called IC-formula, which uses the Prolog nested not-predicate to express integrity constraints. The IC-formulas are more expressive than other existing ways of expressing integrity constraints. The soundness and completeness of the method for verifying IC-formulas in a deductive database is proved. The full checking of the IC-formulas of a deductive database can be implemented easily by Prolog systems. Methods for doing incremental integrity constraint checking for the operations of inserting, deleting, and modifying a tuple in a relational or deductive database are presented. The concept of a key of a relation is also used to further simplify incremental integrity constraint checking. These incremental integrity constraint checking methods can be implemented easily by Prolog systems.
Article
We consider logic databases as logic programs and suggest how to deal with the problem of integrity constraint checking. Two methods for integrity constraint handling are presented. The first one is based on a metalevel consistency proof and is particularly suitable for an existing database which has to be checked for some integrity constraints. The second method is based on a transformation of the logic program which represents the database into a logic program which satisfies the given integrity constraints. This method is specially suggested for databases that have to be built specifying, separately, which are the deductive rules and the facts and which are the integrity constraints on a specific relation. Different tools providing for the two mechanisms are proposed for a flexible logic database management system.
Article
This paper provides a theoretical basis for deductive database systems. A deductive database consists of closed typed first order logic formulas of the form A ← W, where A is an atom and W is a typed first order formula. A typed first order formula can be used as a query, and a closed typed first order formula can be used as an integrity constraint. Functions are allowed to appear in formulas. Such a deductive database system can be implemented using a PROLOG system. The main results are the soundness of the query evaluation process, the soundness of the implementation of integrity constraints, and a simplification theorem for implementing integrity constraints. A short list of open problems is also presented.
Article
This paper is the third in a series providing a theoretical basis for deductive database systems. A deductive database consists of closed typed first order logic formulas of the form A ← W, where A is an atom and W is a typed first order formula. A typed first order formula can be used as a query, and a closed typed first order formula can be used as an integrity constraint. Functions are allowed to appear in formulas. Such a deductive database system can be implemented using a PROLOG system. The main results of this paper are concerned with the nonfloundering and completeness of query evaluation. We also introduce an alternative query evaluation process and show that corresponding versions of the earlier results can be obtained. Finally, we summarize the results of the three papers and discuss the attractive properties of the deductive database system approach based on first order logic.
Article
We prove the correctness of a simplification method for checking static integrity constraints in stratified deductive databases.
Article
A method is presented for checking integrity constraints in a deductive database in which verification of the integrity constraints in the updated database is reduced to the process of constructing paths from update literals to the heads of integrity constraints. In constructing such paths, the method takes advantage of the assumption that the database satisfies the integrity constraints prior to the update. If such a path can be constructed, the integrity of the database has been violated. Correctness of the method has been proved for checking integrity constraints in stratified deductive databases. An explanation of how this method may be realised efficiently in Prolog is given.
Conference Paper
Rule-goal graphs are the central data structures used in the NAIL! system, a knowledge-base system being developed at Stanford University. They are constructed while testing the applicability of capture rules, and traversed while generating ICODE to evaluate queries. Generating rule-goal graphs may be reduced to the problem of ordering subgoals. This paper gives an algorithm for generating rule-goal graphs efficiently, in time polynomial in the size of the rules if the arity of recursive predicates is bounded. The graphs generated may be suboptimal for some purposes, but the algorithm will always find a rule-goal graph if one exists. The algorithm has been implemented in Cprolog, and is currently being used to generate rule-goal graphs for the NAIL! system.
Conference Paper
The problem of pushing projections in recursive rules has received little attention. The objective of this paper is to motivate this problem and present some (partial) solutions. We consider programs with function-free rules, also known as Datalog programs. After formally defining existential subqueries, we present a syntactic criterion for detecting them and then consider optimization in three areas: 1) we identify the existential subqueries and make them explicit by rewriting the rules; this, in effect, automatically captures some aspects of Prolog's cut operator that are appropriate to the bottom-up model of computation; 2) we eliminate argument positions in recursive rules by “pushing projections”; 3) we observe that “pushing projections” in rules also has the effect of making some rules (even recursive rules) redundant and try to (identify and) discard them.
Conference Paper
One of the important means of specifying the semantics about data is via integrity constraints. Experience has shown that the conventional database approach to integrity constraint enforcement is not successful. In this paper, we demonstrate the feasibility and power of a knowledge-based approach to the efficiency problem of constraint validation. We propose a transformational mechanism which exploits knowledge about the application domain and database organization to reformulate integrity constraints into semantically equivalent ones from which efficient code can be generated. This verification process can be performed using theorem proving techniques. However, integrity constraints are intrinsically state-dependent and have to be validated against the database extension whenever a state transition happens. This leads to a third validation function in which the key challenge is the efficiency of validation. Finally, the constraint manager has to make a decision on what to do when an invalid request for changing the database state is encountered. This may include either rejecting the request or making some further state changes to get another valid state.
Conference Paper
This paper is concerned with the use of general laws in data bases. It consists of two main parts respectively devoted to state laws and to transition laws. Some of the state laws are used as derivation rules while others are used as integrity rules. Integrity rules as well as derivation rules can be treated in many ways which are presented. For each such method, the actions to be undertaken when querying, adding, suppressing and updating information in the data base are studied. For transition laws, a formalism is proposed which enables them to be handled in the same way as integrity rules stemming from state laws. The self-consistency of transition laws is also discussed.
Conference Paper
Datalog is extended to incorporate single-valued “data functions”, which correspond to attributes in semantic models, and which may be base (user-specified) or derived (computed). Both conventional and stratified datalog are considered. Under the extension, a datalog program may not be consistent, because a derived function symbol may evaluate to something which is not a function. Consistency is shown to be undecidable, and is decidable in a number of restricted cases. A syntactic restriction, panwise consistency, is shown to guarantee consistency. The framework developed here can also be used to incorporate single-valued data functions into the Complex Object Language (COL), which supports deductive capabilities, complex database objects, and set-valued data functions. There is a natural correspondence between the extended datalog introduced here, and the usual datalog with functional dependencies. For families Φ and Γ of dependencies and a family of datalog programs Λ, the Φ-Γ implication problem for Λ asks, given sets F ⊆ Φ and G ⊆ Γ and a program P in Λ, whether for all inputs I, I ⊨ F implies P(I) ⊨ G. The FD-FD implication problem is undecidable for datalog, and the TGD-EGD implication problem is decidable for stratified datalog. Also, the ∅-MVD problem is undecidable (and hence also the MVD-preservation problem).
Conference Paper
We describe the design decisions made for the NAIL! (not another implementation of logic!) system, an advanced form of DBMS where queries may involve a large collection of Prolog-like rules used for query interpretation. A discussion of the ways NAIL! semantics differs from Prolog is followed by an exposition of the principal ideas in the system design. These points include the partition of predicates into strongly connected components to represent the structure of recursions and the “capture rule” organization for selecting query processing strategies. Other ideas include the way distinctions between bound and free arguments are capitalized upon and the persistence of previously discovered facts about the way to handle certain queries. We also survey the recent work on the processing of recursively defined queries conducted by the NAIL! group and others with similar computational models.
Article
The purpose of this paper is to show that logic provides a convenient formalism for studying classical database problems. There are two main parts to the paper, devoted respectively to conventional databases and deductive databases. In the first part, we focus on query languages, integrity modeling and maintenance, query optimization, and data dependencies. The second part deals mainly with the representation and manipulation of deduced facts and incomplete information.
Article
The problems encountered in developing expert database systems (EDSs) are described. It is suggested that what is needed is an EDS shell that is general-purpose and thus alleviates much of the ad hoc tedium associated with constructing present EDSs, and overcomes performance and semantic problems stemming from the present mismatch in the front and back ends of these systems. A language called LDL (logic data language) is described that was developed for this purpose. The discussion is restricted to LDL features with an immediate bearing on the main theme, i.e. tightly coupled systems
Article
Given a program P and two sets IC and IC′ of integrity constraints, IC uniformly implies IC′ in P if whenever an (input) database I defined on predicates of P satisfies IC then the (output) database computed by P from I satisfies IC′. P preserves IC if IC uniformly implies IC in P. We consider only definite DATALOG programs. We show that testing preservation of downward-closed integrity constraints can be simplified to the case of single-rule programs that are computed in a "nonrecursive manner". Specially, we present an efficient test algorithm when constraints are equality-generating dependencies. We also show that the uniform implication problem can be similarly simplified if IC is preserved and all given constraints are downward-closed. Our results generalize dependency-preserving database schemes by considering mappings defined by arbitrary definite DATALOG programs and more general integrity constraints. Key Words. databases, deductive databases, DATALOG programs, in...
Article
The enforcement of semantic integrity constraints in data and knowledge bases constitutes a major performance bottleneck. Integrity constraint simplification methods aim at reducing the complexity of formula evaluation at run-time. This paper proposes such a simplification method for large and semantically rich knowledge bases. Structural, temporal and assertional knowledge in the form of deductive rules and integrity constraints, is represented in Telos, a hybrid language for knowledge representation. A compilation method performs a number of syntactic, semantic and temporal transformations to integrity constraints and deductive rules, and organizes simplified forms in a dependence graph that allows for efficient computation of implicit updates. Precomputation of potential implicit updates at compile time is possible by computing the dependence graph transitive closure. To account for dynamic changes to the dependence graph by updates of constraints and rules, we propose efficient alg...
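The following Python sketch illustrates only the dependence-graph idea mentioned in this abstract, with invented predicate names: edges run from body predicates to the rules and constraints defined over them, and the transitive closure of the graph tells which constraints can possibly be affected by an update, so the others need not be re-checked.

```python
# Sketch of the dependence-graph idea: not the Telos compilation method of the
# paper, just the reachability computation. Predicate names are invented.

from collections import defaultdict

# head <- body predicates (deductive rules and, at the top, constraint views)
rules = {
    "grandparent": {"parent"},
    "ancestor": {"parent", "grandparent"},
    "ic_no_self_ancestor": {"ancestor"},      # constraint view
    "ic_salary_cap": {"salary"},              # constraint view
}

# Edge p -> h whenever predicate p occurs in the body of a rule for h.
depends_on = defaultdict(set)
for head, body in rules.items():
    for pred in body:
        depends_on[pred].add(head)

def affected(pred):
    """Transitive closure: every predicate reachable from 'pred'."""
    seen, stack = set(), [pred]
    while stack:
        for nxt in depends_on[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# An update of 'parent' can only disturb constraints reachable from it:
print(sorted(c for c in affected("parent") if c.startswith("ic_")))
# ['ic_no_self_ancestor']  -- 'ic_salary_cap' need not be re-checked
```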
On the efficient computation of the difference between consecutive database states. Deductive and Object-Oriented Databases
  • V. Küchenhoff
V. Küchenhoff, On the efficient computation of the difference between consecutive database states, in: C. Delobel et al., eds., Deductive and Object-Oriented Databases; Proc. Second Int. Conf. DOOD'91, vol. 566 of Lecture Notes in Computer Science (Munich, Germany, Dec. 1991) 478-502.
Large Deductive Databases with constraints
  • Wüthrich
On updates and inconsistency repairing in knowledge bases
  • Wüthrich