Article (PDF available)

The generalized completeness of Horn predicate logic as a programming language

... The import of that result is that in general, it is impossible to effectively query the unique stable model of such programs. Marek et al. [33] constructed finite predicate logic programs whose stable models could code up the paths through any infinitely branching recursive tree, so that the problem of deciding whether a finite predicate logic program has a stable model is $\Sigma^1_1$-complete. For such reasons, researchers have focused on finite predicate logic programs without function symbols. ...
... $a_n, \neg b_1, \ldots, \neg b_m$ (1) where $c, a_1, \ldots, a_n, b_1, \ldots$ ...
... Let $\omega = \{0, 1, 2, \ldots\}$ denote the set of natural numbers and let $[x, y]$ denote the standard pairing function $\frac{1}{2}(x^2 + 2xy + y^2 + 3x + y)$ and, for $n \geq 2$, we let $[x_0, \ldots$ ...
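As a quick aside (not part of the excerpt above), the quoted pairing function $[x, y] = \frac{1}{2}(x^2 + 2xy + y^2 + 3x + y)$ is a Cantor-style bijection between $\omega \times \omega$ and $\omega$; the sketch below, with invented helper names, checks this on a small grid.

```python
# A minimal sketch (not from the cited text): the pairing function quoted above,
# [x, y] = (x^2 + 2xy + y^2 + 3x + y) / 2, a Cantor-style bijection omega x omega -> omega.

def pair(x: int, y: int) -> int:
    """Return [x, y]; equals (x + y)(x + y + 1)/2 + x, hence always an integer."""
    return (x * x + 2 * x * y + y * y + 3 * x + y) // 2

def unpair(z: int):
    """Invert pair() by locating the diagonal d = x + y that contains z."""
    d = 0
    while (d + 1) * (d + 2) // 2 <= z:
        d += 1
    x = z - d * (d + 1) // 2
    return x, d - x

if __name__ == "__main__":
    # Injectivity and invertibility on a small grid.
    seen = set()
    for x in range(25):
        for y in range(25):
            z = pair(x, y)
            assert z not in seen and unpair(z) == (x, y)
            seen.add(z)
    print(pair(0, 0), pair(0, 1), pair(1, 0), pair(2, 3))  # 0 1 2 17
```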
Article
We study the recognition problem in the metaprogramming of finite normal predicate logic programs. That is, let $\mathcal{L}$ be a computable first-order predicate language with infinitely many constant symbols and infinitely many $n$-ary predicate symbols and $n$-ary function symbols for all $n \geq 1$. Then we can effectively list all the finite normal predicate logic programs $Q_0, Q_1, \ldots$ over $\mathcal{L}$. Given some property $\mathcal{P}$ of finite normal predicate logic programs over $\mathcal{L}$, we define the index set $I_{\mathcal{P}}$ to be the set of indices $e$ such that $Q_e$ has property $\mathcal{P}$. We classify the complexity of the index set $I_{\mathcal{P}}$ within the arithmetic hierarchy for various natural properties of finite predicate logic programs. For example, we determine the complexity of the index sets relative to all finite predicate logic programs and relative to certain special classes of finite predicate logic programs of properties such as (i) having no stable models, (ii) having no recursive stable models, (iii) having at least one stable model, (iv) having at least one recursive stable model, (v) having exactly $c$ stable models for any given positive integer $c$, (vi) having exactly $c$ recursive stable models for any given positive integer $c$, (vii) having only finitely many stable models, (viii) having only finitely many recursive stable models, (ix) having infinitely many stable models and (x) having infinitely many recursive stable models.
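To make properties such as (i), (iii) and (v) concrete, a standard propositional illustration (added here for orientation; it is not taken from the paper) is
$$P_1 = \{\, p \leftarrow \neg p \,\}, \qquad P_2 = \{\, p \leftarrow \neg q \,\}, \qquad P_3 = \{\, p \leftarrow \neg q,\; q \leftarrow \neg p \,\}.$$
$P_1$ has no stable model, $P_2$ has exactly one stable model $\{p\}$, and $P_3$ has exactly two, $\{p\}$ and $\{q\}$; the index sets studied in the paper measure how hard it is to recognize such behaviour for arbitrary finite normal programs over $\mathcal{L}$.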
Article
In this paper, we discuss the copy complexity of Horn formulas with respect to unit resolution. A Horn formula is a Boolean formula in conjunctive normal form (CNF) with at most one positive literal per clause. Horn formulas find applications in a number of domains, such as program verification (abstract interpretation) and logic programming (answer set programming). Quantified Horn clauses are used extensively in temporal verification of universal properties. Resolution is one of the oldest proof systems (refutation systems) for the Boolean satisfiability problem (SAT) when the input is presented in CNF. It is both sound and complete, although inefficient when compared to other, stronger proof systems for Boolean formulas. Despite its inefficiency, the simple nature of resolution makes it an integral part of several theorem provers. Unit resolution is a restricted form of resolution in which each resolution step needs to use a clause with only one literal (a unit clause). While not complete for general CNF formulas, unit resolution is complete for Horn formulas. Read-once resolution is a form of resolution in which each clause (input or derived) may be used in at most one resolution step. As with unit resolution, read-once resolution is incomplete in general and complete for Horn clauses. This paper focuses on a combination of unit resolution and read-once resolution called unit read-once resolution. Unit read-once resolution is incomplete for Horn clauses. In this paper, we study the copy complexity problem in Horn formulas with respect to unit read-once resolution. Briefly, the copy complexity of a formula with respect to unit read-once resolution is the smallest number k such that replicating each clause k times guarantees the existence of a unit read-once resolution refutation (UROR). This paper focuses on two problems related to the copy complexity of Horn formulas with respect to unit read-once resolution. We first relate the copy complexity of Horn formulas with respect to unit read-once resolution to the copy complexity of the corresponding Horn constraint system with respect to the addition rule. We also examine a form of copy complexity in which we permit replication of derived clauses, in addition to the input clauses. Finally, we provide a polynomial time algorithm for the problem of checking if a 2-CNF formula has a UROR.
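As a hedged illustration of unit resolution on Horn clauses (a generic sketch restricted to positive unit clauses, which suffices for Horn refutations; the encoding and examples are assumptions of this note, not material from the paper, and the read-once and copy-complexity restrictions are deliberately ignored):

```python
# A sketch of unit resolution on Horn clauses, restricted to positive unit
# clauses (sufficient for Horn refutations). Encoding: a clause is (head, body)
# where head is the single positive atom or None and body is the set of
# atoms that occur negatively.

def unit_resolution_refutes(clauses):
    """Return True iff repeated unit resolution derives the empty clause."""
    clauses = {(h, frozenset(b)) for h, b in clauses}
    units = {h for h, b in clauses if h is not None and not b}
    changed = True
    while changed:
        changed = False
        for head, body in list(clauses):
            if body & units:
                # Resolve the clause against the derived positive unit clauses.
                new = (head, body - units)
                if new not in clauses:
                    clauses.add(new)
                    changed = True
                    if new == (None, frozenset()):
                        return True        # empty clause derived: refutation
                    if new[0] is not None and not new[1]:
                        units.add(new[0])  # a new positive unit clause
    return False

if __name__ == "__main__":
    # {p}, {q}, {~p, ~q} is unsatisfiable and gets refuted; {p, ~q} does not.
    print(unit_resolution_refutes([("p", []), ("q", []), (None, ["p", "q"])]))  # True
    print(unit_resolution_refutes([("p", ["q"])]))                              # False
```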
Article
The unification complexity of Horn clause programs is introduced and investigated for various classes of universal Horn formulas. A faithful simulation theorem is proved which associates with every k-tape Turing machine a Horn clause program requiring exactly as many unification steps as the Turing machine takes computation steps. From this it follows that Horn clause programs are computationally complete even in the case of bi-Horn (= Krom) formulas, and that the unification complexity of Horn clause programs is not recursively bounded. The faithful simulation theorem is also used to give a new interpretation to hierarchy theorems in the context of logic programming.
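Since the complexity measure above counts unification steps, a generic reminder of what a single such step does may help; the following is a standard sketch of syntactic first-order unification with occurs check (the term encoding and function names are assumptions of this note, not the paper's).

```python
# A minimal sketch of syntactic unification of first-order terms (illustrative;
# the term encoding is an assumption of this note). Variables are strings
# starting with an uppercase letter; compound terms are tuples
# (functor, arg1, ..., argn); constants are 0-ary compounds such as ("a",).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return not is_var(t) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None if none exists."""
    subst = dict(subst or {})
    stack = [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a) or is_var(b):
            v, u = (a, b) if is_var(a) else (b, a)
            if occurs(v, u, subst):
                return None          # occurs check fails
            subst[v] = u
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None              # functor or arity clash
    return subst

if __name__ == "__main__":
    # unify p(X, f(Y)) with p(a, f(b)): expect {X: a, Y: b}.
    print(unify(("p", "X", ("f", "Y")), ("p", ("a",), ("f", ("b",)))))
```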
Thesis
Full-text available
This thesis investigates the Google Dialogflow-driven development environment for logic programming "Speech and Logic IDE" (SLIDE). The application was migrated from Dialogflow to the Snips NLU library so that it can be used without an internet connection. As the main part of the work, the logical concepts of variables, recursion, and lists were implemented in the application. A naming convention was introduced that frees the application from rigid structures and, through recursive processing, allows arbitrarily complex structures to be modeled. The application was then examined in the context of lower secondary education (Sekundarstufe I). The questions addressed were: "Can SLIDE be used to convey knowledge to lower secondary students?", "Can SLIDE be used to teach lower secondary students the concepts of facts and rules?", "Can SLIDE be used to teach lower secondary students the concepts of variables, recursion, and lists?", and "Can SLIDE be used to convey knowledge outside the mathematical domain to lower secondary students?" To this end, two teaching examples were designed that deal with grammar and poetry in German language classes, two topics of the Lower Saxony core curriculum for lower secondary education. In designing the lessons, particular attention was paid to the newly introduced concepts. The second teaching example was carried out twice, in cooperation with the Projekthaus Zukunft MINT of Hochschule Hannover, with two different 10th-grade classes (comprehensive school and Gymnasium). The theoretical results of the thesis show that all of the questions can be answered with "yes": in the new version of SLIDE it is possible to model the new concepts, and it is possible to design lessons that convey this knowledge and relate to the core curriculum. The results of the field experiments, collected via questionnaires, are less conclusive, since the students were already at the end of lower secondary education and the designed content was therefore a repetition for them. Moreover, it must be acknowledged that many factors could not be taken into account in the survey. Consequently, no comprehensive conclusions can be drawn from the practical trials; an optimistic reading shows a general interest in the application on the part of the students. The experience suggests splitting the teaching content over several lessons, so that participants approach the new concepts with prior knowledge and can concentrate on them.
Chapter
This chapter provides a brief introduction to two main semantics of logic programs with negation, the stable-model semantics of Gelfond and Lifschitz, and the well-founded semantics of Van Gelder, Ross, and Schlipf. We present definitions, introduce basic results, and relate the two semantics to each other. We restrict attention to the syntax of normal logic programs and focus on classical results. However, throughout the chapter and in concluding remarks we briefly discuss generalizations of the syntax and extensions of the semantics, and mention several recent developments.
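As a small companion to the definitions surveyed in the chapter, here is a brute-force sketch of the Gelfond-Lifschitz stable-model test for ground normal programs (the encoding and the example program are assumptions of this note, not taken from the chapter):

```python
# A brute-force sketch of the stable-model test for ground normal programs
# (illustrative only). A rule is (head, positive_body, negative_body),
# with atoms represented as strings.
from itertools import chain, combinations

def least_model(definite_rules):
    """Least Herbrand model of a definite (negation-free) ground program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules blocked by the candidate model,
    then delete the remaining negative bodies."""
    return [(h, pos) for h, pos, neg in program
            if not any(a in candidate for a in neg)]

def stable_models(program):
    atoms = sorted({a for h, pos, neg in program for a in chain([h], pos, neg)})
    for r in range(len(atoms) + 1):
        for subset in combinations(atoms, r):
            candidate = set(subset)
            if least_model(reduct(program, candidate)) == candidate:
                yield candidate

if __name__ == "__main__":
    # p <- not q.   q <- not p.   r <- p.
    program = [("p", [], ["q"]), ("q", [], ["p"]), ("r", ["p"], [])]
    print(list(stable_models(program)))  # two stable models: {'q'} and {'p', 'r'}
```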
Article
Logic programming has been introduced as programming in the Horn clause subset of first-order logic. This view breaks down for the negation as failure inference rule. To overcome the problem, one line of research has been to view a logic program as a set of iff-definitions. A second approach was to identify a unique canonical, preferred, or intended model among the models of the program and to appeal to common sense to validate the choice of such model. Another line of research developed the view of logic programming as a nonmonotonic reasoning formalism strongly related to Default Logic and Autoepistemic Logic. These competing approaches have resulted in some confusion about the declarative meaning of logic programming. This paper investigates the problem and proposes an alternative epistemological foundation for the canonical model approach, which is not based on common sense but on a solid mathematical information principle. The thesis is developed that logic programming can be understood as a natural and general logic of inductive definitions. In particular, logic programs with negation represent nonmonotone inductive definitions. It is argued that this thesis results in an alternative justification of the well-founded model as the unique intended model of the logic program. In addition, it equips logic programs with an easy-to-comprehend meaning that corresponds very well with the intuitions of programmers.
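A standard example of this inductive-definition reading (added here for illustration; it is not quoted from the abstract) is the program defining the even numerals:
$$\mathit{even}(0). \qquad \mathit{even}(s(X)) \leftarrow \neg\, \mathit{even}(X).$$
Read as an induction over the natural numbers, it makes $\mathit{even}$ true of exactly $0, s(s(0)), s(s(s(s(0)))), \ldots$, and this is precisely its (total) well-founded model, in line with the paper's claim that the well-founded model captures the intended meaning of such nonmonotone inductive definitions.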
Conference Paper
It has been acknowledged that emerging Web applications require features that are not available in standard rule languages like Datalog or Answer Set Programming (ASP), e.g., they are not powerful enough to deal with anonymous values (objects that are not explicitly mentioned in the data but whose existence is implied by the background knowledge). In this paper, we introduce a new rule language based on ASP extended with function symbols, which can be used to reason about anonymous values. In particular, we define binary frontier-guarded programs (BFG programs) that allow for disjunction, function symbols, and negation under the stable model semantics. In order to ensure decidability, BFG programs are syntactically restricted by allowing at most binary predicates and by requiring rules to be frontier-guarded. BFG programs are expressive enough to simulate ontologies expressed in popular Description Logics (DLs), capture their recent non-monotonic extensions, and can simulate conjunctive query answering over many standard DLs. We provide an elegant automata-based algorithm to reason in BFG programs, which yields a 3ExpTime upper bound for reasoning tasks like deciding consistency or cautious entailment. Due to existing results, these problems are known to be 2ExpTime-hard.
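For intuition about anonymous values (a generic illustration in ordinary rule syntax with function symbols; it is not written in the paper's BFG syntax):
$$\mathit{hasParent}(X, \mathit{parentOf}(X)) \leftarrow \mathit{person}(X), \qquad \mathit{person}(\mathit{parentOf}(X)) \leftarrow \mathit{person}(X).$$
The term $\mathit{parentOf}(x)$ stands for a parent of $x$ that is never mentioned in the data, yet every stable model must contain it; this is the kind of value invention that function symbols add to ASP and that restrictions such as frontier-guardedness are designed to keep decidable.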
Article
We present the class FDNC of logic programs that allows for function symbols (F), disjunction (D), nonmonotonic negation under the answer set semantics (N), and constraints (C), while still retaining the decidability of the standard reasoning tasks. Thanks to these features, FDNC programs are a powerful formalism for rule-based modeling of applications with potentially infinite processes and objects, which also allows for common-sense reasoning in this context. This is evidenced, for instance, by tasks in reasoning about actions and planning: brave and open queries over FDNC programs capture the well-known problems of plan existence and secure (conformant) plan existence, respectively, in transition-based action domains. As for reasoning from FDNC programs, we show that consistency checking and brave/cautious reasoning tasks are ExpTime-complete in general, but have lower complexity under syntactic restrictions that give rise to a family of program classes. We also determine the complexity of open queries (i.e., with answer variables), for which deciding non-emptiness of answers is shown to be ExpSpace-complete under cautious entailment. Furthermore, we present algorithms for all reasoning tasks that are worst-case optimal. Most of them resort to a finite representation of the stable models of an FDNC program that employs maximal founded sets of knots, which are labeled trees of depth at most 1 from which each stable model can be reconstructed. Due to this property, reasoning over FDNC programs can in many cases be reduced to reasoning from knots. Once the knot representation for a program is derived (which can be done off-line), several reasoning tasks are no more expensive than in the function-free case, and some are even feasible in polynomial time. This knowledge compilation technique paves the way to potentially more efficient online reasoning methods, not only for FDNC but also for other formalisms.
Article
Throughout the centuries, the great themes of pure mathematics, which were conceived without thought of usefulness, have been transformed into essential tools for scientific understanding. This lecture is devoted to the theme that this transformation is now happening to mathematical logic, and that a subject of applied logic is emerging, akin in its range and power to classical applied mathematics. This adds further weight to the argument that the investment of intellectual and material capital in mathematical research pays a rich dividend.