# Linear Tabulated Resolution Based on Prolog Control Strategy



arXiv:cs/0003046v1 [cs.AI] 9 Mar 2000

Linear Tabulated Resolution Based on Prolog Control Strategy

Yi-Dong Shen∗

Department of Computer Science, Chongqing University, Chongqing 400044, P.R. China

Email: ydshen@cs.ualberta.ca

Li-Yan Yuan and Jia-Huai You

Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2H1

Email: {yuan, you}@cs.ualberta.ca

Neng-Fa Zhou

Department of Computer and Information Science, Brooklyn College

The City University of New York, New York, NY 11210-2889, USA

Email: zhou@sci.brooklyn.cuny.edu

Abstract

Infinite loops and redundant computations are long-recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG-resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog.

In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) it makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced, and it handles cuts as effectively as Prolog; (2) it is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented as an extension to any existing Prolog abstract machine such as the WAM or ATOAM.

Keywords: tabling, loop checking, resolution, Prolog.

∗ Work performed during a visit at the Department of Computing Science, University of Alberta, Canada.


1 Introduction

While Prolog has many distinct advantages, it suffers from some serious problems, among the best-known of which are infinite loops and redundant computations. Infinite loops cause users (especially less skilled users) to lose confidence in writing terminating Prolog programs, whereas redundant computations greatly reduce the efficiency of Prolog. The existing approaches to resolving these problems can be classified into two categories: loop checking and tabling.

Loop checking is a direct way to cut infinite loops. It locates nodes at which SLD-derivations step into a loop and prunes them from SLD-trees. Informally, an SLD-derivation G0 ⇒C1,θ1 G1 ⇒ ... ⇒Ci,θi Gi ⇒ ... ⇒Ck,θk Gk ⇒ ... is said to step into a loop at a node Nk labeled with a goal Gk if there is a node Ni (0 ≤ i < k) labeled with a goal Gi in the derivation such that Gi and Gk are sufficiently similar. Many loop checking mechanisms have been presented in the literature (e.g. [2, 7, 8, 14, 16, 18, 20]). However, no loop checking mechanism can be both (weakly) sound and complete, because the loop checking problem itself is undecidable in general, even for function-free logic programs [2].

The main idea of tabling is that during top-down query evaluation we store intermediate results of some subgoals and look them up to solve variants of those subgoals that occur later. Since no variant subgoals will be recomputed by applying the same set of program clauses, infinite loops can be avoided. As a result, termination can be guaranteed for bounded-term-size programs and redundant computations are substantially reduced [4, 6, 17, 20, 22].

There are many ways to formulate tabling, each leading to a tabulated resolution (e.g. OLDT-resolution [17], SLG-resolution [6], Tabulated SLS-resolution [4], etc.). However, although existing tabulated resolutions differ in one aspect or another, all of them rely on the so-called solution-lookup mode. That is, all nodes in a search tree/forest are partitioned into two subsets, solution nodes and lookup nodes; solution nodes produce child nodes using program clauses, whereas lookup nodes produce child nodes using answers in tables.

Our investigation shows that the principal disadvantage of the solution-lookup mode is that it makes tabulated resolutions non-linear. Let G0 ⇒C1,θ1 G1 ⇒ ... ⇒Ci,θi Gi be the current derivation, with Gi being the latest generated goal. A tabulated resolution is said to be linear^1 if it makes the next derivation step either by expanding Gi, resolving a subgoal in Gi against a program clause or a tabled answer, which yields Gi ⇒Ci+1,θi+1 Gi+1, or by expanding Gi−1 via backtracking. It is due to such non-linearity that the underlying tabulated resolutions cannot be implemented in the same way as SLD-resolution (Prolog) using a simple stack-based memory structure. Moreover, some strictly sequential operators such as cuts (!) may not be handled as easily as in Prolog. For instance, in the well-known tabulated resolution system XSB, clauses like

p(.) ← ...,t(.),!,...

^1 The concept of "linear" here is different from the one used for SL-resolution [9].


where t(.) is a tabled subgoal, are not allowed because the tabled predicate t occurs in the scope of a cut [11, 13].

The objective of our research is to establish a hybrid approach to resolving infinite loops and redundant computations and to develop a linear tabulated Prolog system. In this paper, we establish a theoretical framework for such a system, focusing on a linear tabulated resolution, TP-resolution (TP for Tabulated Prolog), for positive logic programs.

Remark 1.1 In this paper we will use the prefix TP to name some key concepts, such as TP-strategy, TP-tree, TP-derivation and TP-resolution, in contrast to the standard Prolog control strategy, Prolog-tree (i.e. the SLD-tree generated under Prolog-strategy), Prolog-derivation and Prolog-resolution (i.e. SLD-resolution controlled by Prolog-strategy), respectively.

In TP-resolution, each node in a search tree can act not only as a solution node but also as a lookup node, regardless of when and where it is generated. In fact, we do not distinguish between solution and lookup nodes in TP-resolution. This is an essential difference from existing tabulated resolutions using the solution-lookup mode. The main idea is as follows: for any selected tabled subgoal A at a node Ni labeled with a goal Gi, we always first use an answer I in a table to generate a child node Ni+1 (Ni acts as a lookup node), which is labeled by the resolvent of Gi and I; if no new answers are available in the table, we resolve A against program clauses to produce child nodes (Ni then acts as a solution node). The order in which answers in a table are used is first-generated-first-used, and the order in which program clauses are applied is from top to bottom, except for the case where the derivation steps into a loop at Ni. In such a case, the subgoal A skips the clause that is being used by its closest ancestor subgoal that is a variant of A. Like OLDT-resolution, TP-resolution is sound and complete for positive logic programs with the bounded-term-size property.
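The selection policy just described (tabled answers first, in first-generated-first-used order; then program clauses top-down, skipping the clause held by the closest variant ancestor when a loop is detected) can be sketched as follows. This is a minimal illustration in Python; the function and its arguments are hypothetical helpers, not part of the paper's formal machinery.

```python
# Hypothetical sketch of TP's alternative selection for a selected tabled
# subgoal at a node: tabled answers are consumed first-generated-first-used;
# only when the table offers nothing new are program clauses tried top-down,
# and inside a loop the clause in use by the closest variant ancestor is
# skipped.
def next_alternative(answers, answer_ptr, clauses, clause_ptr, ancestor_clause=None):
    if answer_ptr < len(answers):              # lookup: an unconsumed tabled answer
        return ("answer", answers[answer_ptr])
    for i in range(clause_ptr, len(clauses)):  # solution: clauses, top to bottom
        if clauses[i] != ancestor_clause:      # loop case: skip the ancestor's clause
            return ("clause", clauses[i])
    return None                                # no alternative left: backtrack

# A loop node whose variant ancestor is using the first clause, with no
# tabled answers yet, selects the second clause instead of re-entering the first.
assert next_alternative([], 0, ["Cr1", "Cr2", "Cr3"], 0, ancestor_clause="Cr1") == ("clause", "Cr2")
```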

The plan of this paper is as follows. In Section 2 we present a typical example to illustrate the main idea of TP-resolution and its key differences from existing tabulated resolutions. In Section 3, we formally define TP-resolution. In Section 3.1 we discuss how to represent tables and how to operate on them. In Section 3.2 we first introduce the so-called PMF mode for resolving tabled subgoals with program clauses, which lays the basis for a linear tabulated resolution. We then define a tabulated control strategy called TP-strategy, which enhances Prolog-strategy with proper policies for the selection of answers in tables. Next we present a constructive definition (an algorithm) of a TP-tree based on TP-strategy. Finally, based on TP-trees, we define TP-derivations and TP-resolution.

Section 4 is devoted to showing some major characteristics of TP-resolution, including its termination property and its soundness and completeness. We also discuss in detail how TP-resolution deals with the cut operator.

We assume familiarity with the basic concepts of logic programming, as presented in [10]. Here


and throughout, variables begin with a capital letter, and predicates, functions and constants with a lower case letter. By ?E we denote a list/tuple (E1,...,Em) of elements. Let ?X = (X1,...,Xm) be a list of variables and ?I = (I1,...,Im) a list of terms. By ?X/?I we denote a substitution {X1/I1,...,Xm/Im}. By p(.) we refer to any atom with the predicate p and by p(?X) to an atom p(.) that contains the list ?X of distinct variables. For instance, if p(?X) = p(W,a,f(Y),W), then ?X = (W,Y). Let G = ← A1,...,Am be a goal and B a subgoal. By G + B we denote the goal ← A1,...,Am,B. By a variant of an atom (resp. a subgoal or a term) A we mean an atom (resp. a subgoal or a term) A′ that is the same as A up to variable renaming.^2 Let V be a set of atoms (resp. subgoals or terms) that are variants of each other; then they are called variant atoms (resp. variant subgoals or variant terms). Moreover, clauses with the same head predicate p are numbered sequentially, with Cpi referring to the i-th clause (i > 0). Finally, unless otherwise stated, by a (logic) program we refer to a positive logic program with a finite set of clauses.

2 An Illustrative Example

We use the following simple program to illustrate the basic idea of the TP approach. For convenience of presentation, we choose OLDT-resolution [17] for a side-by-side comparison (other typical tabulated resolutions, such as SLG-resolution [6] and Tabulated SLS-resolution [4], have similar effects).

P1:
Cr1: reach(X,Y) ← reach(X,Z), edge(Z,Y).
Cr2: reach(X,X).
Cr3: reach(X,d).
Ce1: edge(a,b).
Ce2: edge(d,e).

Let G0 = ← reach(a,X) be the query (top goal). Then Prolog will step into an infinite loop right after the application of the first clause Cr1. We now show how it works using OLDT-resolution (under the depth-first control strategy). Starting from the root node N0 labeled with the goal ← reach(a,X), the application of the clause Cr1 gives a child node N1 labeled with the goal ← reach(a,Z),edge(Z,X) (see Figure 1). Since the subgoal reach(a,Z) is a variant of reach(a,X) that occurred earlier, it is suspended to wait for reach(a,X) to produce answers. N0 and N1 (resp. reach(a,X) and reach(a,Z)) are then called solution and lookup nodes (resp. subgoals), respectively. So the derivation goes back to N0 and resolves reach(a,X) with the second clause Cr2, which gives a sibling node N2 labeled with an empty clause □. Since reach(a,a) is an answer to the subgoal reach(a,X), it is memorized in a table, say TB(reach(a,X)). The derivation then jumps back to N1 and uses the answer reach(a,a) in the table to resolve with the lookup subgoal reach(a,Z), which gives a new node N3 labeled with ← edge(a,X). Next, the node N4 labeled with □ is derived from N3 by resolving the subgoal edge(a,X) with the clause Ce1. So the answer reach(a,b) is added to the table TB(reach(a,X)). After these steps, the OLDT-derivation evolves into a tree as depicted in Figure 1, which is clearly not linear.

^2 By this definition, A is a variant of itself.

N0: ← reach(a,X)
├─ Cr1 → N1: ← reach(a,Z),edge(Z,X)
│        └─ Z = a (get reach(a,a) from the table) → N3: ← edge(a,X)
│                 └─ Ce1 (add reach(a,b) to the table) → N4: □
└─ Cr2 (add reach(a,a) to the table) → N2: □

Figure 1: OLDT-derivation.

We now explain how TP-resolution works. Starting from the root node N0 labeled with the goal ← reach(a,X), we apply the clause Cr1 to derive a child node N1 labeled with the goal ← reach(a,Z),edge(Z,X) (see Figure 2). As the subgoal reach(a,Z) is a variant of reach(a,X) and the latter is an ancestor of the former (i.e., the derivation steps into a loop at N1 [14]), we choose Cr2, the clause at the backtracking point of the subgoal reach(a,X), to resolve with reach(a,Z), which gives a child node N2 labeled with ← edge(a,X). Since reach(a,a) is an answer to the subgoal reach(a,Z), it is memorized in a table TB(reach(a,X)). We then resolve the subgoal edge(a,X) against the clause Ce1, which gives the leaf N3 labeled with □. So the answer reach(a,b) to the subgoal reach(a,X) is added to the table TB(reach(a,X)). After these steps, we get a path as shown in Figure 2, which is clearly linear.

N0: ← reach(a,X)
└─ Cr1 → N1: ← reach(a,Z),edge(Z,X)
         └─ Cr2 (add reach(a,a) to the table) → N2: ← edge(a,X)
                  └─ Ce1 (add reach(a,b) to the table) → N3: □

Figure 2: TP-derivation.

Now consider backtracking. Remember that after the above derivation steps, the table TB(reach(a,X)) consists of two answers, reach(a,a) and reach(a,b). The OLDT approach first backtracks to N3 and then to N1 (Figure 1). Since the subgoal reach(a,Z) has used the first answer in the table before, it resolves with the second, reach(a,b), which gives a new node labeled with the goal ← edge(b,X). Obviously, this goal will fail, so OLDT backtracks to N1 again. This time no new answers in the table are available to the subgoal reach(a,Z), so it is suspended and the derivation goes to the solution node N0. The third clause Cr3 is then selected to resolve with the subgoal reach(a,X), yielding a new answer reach(a,d), which is added to the table. The derivation then goes back to N1, where the new answer is used in the same way as described before.

The TP approach does backtracking in the same way as the OLDT approach except for the following key differences: (1) Because we do not distinguish between solution and lookup nodes/subgoals, when no new answers in the table are available to the subgoal reach(a,Z) at N1, we backtrack the subgoal by resolving it against the next clause Cr3. This guarantees that TP-derivations are always linear. (2) Since there is a loop between N0 and N1, before failing the subgoal reach(a,X) at N0 via backtracking we need to be sure that the subgoal has obtained its complete set of answers. This is achieved by performing answer iteration via the loop. That is, we regenerate the loop to see if any new answers can be derived, until we reach a fixpoint. Figure 3 shows the first part of TP-resolution, where the following answers to G0 are derived: X = a, X = b, X = d and X = e. Figure 4 shows the answer iteration part. Since no new answer is derived during the iteration (i.e. no answer is added to any table), we fail the subgoal reach(a,X) at N0.

N0: ← reach(a,X)
└─ Cr1 → N1: ← reach(a,Z),edge(Z,X)
         ├─ Cr2 (add reach(a,a)) → N2: ← edge(a,X)
         │        └─ Ce1 (add reach(a,b)) → N3: □
         ├─ get reach(a,b) → N4: ← edge(b,X)
         ├─ Cr3 (add reach(a,d)) → N5: ← edge(d,X)
         │        └─ Ce2 (add reach(a,e)) → N6: □
         └─ get reach(a,e) → N7: ← edge(e,X)

Figure 3: TP-derivations of P1 ∪ {G0}.

N0: ← reach(a,X)
└─ Cr1 → N8: ← reach(a,Z),edge(Z,X)
         ├─ get reach(a,a) → N9: ← edge(a,X)
         │        └─ Ce1 → N10: □
         ├─ get reach(a,b) → N11: ← edge(b,X)
         ├─ get reach(a,d) → N12: ← edge(d,X)
         │        └─ Ce2 → N13: □
         └─ get reach(a,e) → N14: ← edge(e,X)

Figure 4: Answer iteration via a loop.
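The derivations of Figures 3 and 4 can be mimicked by a small fixpoint computation over the table TB(reach(a,X)): keep deriving answers from the clauses and from already tabled answers until the NEW flag stays unset for a whole pass. The sketch below is a hand-rolled Python simulation under that reading, not the TP-tree algorithm itself.

```python
# Simulation of answer iteration for P1 and the query reach(a,X):
# Cr2 and Cr3 contribute reach(a,a) and reach(a,d) directly; Cr1 consumes
# tabled answers through edge/2; iteration stops when no new answer appears.
edge = {("a", "b"), ("d", "e")}                 # Ce1, Ce2

def tp_answers():
    table = set()                               # TB(reach(a,X))
    while True:
        new = False                             # the global NEW flag
        for ans in [("a", "a"), ("a", "d")]:    # Cr2, Cr3
            if ans not in table:
                table.add(ans)
                new = True
        for (_, z) in list(table):              # Cr1: reach(a,Z), edge(Z,Y)
            for (u, y) in edge:
                if u == z and ("a", y) not in table:
                    table.add(("a", y))
                    new = True
        if not new:                             # fixpoint: fail reach(a,X)
            return {y for (_, y) in table}

assert tp_answers() == {"a", "b", "d", "e"}     # X = a, b, d, e
```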

Remark 2.1 From the above illustration, we see that in OLDT-resolution, solution nodes are those at which the left-most subgoals are generated earliest among all their variant subgoals. In SLG-resolution, however, solution nodes are roots of trees in a search forest, each labeled by a special clause of the form A ← A [5]. In Tabulated SLS-resolution, any root of a tree in a forest is itself labeled by an instance, say A ← B1,...,Bn (n ≥ 0), of a program clause, and no nodes in the tree produce child nodes using program clauses [3]. However, for any atom A we can assume a virtual super-root labeled with A ← A, which takes all the roots in the forest labeled by A ← ... as its child nodes. In this sense, the search forest in Tabulated SLS-resolution is the same as that in SLG-resolution for positive logic programs. Therefore, we can consider all virtual super-roots as solution nodes.

3 TP-Resolution

This section formally defines the TP approach to tabulated resolution, mainly including the representation of tables, the strategy for controlling tabulated derivations (TP-strategy), and the algorithm for making tabulated derivations based on the control strategy (TP-trees).

3.1 Tabled Predicates and Tables

Predicates in a program P are classified as tabled predicates and non-tabled predicates. The classification is made based on a dependency graph [1]. Informally, for any predicates p and q, there is an edge p → q in a dependency graph GP if there is a clause in P of the form p(.) ← ...,q(.),... Then a predicate p is to be tabled if GP contains a cycle through the node p.
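This classification can be sketched as a small graph computation. The encoding below (clauses as head/body predicate pairs) is our own illustration, assuming a predicate is tabled exactly when it can reach itself along one or more edges of GP.

```python
# Sketch: a predicate p is tabled iff it lies on a cycle of the dependency
# graph G_P, i.e. p can reach itself via one or more edges p -> q.
from collections import defaultdict

def tabled_predicates(clauses):
    # clauses: list of (head_predicate, [body_predicates])
    graph = defaultdict(set)
    for head, body in clauses:
        for q in body:
            graph[head].add(q)            # edge p -> q for p(.) <- ..., q(.), ...
    def on_cycle(p):
        stack, seen = list(graph[p]), set()
        while stack:                      # DFS from p's successors
            q = stack.pop()
            if q == p:
                return True               # came back to p: p is on a cycle
            if q not in seen:
                seen.add(q)
                stack.extend(graph[q])
        return False
    preds = {h for h, _ in clauses} | {q for _, b in clauses for q in b}
    return {p for p in preds if on_cycle(p)}

# Program P1 from Section 2: reach depends on itself, so only reach is tabled.
P1 = [("reach", ["reach", "edge"]),       # Cr1
      ("reach", []), ("reach", []),       # Cr2, Cr3
      ("edge", []), ("edge", [])]         # Ce1, Ce2
assert tabled_predicates(P1) == {"reach"}
```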

Any atom/subgoal with a tabled predicate is called a tabled atom/subgoal. During tabulated resolution, we will create a table for each tabled subgoal A. Apparently, the table must contain A (as an index) and have space to store intermediate answers of A. Note that in our tabling approach, any tabled subgoal can act both as a solution subgoal and as a lookup subgoal, so a table can be viewed as a blackboard on which a set of variant subgoals read and write answers. In order to guarantee not losing answers for any tabled subgoal (i.e. the table should contain all answers that A is supposed to have by applying its related clauses), while avoiding redundant computations (i.e. after a clause has been used by A, it should not be re-used by any other variant subgoal A′), a third component is needed in the table that keeps the status of the clauses related to A. Therefore, after a clause Ci has been used by A, we change its status. Then, when evaluating a new subgoal A′ that is a variant of A, Ci will be ignored because all answers of A derived via Ci have already been stored in the table. For any clause whose head is a tabled atom, its status can be "no longer available" or "still available." We say that Ci is "no longer available" to A if all answers of A through the application of Ci have already been stored in the table of A. Otherwise, we say Ci is "still available" to A. Finally, we need a flag variable COMP in the table to indicate whether all answers through the application of all clauses related to A have been completely stored in the table. This leads to the following definition.

Definition 3.1 Let P be a logic program and p(?X) a tabled subgoal. Let P contain exactly M clauses, Cp1,...,CpM, with a head p(.). A table for p(?X), denoted TB(p(?X)), is a four-tuple (p(?X),T,C,COMP), where

1. T consists of tuples that are instances of ?X, each ?I of which represents an answer, p(?X)?X/?I, to the subgoal.

2. C is a vector of M elements, with C[i] = 0 (resp. = 1) representing that the status of Cpi w.r.t. p(?X) is "no longer available" (resp. "still available").

3. COMP ∈ {0,1}, with COMP = 1 indicating that the answers of p(?X) have been completed.

For convenience, we use TB(p(?X)) → answer_tuple[i] to refer to the i-th answer tuple in T, TB(p(?X)) → clause_status[i] to the status of Cpi w.r.t. p(?X), and TB(p(?X)) → COMP to the flag COMP.

Example 3.1 Let P be a logic program that contains exactly three clauses, Cp1, Cp2 and Cp3, with a head p(.). The table

TB(p(X,Y)) : (p(X,Y), {(a,b),(b,a),(b,c)}, (1,0,0), 0)

represents that there are three answers to p(X,Y), namely p(a,b), p(b,a) and p(b,c), and that Cp2 and Cp3 have already been used by p(X,Y) (or its variant subgoals) while Cp1 is still available to p(X,Y). Obviously, the answers of p(X,Y) have not yet been completed. The table

TB(p(a,b)) : (p(a,b), {()}, (0,1,1), 1)

shows that p(a,b) has been proved true after applying Cp1. Note that since p(a,b) contains no variables, its answer is a 0-ary tuple. Finally, the table

TB(p(a,X)) : (p(a,X), {}, (0,0,0), 1)

represents that p(a,X) has no answer at all.

Before introducing operations on tables, we define the structure of nodes used in TP-resolution.

Definition 3.2 Let P be a logic program and Gi a goal ← p(?X),A2,...,Am. By "register a node Ni with Gi" we do the following: (1) label Ni with Gi, i.e. Ni: ← p(?X),A2,...,Am; and (2) create the following structure for Ni:

• answer_ptr, a pointer that points to an answer tuple in TB(p(?X)).
• clause_ptr, a pointer that points to a clause in P with a head p(.).
• clause_SUSP (initially = 0), a flag used for the update of clause status.
• node_LOOP (initially = 0), a flag showing whether Ni is a loop node.
• node_ITER (initially = 0), a flag showing whether Ni is an iteration node.
• node_ANC (initially = −1), a flag showing whether Ni has any ancestor variant subgoals.


For any field F in the structure of Ni, we refer to it by Ni → F. The meaning of Ni → answer_ptr and Ni → clause_ptr is obvious. The remaining fields will be defined by Definition 3.8, followed by the procedure nodetype_update(.). We are now ready to define operations on tables.

Definition 3.3 Let P be a logic program with M clauses with a head p(.), and let Ni be a node labeled by a goal ← p(?X),...,Am. Let NEW be a global flag variable used for answer iteration (see Algorithm 2 for details). We have the following basic operations on a table.

1. create(p(?X)). Create a table TB(p(?X)) : (p(?X),T,C,COMP), with T = {}, COMP = 0, and C[j] = 1 for all 1 ≤ j ≤ M.

2. memo(p(?X),?I), where ?I is an instance of ?X. When ?I is not in TB(p(?X)), add it to the end of the table, set NEW = 1, and if ?I is a variant of ?X, set TB(p(?X)) → COMP = 1.

3. lookup(Ni,?Ii). Fetch the next answer tuple in TB(p(?X)), which is pointed to by Ni → answer_ptr, into ?Ii. If there is no next tuple, ?Ii = null.

4. memo_look(Ni,p(?X),?I,θi). This is a compact operator, which combines memo(.) and lookup(.). That is, it first performs memo(p(?X),?I) and then gets the next answer tuple ?F from TB(p(?X)), which together with ?X forms a substitution θi = ?X/?F. If there is no next tuple, θi = null.

First, the procedure create(p(?X)) is called only when the subgoal p(?X) occurs for the first time and no variant subgoals occurred before. Therefore, up to the time when we call create(p(?X)), no clauses with a head p(.) in P have been selected by any variant subgoals of p(?X), so their status should be set to 1. Second, whenever an answer p(?I) of p(?X) is derived, we call the procedure memo(p(?X),?I). If the answer is new, it is appended to the end of the table. The flag NEW is then set to 1, showing that a new answer has been derived. If the new tuple ?I is a variant of ?X, which means that p(?X) is true for any instance of ?X, the answers of p(?X) are completed, so TB(p(?X)) → COMP is set to 1. Finally, lookup(Ni,?Ii) is used to fetch an answer tuple from the table for the subgoal p(?X) at Ni.

memo(.) and lookup(.) can be used independently. They can also be used in pairs, i.e. memo(.) immediately followed by lookup(.). In the latter case, it is more convenient to use memo_look(.).
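A minimal Python model of Definitions 3.1 and 3.3 may help fix the intended semantics. It is a sketch under simplifying assumptions: variance is modeled as exact tuple equality, the "variant of ?X" test is passed in as a flag, and a node is a bare dictionary holding answer_ptr.

```python
# Sketch of the four-tuple table (Definition 3.1) and the operations
# create/memo/lookup (Definition 3.3). All names mirror the paper's fields;
# the variant checks are deliberately simplified for illustration.
class Table:
    def __init__(self, subgoal, num_clauses):   # create(p(?X))
        self.subgoal = subgoal                  # p(?X), the table's index
        self.answers = []                       # T, kept in generation order
        self.clause_status = [1] * num_clauses  # C, 1 = "still available"
        self.comp = 0                           # COMP

NEW = 0                                         # global flag for answer iteration

def memo(tb, answer, is_variant_of_X=False):
    global NEW
    if answer not in tb.answers:                # only new answers are stored
        tb.answers.append(answer)               # append to the end of the table
        NEW = 1
        if is_variant_of_X:                     # p(?X) true for any instance of ?X
            tb.comp = 1

def lookup(tb, node):
    # node["answer_ptr"] realizes first-generated-first-used consumption
    if node["answer_ptr"] < len(tb.answers):
        ans = tb.answers[node["answer_ptr"]]
        node["answer_ptr"] += 1
        return ans
    return None                                 # no next tuple: ?Ii = null

tb = Table(("reach", "a", "X"), 3)
n = {"answer_ptr": 0}
memo(tb, ("a",)); memo(tb, ("b",)); memo(tb, ("a",))   # duplicate is ignored
assert [lookup(tb, n), lookup(tb, n), lookup(tb, n)] == [("a",), ("b",), None]
assert NEW == 1 and tb.comp == 0
```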

3.2 TP-Strategy and TP-Trees

In this subsection, we introduce the tabulated control strategy and the way tabulated derivations are made based on this strategy. We begin by discussing how to resolve subgoals with program clauses and with answers in tables.

Let Ni be a node labeled by a goal Gi = ← A1,...,Am with A1 = p(?X) a tabled subgoal. Consider evaluating A1 using a program clause Cp = A ← B1,...,Bn (n ≥ 0), where A1θ = Aθ.^3 If we used SLD-resolution, we would obtain a new node labeled with the goal Gi+1 = ← (B1,...,Bn,A2,...,Am)θ, where the mgu θ is consumed by all the Aj (j > 1), although the proof of A1θ has not yet been completed (produced). In order to avoid this kind of pre-consumption, we propose the so-called PMF (for Prove-Memorize-Fetch) mode for resolving tabled subgoals with clauses. That is, we first prove (B1,...,Bn)θ. If it is true with an mgu θ1, which means A1θθ1 is true, we memorize the answer A1θθ1 in the table TB(A1) if it is new. We then fetch an answer from TB(A1) to apply to the remaining subgoals of Gi. Obviously, modifying SLD-resolution with the PMF mode preserves the original answers to Gi. Moreover, since only new answers are added to TB(A1), all repeated answers of A1 are precluded from being applied to the remaining subgoals of Gi, so that redundant computations are avoided.

The PMF mode can readily be realized by using the two table procedures, memo(.) and lookup(.), or by using the compact operator memo_look(.). That is, after resolving the subgoal A1 with the clause Cp, Ni gets a child node Ni+1 labeled with the goal

Gi+1 = ← (B1,...,Bn)θ, memo_look(Ni,p(?X),?Xθ,θi), A2,...,Am.

Note that the application of θ is blocked by the subgoal memo_look(.) because the consumption (fetch) must follow the production (prove and memorize). We now explain how this works.

(fetch) must follow the production (prove and memorize). We now explain how it works.

Assume that after some resolution steps from Ni+1we reach a node Nkthat is labeled by the

goal Gk=← memo look(Ni,p(?X),?Xθθ1,θi),A2,...,Am. This means that (B1,...,Bn)θ has been

proved true with the mgu θ1. That is, A1θθ1is an answer of A1. By the left-most computation

rule, memo look(Ni,p(?X),?Xθθ1,θi) is executed, which adds to the table TB(A1) the answer tuple

?Xθθ1if it is new, gets from TB(A1) the next tuple?I, and then sets θi=?X/?I. Since A1θiis an

answer to the subgoal A1of Gi, the mgu θineeds to be applied to the remaining Ajs of Gi. We

distinguish between two cases.

(1) From A2to Am, Aj= memo look(Nf,B, ,θf) is the first subgoal of the form memo look(.).

According to the PMF mode, there must be a node Nf, which occurred earlier than Ni,

labeled with a goal Gf =← B,Aj+1,...,Am such that B is a tabled subgoal and Aj =

memo look(Nf,B, ,θf) resulted from resolving B with a program clause. This means that

the proof of B is now reduced to the proof of (A2,...,Aj−1)θi. Therefore, by the PMF mode

θishould be applied to the subgoals A2until Aj. That is, Nkhas a child node Nk+1labeled

with a goal Gk+1=← (A2,...,Aj)θi,Aj+1,...,Am.

(2) For no j ≥ 2 Aj is of the form memo look(.). This means that no Aj is a descendant of

any tabled subgoal, so the mgu θi should be applied to all the Ajs. That is, Gk+1 =←

(A2,...,Am)θi.

Note that by Definition 3.3 the atom p(?X) in memo(p(?X),_) and memo_look(_,p(?X),_,_) is merely used to index the table TB(p(?X)), so it cannot be instantiated during the resolution. That is, for any mgu θ, memo(p(?X),?I)θ = memo(p(?X),?Iθ) and memo_look(Ni,p(?X),?I,θi)θ = memo_look(Ni,p(?X),?Iθ,θi).

^3 Here and throughout, we assume that Cp has been standardized apart to share no variables with Gi.

The above discussion shows how to resolve the tabled subgoal A1 at Ni against a program clause using the PMF mode. The same principle applies to resolving A1 with an answer tuple ?I in TB(A1) and to resolving A1 with a program clause when A1 is a non-tabled subgoal. Therefore, we have the following definition of resolvents for TP-resolution.

Definition 3.4 Let Ni be a node labeled by a goal Gi = ← A1,...,Am (m ≥ 1).

1. If A1 is memo_look(Nh,p(?X),?I,θh), then the resolvent of Gi and θh (θh ≠ null) is the goal Gi+1 = ← (A2,...,Ak)θh, Ak+1,...,Am, where Ak (k > 1) is the left-most subgoal of the form memo_look(.).

Otherwise, let A1 = p(?X) and let Cp be a program clause A ← B1,...,Bn with Aθ = A1θ.

2. If A1 is a non-tabled subgoal, the resolvent of Gi and Cp is the goal Gi+1 = ← (B1,...,Bn,A2,...,Ak)θ, Ak+1,...,Am, where Ak is the left-most subgoal of the form memo_look(.).

3. If A1 is a tabled subgoal, the resolvent of Gi and Cp is the goal Gi+1 = ← (B1,...,Bn)θ, memo_look(Ni,p(?X),?Xθ,θi), A2,...,Am.

4. If A1 is a tabled subgoal and ?I (?I ≠ null) is an answer tuple in TB(A1), then the resolvent of Gi and ?I is the goal Gi+1 = ← (A2,...,Ak)?X/?I, Ak+1,...,Am, where Ak is the left-most subgoal of the form memo_look(.).

We now discuss tabulated control strategies. Recall that Prolog implements SLD-resolution by sequentially searching an SLD-tree using the Prolog control strategy (Prolog-strategy, for short): Depth-first (for goal selection) + Left-most (for subgoal selection) + Top-down (for clause selection) + Last-first (for backtracking). Let “register a node Ni with Gi” be as defined by Definition 3.2, except that the structure of Ni only contains the pointer clause_ptr. Let return(?Z) be a procedure that returns ?Z when ?Z ≠ () and YES otherwise. Then the way Prolog makes SLD-derivations based on Prolog-strategy can be formulated as follows.

Definition 3.5 (Algorithm 1) Let P be a logic program and G0 a top goal with the list ?Y of variables. The Prolog-tree TG0 of P ∪ {G0} is constructed by recursively performing the following steps until the answer NO is returned.

1. (Root node) Register the root N0 with G0 + return(?Y) and goto 2.

2. (Node expansion) Let Ni be the latest registered node labeled by Gi = ← A1,...,Am (i ≥ 0, m > 0). Register Ni+1 as a child of Ni with Gi+1 if Gi+1 can be obtained as follows.


• Case 1: A1 is return(.). Execute the procedure return(.), set Gi+1 = □ (the empty clause), and goto 3 with N = Ni.

• Case 2: A1 is an atom. Get a program clause A ← B1,...,Bn (top-down via the pointer Ni → clause_ptr) such that A1θ = Aθ. If no such clause exists, then goto 3 with N = Ni; else set Gi+1 = ← (B1,...,Bn, A2,...,Am)θ and goto 2.

3. (Backtracking) If N is the root, then return NO; else goto 2 with N's parent node as the latest registered node.
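Algorithm 1 is ordinary recursive backtracking search. As an illustration only (the names and the propositional program representation below are this sketch's assumptions, not the paper's data structures), the following Python sketch shows how Depth-first + Left-most + Top-down + Last-first selection falls out of a simple recursion, whose call stack plays the role of the stack-based memory structure:

```python
# Illustrative propositional sketch of Algorithm 1 (Prolog-strategy).
# Program representation (dict: atom -> list of clause bodies) is an
# assumption of this sketch.

def solve(program, goal):
    """Return True iff the goal (a list of atoms) succeeds."""
    if not goal:                         # empty clause: success
        return True
    a1, rest = goal[0], goal[1:]         # Left-most subgoal selection
    for body in program.get(a1, []):     # Top-down clause selection
        if solve(program, body + rest):  # Depth-first; recursion = stack
            return True
    return False                         # all clauses failed: backtrack

# p :- q.  p :- r.  q :- s.  r.   (s has no clause, so q fails)
prog = {"p": [["q"], ["r"]], "q": [["s"]], "r": [[]]}
print(solve(prog, ["p"]))   # True, found via the second clause for p
```

Feeding this sketch the one-clause program {p: [[p]]} would recurse forever, which is exactly the infinite-loop weakness of Prolog-strategy discussed next.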

Let STG0 be the SLD-tree of P ∪ {G0} via the left-most computation rule.4 It is easy to prove that when P has the bounded-term-size property [19] and STG0 contains no infinite loops, Algorithm 1 is sound and complete in that TG0 = STG0. Moreover, Algorithm 1 has two distinct advantages: (1) since SLD-resolution is linear, Algorithm 1 can be efficiently implemented using a simple stack-based memory structure; (2) due to its linearity and regular sequentiality, some useful control mechanisms, such as the well-known cut operator !, can be used to heuristically reduce the search space. Unfortunately, Algorithm 1 suffers from two serious problems. One is that it easily gets into infinite loops, even for very simple programs such as P = {p(X) ← p(X)}, which makes it incomplete in many cases. The other is that it unnecessarily re-applies the same set of clauses to variant subgoals, such as in the query ← p(X),p(Y), which leads to unacceptable performance.

As tabling has a distinct advantage in resolving infinite loops and redundant derivations, an interesting question arises: can we enhance Algorithm 1 with tabling, making it free of infinite loops and redundant computations while preserving the above two advantages? In the rest of this subsection, we give a constructive answer to this question. We first discuss how to enhance Prolog-strategy with tabling.

Observe that in a tabling system we have both program clauses and tables. For convenience, we refer to answer tuples in tables as tabled facts. Therefore, in addition to the existing policies in Prolog-strategy, we need the following two additional policies: (1) when both program clauses and tabled facts are available, first use tabled facts (i.e., Table-first for program and table selection); (2) when more than one tabled fact is available, first use the one that was memorized earliest. Since we always add new answers to the end of tables (see Definition 3.3 for memo(.)), policy (2) amounts to Top-down selection of tabled facts. This leads to the following control strategy for tabulated derivations.
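The two policies can be pictured concretely. In this Python sketch (the names Table, memo, and candidates are hypothetical stand-ins, not the paper's data structures), new answers are appended to the end of a subgoal's table, and alternatives for a tabled subgoal are enumerated table-first and top-down:

```python
# Sketch of the Table-first + Top-down policies for tabled facts.
# All names here are illustrative, not the paper's structures.

class Table:
    def __init__(self):
        self.answers = []        # tabled facts, in memorization order
        self._seen = set()

    def memo(self, ans):
        """Add a new answer to the END of the table (cf. memo(.))."""
        if ans not in self._seen:
            self._seen.add(ans)
            self.answers.append(ans)

def candidates(table, clauses):
    """Alternatives for a tabled subgoal: tabled facts first
    (top-down, i.e. earliest-memorized first), then clauses."""
    for a in table.answers:
        yield ("fact", a)
    for c in clauses:
        yield ("clause", c)

t = Table()
t.memo("p(a)"); t.memo("p(b)"); t.memo("p(a)")   # duplicate is ignored
print(list(candidates(t, ["p(X) :- q(X)"])))
# [('fact', 'p(a)'), ('fact', 'p(b)'), ('clause', 'p(X) :- q(X)')]
```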

Definition 3.6 By TP-strategy we mean: Depth-first (for goal selection) + Left-most (for subgoal

selection) + Table-first (for program and table selection) + Top-down (for the selection of tabled

facts and program clauses) + Last-first (for backtracking).

4 In [17], it is called an OLD-tree.


Our goal is to extend Algorithm 1 to make linear tabulated derivations based on TP-strategy.

To this end, we need to review a few concepts concerning loop checking.

Definition 3.7 ([14] with slight modification) An ancestor list ALA of pairs (N,B) is associated with each tabled subgoal A at a node Ni in a tree (see the TP-tree below), defined recursively as follows.

1. If A is at the root, then ALA = {}.

2. If A is inherited (by copying or instantiation) from a subgoal A′ at its parent node, then ALA = ALA′.

3. Let A be in the resolvent of a subgoal B at Nf against a clause B′ ← A1,...,An with Bθ = B′θ (i.e. A = Aiθ for some 1 ≤ i ≤ n). If B is a tabled subgoal, ALA = ALB ∪ {(Nf,B)}; otherwise ALA = {}.
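The three cases of Definition 3.7 transcribe almost directly into code. In this illustrative Python sketch, an ancestor list is a set of (node, subgoal) pairs; the function name and argument shapes are this sketch's assumptions:

```python
# Sketch of Definition 3.7: computing the ancestor list AL_A of a
# tabled subgoal A. Representation (a set of pairs) is illustrative.

def ancestor_list(parent_al, resolved_via=None):
    """parent_al: for cases 1-2, the inherited list (root: empty set);
    for case 3, AL_B of the resolved subgoal B.
    resolved_via: (N_f, B, B_is_tabled) in case 3, else None."""
    if resolved_via is None:                  # cases 1 and 2:
        return set(parent_al)                 # inheritance copies the list
    n_f, b, b_is_tabled = resolved_via
    if b_is_tabled:                           # case 3, B tabled:
        return set(parent_al) | {(n_f, b)}    # AL_A = AL_B u {(N_f, B)}
    return set()                              # case 3, B non-tabled

# B = p at node 1 is tabled and its clause body contributes A:
al_b = {(0, "q")}                             # B's own ancestor list
al_a = ancestor_list(al_b, resolved_via=(1, "p", True))
print(sorted(al_a))   # [(0, 'q'), (1, 'p')]
```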

We see that for any tabled subgoals A and A′, if A is in the ancestor list of A′, i.e. (_,A) ∈ ALA′, the proof of A needs the proof of A′. In particular, if (_,A) ∈ ALA′ and A′ is a variant of A, the derivation goes into a loop. This leads to the following.

Definition 3.8 Let Gi at Ni and Gk at Nk be two goals in a derivation, and let Ai and Ak be the left-most subgoals of Gi and Gk, respectively. We say Ai (resp. Ni) is an ancestor subgoal of Ak (resp. an ancestor node of Nk) if (Ni,Ai) ∈ ALAk. If Ai is both an ancestor subgoal and a variant of Ak, i.e. an ancestor variant subgoal of Ak, we say the derivation goes into a loop, denoted L(Ni,Nk). Then Nk and all its ancestor nodes involved in the loop are called loop nodes, and Ni is called the top loop node of the loop. Finally, a loop node is called an iteration node if, by the time the node is about to fail through backtracking, it is the top loop node of all previously generated loops containing it.

Example 3.2 Figure 5 shows four loops, L1, ..., L4, with N1, ..., N4 their respective top loop nodes. We see that only N1 and N4 are iteration nodes.

Information about the types and ancestors of nodes is the basis on which we make tabulated resolution. Such information is kept in the structure of each node Ni (see Definition 3.2). The flag Ni → node_LOOP = 1 shows that Ni is a loop node. The flag Ni → node_ITER = 1 shows that Ni is a (candidate) iteration node. Let A1 = p(?X) be the left-most subgoal at Ni. The flag Ni → node_ANC = −1 represents that it is unknown whether A1 has any ancestor variant subgoal; Ni → node_ANC = 0 shows that A1 has no ancestor variant subgoal; and Ni → node_ANC = j (j > 0) indicates that A1 has ancestor variant subgoals and that Cpj is the clause that is being used


[Figure 5 here: a derivation tree containing four loops L1, L2, L3, L4 with top loop nodes N1, N2, N3, N4; the diagram itself is not reproducible in this text version.]

Figure 5: Loops, top loop nodes and iteration nodes.

by its closest ancestor variant subgoal (i.e., let Ah at Nh be the closest ancestor variant subgoal of A1; then Ni → node_ANC = j represents that Ni is derived from Nh via Cpj).

Once a loop, say L(N1,Nm), of the form

(N1: ← A1,...) →Cpj,θ1 (N2: ← A2,...) → ... → (Nm: ← Am,...)

occurs, where all Ni (i < m) are ancestor nodes of Nm and A1 = p(?X) is the closest ancestor variant subgoal of Am, we update the flags of all nodes N1,...,Nm involved in the loop by calling the following procedure.

Procedure nodetype_update(L(N1,Nm))

(1) For all i > 1, set Ni → node_LOOP = 1 and Ni → node_ITER = 0.

(2) If N1 → node_LOOP = 0, set N1 → node_LOOP = 1 and N1 → node_ITER = 1.

(3) Set Nm → node_ANC = j.

(4) For all i < m, set Ni → clause_SUSP = 1.
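The procedure translates nearly line for line into code. In this sketch the Node class is a hypothetical stand-in for the node structure of Definition 3.2:

```python
# Near-literal sketch of nodetype_update(L(N1, Nm)); the Node class
# stands in for the node structure of Definition 3.2.

class Node:
    def __init__(self):
        self.node_LOOP = 0     # 1 iff this is a loop node
        self.node_ITER = 0     # 1 iff a (candidate) iteration node
        self.node_ANC = -1     # clause index used by closest ancestor variant
        self.clause_SUSP = 0   # 1 iff current clause stays "still available"

def nodetype_update(loop_nodes, j):
    """loop_nodes = [N1, ..., Nm], with N1 the top loop node;
    j = index of the clause C_pj being used by N1."""
    n1, nm = loop_nodes[0], loop_nodes[-1]
    for ni in loop_nodes[1:]:              # (1) non-top loop nodes
        ni.node_LOOP, ni.node_ITER = 1, 0
    if n1.node_LOOP == 0:                  # (2) N1 becomes a candidate
        n1.node_LOOP, n1.node_ITER = 1, 1  #     iteration node
    nm.node_ANC = j                        # (3) remember the skipped clause
    for ni in loop_nodes[:-1]:             # (4) suspend clause disposal
        ni.clause_SUSP = 1

nodes = [Node() for _ in range(3)]         # a loop N1 -> N2 -> N3
nodetype_update(nodes, j=2)
print(nodes[0].node_ITER, nodes[-1].node_ANC)   # 1 2
```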

Point (1) is straightforward: since N1 is the top loop node of L(N1,Nm), none of the remaining nodes in the loop can be an iteration node (see Definition 3.8).

If N1 → node_LOOP = 0, meaning that N1 is not involved in any loop that occurred before, N1 is considered a candidate iteration node (point (2)). A candidate iteration node becomes an iteration node if it keeps its candidacy by the time it is about to fail through backtracking (by that time it must be the top loop node of all previously generated loops containing it).

Since A1 is the closest ancestor variant subgoal of Am and Cpj is the clause that is being used by A1, we set the flag Nm → node_ANC = j (point (3)).

As mentioned in Section 2, during TP-resolution when a loop L(N1,Nm) occurs, where the left-most subgoal A1 = p(?X) at N1 is the closest ancestor variant subgoal of the left-most subgoal Am at Nm, Am will skip the clause Cpj that is being used by A1. To ensure that such a skip does not lead to the loss of answers to A1, we perform answer iteration before failing N1 via backtracking, until we reach a fixpoint of answers. Answer iteration is done by regenerating L(N1,Nm). This requires keeping the status of all clauses being used by the loop nodes as “still available” during backtracking; point (4) serves this purpose. After the flag Ni → clause_SUSP is set to 1, indicating that Ni is currently involved in a loop, the status of the clause currently being used by Ni will not be set to “no longer available” when backtracking over Ni (see Case B3 of Algorithm 2).

Remark 3.1 We do answer iteration only at iteration nodes because they are the top nodes of all loops involving them. If we did answer iteration at a non-iteration loop node N, we would have to do it again at some top loop node Ntop of N in order to reach a fixpoint at Ntop (see Figure 5). This would certainly lead to more redundant computations.
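The fixpoint character of answer iteration can be sketched as follows. Here regenerate_loop is a hypothetical stand-in for re-deriving answers from the loop against the current table contents, and the reachability example is purely illustrative:

```python
# Sketch of answer iteration at an iteration node: regenerate the
# loop until the answer table stops growing (a fixpoint).
# regenerate_loop is a hypothetical one-pass answer generator.

def answer_iteration(table, regenerate_loop):
    """table: set of answers so far. Iterate until no new answers."""
    while True:
        new = regenerate_loop(table) - table
        if not new:            # fixpoint: the node may now safely fail
            return table
        table = table | new

# Toy loop: reach(a).  reach(Y) :- edge(X, Y), reach(X).  Edges a->b->c.
edges = {("a", "b"), ("b", "c")}

def one_pass(t):
    return t | {y for (x, y) in edges if x in t}

print(sorted(answer_iteration({"a"}, one_pass)))   # ['a', 'b', 'c']
```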

We are now in a position to define the TP-tree, which is constructed based on the TP-strategy

using the following algorithm.

Definition 3.9 (Algorithm 2) Let P be a logic program and G0 a top goal with the list ?Y of variables. The TP-tree TPG0 of P ∪ {G0} is constructed by recursively performing the following steps until the answer NO is returned.

1. (Root node) Register the root N0 with G0 + return(?Y), set NEW = 0, and goto 2.

2. (Node expansion) Let Ni be the latest registered node labeled by Gi = ← A1,...,Am (m > 0). Register Ni+1 as a child of Ni with Gi+1 if Gi+1 can be obtained as follows.

• Case 1: A1 is return(.). Execute the procedure return(.), set Gi+1 = □ (the empty clause), and goto 3 with N = Ni.

• Case 2: A1 is memo_look(Nh,p(?X),?I,θh). Execute the procedure.5 If θh = null then goto 3 with N = Ni; else set Gi+1 to the resolvent of Gi and θh and goto 2.

• Case 3: A1 is a non-tabled subgoal. Get a clause C whose head is unifiable with A1.6 If no such clause exists then goto 3 with N = Ni; else set Gi+1 to the resolvent of Gi and C and goto 2.

• Case 4: A1 = p(?X) is a tabled subgoal. Get an instance ?I of ?X from the table TB(A1). If ?I ≠ null then set Gi+1 to the resolvent of Gi and ?I and goto 2. Otherwise, if TB(A1) → COMP = 1 then goto 3 with N = Ni; else

5 See Definition 3.3, where the flags NEW and TB(p(?X)) → COMP will be updated.

6 Here and throughout, clauses and answers in tables are selected top-down via the pointers Ni → clause_ptr and Ni → answer_ptr, respectively.

