# Active logic semantics for a single agent in a static world

Please cite this article in press as: M.L. Anderson et al., Active logic semantics for a single agent in a static world, Artificial Intelligence (2007), doi:10.1016/j.artint.2007.11.005

www.elsevier.com/locate/artint


Michael L. Anderson ^(a,d,∗), Walid Gomaa ^(b,e), John Grant ^(b,c), Don Perlis ^(a,b)

^a Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA
^b Department of Computer Science, University of Maryland, College Park, MD 20742, USA
^c Department of Mathematics, Towson University, Towson, MD 21252, USA
^d Department of Psychology, Franklin & Marshall College, Lancaster, PA 17604, USA
^e Department of Computer and Systems Engineering, Alexandria University, Alexandria, Egypt

Received 7 December 2006; received in revised form 14 November 2007; accepted 16 November 2007

Abstract

For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an "apperception function" that represents an agent's limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets. Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Logic; Active logic; Nonmonotonic logic; Paraconsistent logic; Semantics; Soundness; Brittleness; Autonomous agents; Time

1. Introduction

Real agents have some important characteristics that we need to take into account when thinking about how they might actually reason logically: (a) their reasoning takes time, meaning that agents always have only a limited, evolving awareness of the consequences of their own beliefs,¹ and (b) their knowledge is imperfect, meaning that some of their beliefs will need to be modified or retracted, and they will inevitably face direct contradictions and other inconsistencies. Indeed, real agents will not only often find their beliefs contradicted by experience, but will sometimes find that their beliefs have been internally inconsistent for some time, although they are only now in a position to notice this inconsistency, having derived a certain set of consequences that makes it apparent. The challenge from the standpoint of classical logical formalisms is that, if an agent's knowledge base can be inconsistent, then according to classical logic, it is permissible to derive any formula from it.

∗ Corresponding author at: Department of Psychology, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604-3003, USA. E-mail addresses: michael.anderson@fandm.edu (M.L. Anderson), wgomaa@alex.edu.eg (W. Gomaa), jgrant@towson.edu (J. Grant), perlis@cs.umd.edu (D. Perlis).

¹ Levesque's distinction between explicit and implicit beliefs [29] points to this same issue; however, our approach is precisely to model the evolving awareness itself, rather than trying to model the full set of (implicit) consequences of a given belief set.

This fact about classical logics is commonly known by the Latin phrase ex contradictione quodlibet: from a contradiction everything follows. However, Graham Priest has coined the somewhat more vivid term explosive logics: a logic is explosive iff for all formulas A and B, (A&¬A) |= B. Priest defines a paraconsistent logic precisely as one which is not explosive [40–42]. Now, clearly real agents cannot tolerate the promiscuity of belief resulting from explosive logics, and must somehow maintain control over their reasoning, watching for and dealing with contradictions as they arise. The reasoning of real agents, that is, must be paraconsistent. But what sort of paraconsistent logic might agents usefully employ, what methods might agents use to control inference and deal with contradictions, and how can these logics (and methods) be modeled in terms of truth and consequence in structures?
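For readers who want the explosion spelled out, here is the standard classical derivation (ordinary natural-deduction rules, not specific to this paper) of an arbitrary B from A & ¬A; blocking explosion requires restricting at least one of these steps:

```latex
\begin{align*}
1.\;& A \wedge \neg A && \text{premise}\\
2.\;& A               && \text{from 1, } \wedge\text{-elimination}\\
3.\;& \neg A          && \text{from 1, } \wedge\text{-elimination}\\
4.\;& A \vee B        && \text{from 2, } \vee\text{-introduction}\\
5.\;& B               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```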

In the current paper we are primarily interested in the last of these questions. For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA) [46]. Because ALMA was designed with the above challenges in mind, its underlying formalism, active logic [17,18,33,34], includes special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning.

Here we offer a semantics for a general version of active logic. We hope and expect it will be of interest as a specific model of formal reasoning for real-world agents that have to face both the relentlessness of time and the inevitability of contradictions.

In Sections 2–6 we introduce the formal semantics for active logic, discuss a new definition of the consequence relation, and give examples of sound and unsound active logic inferences. This is followed by a more informal discussion of the various properties of active logic (Section 7), a comparison of active logic with related approaches (Section 8), and a discussion of the practical issues involved in the use of active logic in real-world agents (Sections 9 and 10).

2. A semantics for real-world reasoning

In this section we propose a semantics for a time-sensitive, contradiction-tolerant reasoning formalism, incorporating the basic features of active logic.

2.1. Starting assumptions

In order to make the problem tractable for our first specification of the semantics, we will work under the following assumptions concerning the agent, the world (i.e., everything apart from the agent), and their interactions:

• There is only one agent a.
• The agent starts its life at time t = 0 (t ∈ N) and runs indefinitely.
• The world is stationary for t ≥ 0. Thus, changes occur only in the beliefs of the agent a.

Given these assumptions, there is one and only one true complete theory of the world; however, given that the agent's beliefs evolve over time, there is a different true complete theory of the agent for each time t.

2.2. The language L

In order to express theories about such an agent-and-world, we define a sorted first-order language L. We define it in two parts: the language L_w, a propositional language in which will be expressed facts about the world, and the language L_a, a first-order language used to express facts about the agent, including the agent's beliefs, for instance that the agent's time is now t, that the agent believes P, or that the agent discovered a contradiction in its beliefs at a given time. We write Sn(K) to mean the set of sentences of any language K. We use the complete set of connectives {¬, →}, from which other connectives, such as ∧ and ∨, can be derived. We assume that double negations are removed from formulas. For the sentence symbols, the subscripts are used to indicate different propositional sentences and, for a fixed subscript, the superscripts are used to indicate different apperceptions (see Section 4) of the agent of the same proposition. The superscript 0 is used for the original sentence symbol (without superscript).

Definition 1. Let L_w be a propositional language consisting of the following symbols:

• a set S of sentence symbols (propositional or sentential variables) S = {S_i^j : i, j ∈ N} (N is the set of natural numbers)
• the propositional connectives ¬ and →
• left and right parentheses ( and )

Sn(L_w) is the set of sentences of L_w formed in the usual way. These represent the propositional beliefs of the agent about the world. For instance, S_1^0 might mean "John is happy". For later use we assume there is a fixed lexicographic ordering for the sentences in Sn(L_w).

Definition 2. Let σ, θ ∈ Sn(L_w). We say that {σ, θ} is a direct contradiction if one of the following holds: either θ is the formula σ preceded by a negation, or σ is the formula θ preceded by a negation; that is, θ = ¬σ or σ = ¬θ.
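Definition 2 is purely syntactic and easy to operationalize. The sketch below assumes a tuple encoding of L_w sentences that is our own convention, not the paper's: the atom S_i^j is `("S", i, j)`, negation is `("not", p)`, and implication is `("imp", p, q)`.

```python
# Sketch of Definition 2, under an assumed tuple encoding of L_w sentences:
# the atom S_i^j is ("S", i, j); negation is ("not", p); implication is
# ("imp", p, q). Double negations are assumed already removed, as in the text.

def is_direct_contradiction(sigma, theta):
    """{sigma, theta} is a direct contradiction iff theta = ¬sigma or sigma = ¬theta."""
    return theta == ("not", sigma) or sigma == ("not", theta)

S1 = ("S", 1, 0)                                       # S_1^0, e.g. "John is happy"
print(is_direct_contradiction(S1, ("not", S1)))        # True
print(is_direct_contradiction(("not", S1), S1))        # True
print(is_direct_contradiction(S1, ("imp", S1, S1)))    # False
```

Note that the check is on surface syntax only; contradictions hidden behind different superscripts are the subject of the apperception machinery introduced in Section 4.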

Before giving the definition of the language L_a, we remark the following:

i. In its current version L_a is a restricted form of first-order logic that is essentially propositional. In future work we intend to extend it to the full power of first-order logic.
ii. L_a contains a belief predicate that captures the fact that the agent believed a certain proposition at some time t. We allow for sentences of the form: at time s the agent believed that she believed that she .... To allow for this indefinite (however finite) nesting, the definition of L_a has to be inductive, where at stage n+1 all sentences from the previous levels are captured for the belief predicate.

Definition 3. The language L_a is a sorted restricted version of first-order logic having three sorts:

• S1 is used to represent Sn(L_w).
• S2 is used to represent time.
• S3 is used to represent Sn(L_a) at its various stages of construction as shown below.

L_a will be defined as the union of a sequence of languages {L_n}_{n∈N}, which are defined as follows:

• n = 0: L_0 is a restricted first-order sorted language that does not contain variables or quantifiers, and consists of the following symbols:
  – the propositional connective ¬
  – a set of constant symbols C = {i : i ∈ N} of sort S2 (this represents the time indices)
  – a set of constant symbols D = {σ : σ ∈ Sn(L_w)}, each of sort S1 (here for simplicity, the constant symbols used to represent Sn(L_w) are the sentences themselves)
  – a set of constant symbols E_0 = {θ : θ ∈ Sn(L_w)}, each of sort S3 (again for simplicity the sentences themselves are used as constant symbols)
  – the unary predicate symbol Now of sort S2
  – the binary predicate symbol Contra of sort (S1 × S2).
• n ≥ 1: Assume that L_m has already been defined for all 0 ≤ m < n. L_n is a restricted first-order sorted language that does not contain variables or quantifiers. In addition to the symbols of L_{n−1}, it contains a set of constant symbols E_n = {θ : θ ∈ Sn(L_{n−1})} of sort S3. Also, L_1 (and hence all L_i, 1 ≤ i) contains a binary predicate symbol Bel of sort (S3 × S2).


Let E = ⋃_{i∈N} E_i. Let the language L_a = ⋃_{n∈N} L_n, so Sn(L_a) = ⋃_{n∈N} Sn(L_n). C, D, and E are sets of constant symbols. All of these constant symbols are terms in the language L_a.

The sort S2 stands for time. In L_a, Now is used to express the agent's time, Contra is used to indicate the existence of a direct contradiction in its beliefs, and Bel expresses the fact that the agent had a particular belief at a given time. We use these predicate symbols because they are crucial for active logic. The semantics for these predicates will be defined formally in Definition 8. Note that L_a contains only the connective ¬; hence statements such as Bel(θ, t) → Bel(θ, t+1) are not in the language.

However, we do not specify one specific set of active logic rules. This means that the semantics we will specify is applicable to a class of active logics with different rules that share a few common features.

Definition 4. Let L = L_a ∪ L_w, in the sense that Sn(L) = Sn(L_a) ∪ Sn(L_w).

Definition 5. Let the agent's knowledge base at time t, KB_t, be a finite set of sentences from L, that is, KB_t ⊆ Sn(L). In the case of KB_0 the only formulas of Sn(L_w) we allow are those whose superscripts are all 0.

We can imagine KB_0 containing any sentences from L with which a system designer would equip an agent. KB_0 may or may not be consistent (no system designer is perfect!). After time 0, each KB_t can be different from KB_0 because of inference, observation, forgetting, and the like. Also, although all the sentences about the world initially have superscript 0 for the sentence symbols, as we will see later the agent may assign different superscripts to the sentence symbols, thereby changing a possibly inconsistent set to one that is consistent.

2.3. The semantics of L_w

In the following several definitions, we define the semantics of the formalism given above, in the standard way.

Definition 6. An L_w-truth assignment is a function h : S → {T, F} defined over the set S of sentence symbols in L_w.

Definition 7. An L_w interpretation h (we keep the same notation for this induced interpretation) is a function h : Sn(L_w) → {T, F} over Sn(L_w) that extends an L_w-truth assignment h as follows:

h(¬ϕ) = T ⟺ h(ϕ) = F
h(ϕ → ψ) = F ⟺ (h(ϕ) = T and h(ψ) = F)

We also stipulate a standard definition of consistency for L_w: a set of L_w sentences is consistent iff there is some interpretation h in which all the sentences are true. Notationally we write the usual h |= Σ to mean that all the sentences of Σ are assigned T by h.
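Definition 7's extension of a truth assignment, and the brute-force consistency test it licenses for finite sets, can be sketched in Python. The tuple encoding (S_i^j as `("S", i, j)`) is our own illustrative assumption, not part of the paper.

```python
from itertools import product

# Sketch of Definitions 6-7 for L_w, under an assumed tuple encoding:
# the atom S_i^j is ("S", i, j); negation is ("not", p); implication is
# ("imp", p, q).

def atoms(phi):
    """Collect the sentence symbols occurring in phi."""
    if phi[0] == "S":
        return {phi}
    if phi[0] == "not":
        return atoms(phi[1])
    return atoms(phi[1]) | atoms(phi[2])                 # ("imp", p, q)

def holds(h, phi):
    """The induced interpretation: extend h to all sentences (Definition 7)."""
    if phi[0] == "S":
        return h[phi]
    if phi[0] == "not":
        return not holds(h, phi[1])
    return (not holds(h, phi[1])) or holds(h, phi[2])    # p -> q is F iff p=T, q=F

def consistent(sentences):
    """A set is consistent iff some truth assignment satisfies every sentence."""
    syms = sorted(set().union(*(atoms(p) for p in sentences)))
    return any(all(holds(dict(zip(syms, vals)), p) for p in sentences)
               for vals in product([True, False], repeat=len(syms)))

S1, S2 = ("S", 1, 0), ("S", 2, 0)
print(consistent({S1, ("imp", S1, S2)}))   # True: h(S1) = h(S2) = T works
print(consistent({S1, ("not", S1)}))       # False: a direct contradiction
```

The exhaustive search over assignments is exponential, of course; it is meant only to make the definition concrete on small examples.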

We call W_t the set of all L-expressible facts about the external world. Thus, W_t is consistent at every time t. This means that for every t there exists an L_w-truth assignment function h_t such that h_t |= W_t. We also call the induced interpretation h_t, keeping the same notation. This result does not depend upon the assumption that the world is stationary. In a stationary world a single h_t will work for all t; in a changing world there may be a different h_t, but there will still be some h_t, for each t. For a brief discussion of how we intend to approach the issue of a changing world in future work, see Section 11.

This does not mean that the agent's beliefs about the world are all true, consistent, and constant (indeed, we expect they will contain contradictions and change over time), only that there is some set of true and consistent sentences that describe the world for every t. We'll detail later how to interpret and model the agent's world-knowledge (this being the crux of the issue when dealing with inconsistency). First, however, we turn to a model of the agent's meta-knowledge.

3. A model of the agent's L_a beliefs

First of all it is important to note that, even in the case where the agent's beliefs are incomplete, incorrect, or inconsistent, there is always a complete and consistent theory of those beliefs at the meta level, and this theory can be expressed using the language L_a. For instance, if the agent believes both that John is happy (S_1^0) and that John is not happy (¬S_1^0), the two sentences "the agent believes that John is happy" (Bel(S_1^0, s)) and "the agent believes that John is not happy" (Bel(¬S_1^0, s)) can both be true at the same time.

Now we define an interpretation that models the theory about the agent at the meta level. In what follows, Σ is to be understood formally as any set of sentences from L; typically we will assume it to be some subset of the agent's knowledge base, combining beliefs about the world and about the agent, at some time t. Our point of view is that at any time t the agent may deduce new beliefs from its knowledge base at time t − 1, may add new sentences, for example from observations, or delete some sentences.

The following definition consists of ten bullet points: the first identifies the domain; the next three provide the interpretations for the three sorts; the following three provide the interpretations for all the constant symbols; the last three provide the interpretations for the three predicate symbols. Now keeps track of the time, and indicates the current time of the agent's internal clock. Contra indicates the existence of a direct contradiction in Σ at some time s ≤ t. Bel has the rough meaning "believes that", and states that a given sentence from L was in the agent's KB at some time s ≤ t. We define the interpretation H^Σ_{t+1} (at time t + 1 based on Σ) modeling facts about the agent as follows.

Definition 8. H^Σ_{t+1} is defined as the following interpretation:

• Domain(H^Σ_{t+1}) = N ∪ Sn(L).
• H^Σ_{t+1}(S1) = Sn(L_w) (the set of propositions about the world).
• H^Σ_{t+1}(S2) = N (all non-negative integers).
• H^Σ_{t+1}(S3) = Sn(L) (the set of sentences representing the agent's knowledge base).
• ∀n ∈ C: H^Σ_{t+1}(n) = n.
• ∀σ ∈ D: H^Σ_{t+1}(σ) = σ.
• ∀θ ∈ E: H^Σ_{t+1}(θ) = θ.
• The predicate symbol Now has the following semantics: H^Σ_{t+1} |= Now(s) ⟺ s = t + 1 and Now(t) ∈ Σ; otherwise H^Σ_{t+1} |= ¬Now(s).
• The predicate symbol Contra has the following semantics: H^Σ_{t+1} |= Contra(σ, s) ⟺ either s < t and Contra(σ, s) ∈ Σ, or s = t and σ, ¬σ ∈ Σ; otherwise H^Σ_{t+1} |= ¬Contra(σ, s).
• The predicate symbol Bel has the following semantics: H^Σ_{t+1} |= Bel(θ, s) ⟺ either s < t and Bel(θ, s) ∈ Σ, or s = t and θ ∈ Σ; otherwise H^Σ_{t+1} |= ¬Bel(θ, s).
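The three predicate clauses of Definition 8 are directly executable. Here is a sketch, with encodings that are ours for illustration only: meta-sentences as tuples `("Now", s)`, `("Contra", sigma, s)`, `("Bel", theta, s)`, negation as `("not", p)`, and Σ as a Python set.

```python
# Sketch of the Now/Contra/Bel clauses of Definition 8. Encodings are ours,
# for illustration: meta-sentences are tuples ("Now", s), ("Contra", sigma, s),
# ("Bel", theta, s); object-level negation is ("not", p); Sigma is a set.

def H_models(Sigma, t, phi):
    """Does the interpretation H^Sigma_{t+1} satisfy the atomic formula phi?"""
    kind = phi[0]
    if kind == "Now":                       # the clock reads t + 1
        return phi[1] == t + 1 and ("Now", t) in Sigma
    if kind == "Contra":                    # past Contras persist; new ones at s = t
        sigma, s = phi[1], phi[2]
        return ((s < t and phi in Sigma) or
                (s == t and sigma in Sigma and ("not", sigma) in Sigma))
    if kind == "Bel":                       # past Bels persist; Sigma itself at s = t
        theta, s = phi[1], phi[2]
        return (s < t and phi in Sigma) or (s == t and theta in Sigma)
    raise ValueError("only Now/Contra/Bel are handled in this sketch")

S1 = ("S", 1, 0)
Sigma = {("Now", 4), S1, ("not", S1)}
print(H_models(Sigma, 4, ("Now", 5)))         # True: the clock advances to t + 1
print(H_models(Sigma, 4, ("Contra", S1, 4)))  # True: S1 and not-S1 are both in Sigma
```

The "otherwise ¬..." halves of each clause correspond to the function returning False: the interpretation is total, so every atomic formula it does not satisfy is falsified.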

4. A model of the agent's L_w beliefs

Now we turn to the challenging problem of how to model, at the object level, the agent's beliefs about the world, given that these beliefs are not just evolving from moment to moment, but that at any given time, they may be inconsistent. Our scenario is as follows. At time 0 the agent has a finite set of initial beliefs, KB_0, about the world. All the sentence symbols have superscript 0. Then the agent starts to reason about the world using rules of active logic. This is where the agent may assign, via its apperception function, non-zero superscripts to some sentence symbols to avoid inconsistency. The agent may also obtain additional information about the world over time through other means, such as observations.

We will tackle this problem initially in three steps. First, we define a weak notion of consistency allowing for inconsistency in the agent's knowledge about the world; second, we will define a class of "apperception functions" intended to capture the intuition that an inconsistent KB will not necessarily seem inconsistent to the agent; and finally, we will show that there is some apperception function that, when applied to a given set of sentences, always produces a consistent set. Having shown this, we will proceed in the following sections to define a viable notion of active consequence, discuss the relation of this notion of consequence to the classical notion of logical consequence, and prove the soundness of some of the central inference rules of active logic.

Recalling that Σ need not be consistent concerning facts about the world, we define a weak version of consistency.

Definition 9. A set of sentences Σ ⊆ Sn(L) is said to be L_a-consistent iff Σ ∩ Sn(L_a) is classically consistent.


Remark 1. From now on, we will assume that Σ is L_a-consistent. We also introduce the symbol Γ to refer to the potentially inconsistent set of L_w sentences in Σ: Γ = Σ ∩ Sn(L_w).

We next define an apperception (self-awareness) function for the agent. The notion of an apperception function is intended to help capture, at least roughly, how the world might seem to an agent with a given belief set Σ. For a real agent, only some logical consequences are believed at any given time, since it cannot manage to infer all the potentially infinitely many consequences in a finite time, let alone in the present moment. Moreover, even if the agent has contradictory beliefs, the agent still has a view of the world, and there will be limits on what the agent will and won't infer. This is in sharp distinction to the classical notion of a model, where (i) inconsistent beliefs are ruled out of bounds, since then there are no models, and (ii) all logical consequences of the KB are true in all models.

The task we are addressing, then, is that of finding a notion of semantics that avoids both (i) and (ii) above. Our idea, via apperception functions, is to suppose that an agent's limited resources apply also to its ability to inspect its own knowledge. Thus, if S_i^0 and ¬S_i^0 are both in Σ, the agent might not realize, at first, that the two instances of S_i are in fact instances of the same sentence symbol. Thus it might seem to the agent that the world is one in which, say, S_i^1 is true, and so is ¬S_i^2. This allows the agent to have inconsistent beliefs while still having a consistent world model. Moreover, it allows us to see how an agent with inconsistent beliefs could avoid vacuously concluding any proposition, and also reason in a directed way, by applying inference rules only to an appropriately apperceived subset of its beliefs. We hope that this approach can shed some light on focused, step-wise, resource-bounded reasoning more generally.

An example of issue (i) might be Fred, who believes that if John is from the midwest then John is unhappy (S_2^0 → ¬S_1^0), believes that John is from the midwest (S_2^0), and believes that John is happy (S_1^0). We represent the world view of such an agent by supposing that at least one of these beliefs is taken to have a different apparent meaning, one that is not inconsistent with the others (e.g. S_2^0 → ¬S_1^1). This might happen because Fred hasn't thought carefully about all his beliefs, nor realized all of their consequences, and so never noticed that his beliefs entail both S_1^0 and ¬S_1^0. Note, however, that in our model, should Fred ever conclude ¬S_1^0 (or ¬S_1^1, from the apperceived version of the implication) he will recognize the contradiction at that time, and remove it (see below).

An example of issue (ii), although one currently beyond what our formalism can represent, might be Andrew Wiles working on a proof of Fermat's Last Theorem (FLT). He did not know, until he had completed his proof, that FLT was true. Yet he did have among his beliefs sufficient axioms to guarantee FLT as a consequence. So how did the world seem to him? Along the lines we are suggesting, he viewed some sentences as having possible interpretations different from what he later discovered to be the case. In effect, apperception functions, collectively, allow for a blurring of the identities, and hence meanings, of symbols.

The apperception functions we define can make changes only to Γ. An apperception function does not change Σ − Γ. We use the same notation ap when the apperception function is applied to an occurrence of a sentence symbol, a sentence, or a set of sentences. We start by defining a function that changes the superscripts of sentence symbols to 0. This is used to recover the original direct contradictions that were modified by the assignment of superscripts.

Definition 10. For any sentence φ ∈ Sn(L_w), let z(φ) be the sentence φ with all superscripts reset to 0. If Σ ⊆ Sn(L_w), then z(Σ) = {z(φ) | φ ∈ Σ}.

Definition 11. An apperception ap is a function ap : Σ → Σ′ where Σ and Σ′ are sets of L-sentences. An ap is represented as a finite sequence of non-negative integers: ⟨n_1, ..., n_p⟩. The effect of ap on Σ is as follows:

1. Let Σ be a set of L-sentences and let Γ = Σ ∩ L_w. Using the lexicographic order given earlier, let the kth sentence symbol occurrence in Γ be S_i^j. The effect of the ap = ⟨n_1, ..., n_p⟩ is to change S_i^j to S_i^{n_k} if 1 ≤ k ≤ p; otherwise S_i^j is unchanged.
2. ap(Σ) = (Σ − Γ) ∪ ap(Γ) (ap does not change Σ − Γ).

Example 2. Let Σ = {Now(5), Bel(S_2^0, 4), ¬S_2^1, S_2^1, S_1^0 → S_5^4}. In this case Γ = {¬S_2^1, S_2^1, S_1^0 → S_5^4}. Writing the elements lexicographically yields ord(Γ) = {S_2^1, ¬S_2^1, S_1^0 → S_5^4}. Consider ap = ⟨1, 3, 2, 16, 7⟩. Then ap(Σ) = {Now(5), Bel(S_2^0, 4), S_2^1, ¬S_2^3, S_1^2 → S_5^16}.
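Definition 11 and Example 2 can be replayed in code. The sketch below assumes the tuple encoding S_i^j = `("S", i, j)` (our own convention) and represents ord(Γ) as a Python list; the sequence ⟨n_1, ..., n_p⟩ is consumed one entry per symbol occurrence, left to right.

```python
# Sketch of Definitions 10-11 and Example 2, under an assumed tuple encoding
# S_i^j = ("S", i, j), ("not", p), ("imp", p, q). ap is a finite sequence
# <n1,...,np> applied to symbol occurrences of Gamma in lexicographic order.

def relabel(phi, seq, k):
    """Replace the superscript of each symbol occurrence, left to right, by
    the next entry of seq; k is the index of the next occurrence."""
    if phi[0] == "S":
        _, i, j = phi
        return (("S", i, seq[k]) if k < len(seq) else phi), k + 1
    if phi[0] == "not":
        sub, k = relabel(phi[1], seq, k)
        return ("not", sub), k
    left, k = relabel(phi[1], seq, k)
    right, k = relabel(phi[2], seq, k)
    return ("imp", left, right), k

def ap(gamma_ordered, seq):
    """Apply the apperception sequence to Gamma, taken in lexicographic order."""
    out, k = [], 0
    for phi in gamma_ordered:
        new, k = relabel(phi, seq, k)
        out.append(new)
    return out

# Example 2's ord(Gamma), with ap = <1, 3, 2, 16, 7>:
ord_gamma = [("S", 2, 1), ("not", ("S", 2, 1)),
             ("imp", ("S", 1, 0), ("S", 5, 4))]
print(ap(ord_gamma, [1, 3, 2, 16, 7]))
# -> [('S', 2, 1), ('not', ('S', 2, 3)), ('imp', ('S', 1, 2), ('S', 5, 16))]
```

As in the example, the unused final entry (7) is simply ignored, since Γ contains only four symbol occurrences.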


Infinitely many apperception functions are needed because a finite set of sentences in L_w may have an arbitrarily large (finite) number of sentence symbols. However, if Γ is known to contain p occurrences of sentence symbols, then it suffices to deal only with apperception functions that are sequences of up to p integers, as the integers in the later locations are not applied. There are only finitely many such apperception functions.

The purpose of the apperception functions is to get rid of inconsistencies in Σ. Hence we are interested only in apperception functions that output consistent sets. The set of apperception functions that do this depends on Σ.

Definition 12. Let AP denote the class of all apperception functions. AP_Σ = {ap ∈ AP | ap(Σ) is consistent}.

Next we show that AP_Σ is never empty.

Theorem 1. For all Σ, AP_Σ ≠ ∅.

Proof. Let ap assign a unique superscript to each occurrence of every sentence symbol in Γ. Then no sentence symbol appearing in ap(Γ) is duplicated, hence each can be assigned a truth value independently. So ap(Γ) is consistent. Since the remaining sentences in Σ are consistent by assumption, and are in L_a, ap ∈ AP_Σ. □

5. Active consequence

5.1. The definition of active consequence

At this point we are ready to define the notion of active consequence at time t—the active logic equivalent of logical consequence. We start by defining the concept of 1-step active consequence as a relationship between sets of sentences Σ and Θ of L, where Σ ⊆ KB_t and Θ is a potential subset of KB_{t+1}. When we define this notion we want to make sure that Θ contains only sentences required by Σ and the definition of H^Σ_{t+1}. This is the reason for the next definition.

Definition 13. Given Σ and ap ∈ AP_Σ, define

dcs(Γ) = {φ ∈ Γ | ∃ψ ∈ Γ such that z(φ) = ¬z(ψ) or ¬z(φ) = z(ψ)},
apz(Γ) = ap(Γ) − dcs(Γ).

The meaning of Definition 13 is that we are removing direct contradictions from ap(Γ) while ignoring the superscripts.
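To make the role of z concrete, here is a small Python sketch of dcs (again using an illustrative tuple encoding of our own, not the paper's formalism): it flags exactly those members of Γ whose superscript-stripped forms clash.

```python
# Sketch of Definition 13: dcs(Γ) collects the formulas of Γ that take part in a
# direct contradiction once superscripts are ignored (reset to 0 by z); apz(Γ)
# is then ap(Γ) with those formulas removed.

def z(phi):
    if phi[0] == "S":
        return ("S", phi[1], 0)          # reset the superscript to 0
    return (phi[0],) + tuple(z(p) for p in phi[1:])

def dcs(gamma):
    zs = [z(p) for p in gamma]
    out = []
    for phi, zphi in zip(gamma, zs):
        # φ is kept if some ψ in Γ satisfies z(φ) = ¬z(ψ) or ¬z(φ) = z(ψ)
        if ("not", zphi) in zs or (zphi[0] == "not" and zphi[1] in zs):
            out.append(phi)
    return out

gamma = [("S", 1, 0), ("not", ("S", 1, 2)), ("S", 4, 0)]
print(dcs(gamma))   # S_1^0 and ¬S_1^2 directly contradict modulo superscripts
```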

Definition 14. Let Σ, Θ ⊆ SnL. Then Θ is said to be a 1-step active consequence of Σ at time t, written Σ |=1 Θ, if and only if ∃ap ∈ AP_Σ such that

i. if σ ∈ Θ ∩ SnLw then apz(Γ) |= σ (σ is a classical logical consequence of apz(Γ)), and
ii. if σ ∈ Θ ∩ SnLa then H^{(Σ−Γ)∪z(Γ)}_{t+1} |= σ.

In this definition, for the sentences of Θ in the agent's language (at the meta level) 1-step active consequence depends on the interpretation H_{t+1}. Instead of Γ, we include z(Γ) to capture all direct contradictions even if the superscripts have been changed. This also means that the Bel and Contra statements will contain sentence symbols only with superscript 0. For all the sentences of Θ expressing facts about the world, there must be some apperception function such that the apperception of Σ (the Lw part) minus the direct contradictions classically implies these sentences. In the following we define the more general case of n-step active consequence for any positive integer n (similarly, as a result of this definition Θ is a potential part of KB_{t+n}).


Definition 15.

i. Let Σ, Θ ⊆ SnL. Then Θ is said to be an n-step active consequence of Σ at time t, written Σ |=n Θ, if and only if

∃Λ ⊆ SnL: Σ |=n−1 Λ and Λ |=1 Θ.    (5.1)

ii. We say that Θ is an active consequence of Σ, written Σ |=a Θ, if and only if Σ |=n Θ for some positive integer n.

Next we give some examples to illustrate the concept of active consequence.

Example 3.

i. Let Σ = {S_1^0, ¬S_1^0} and Θ = {Contra(S_1^0, t)}. Then Σ |=1 Θ.

ii. Let Σ = {Now(t), S_1^0, S_1^0 → S_4^0, S_{12}^0} and Θ = {Now(t+1), S_4^0, S_{12}^0}. Let ap ∈ AP_Σ be the identity function. It is easy to see that {S_4^0, S_{12}^0} are logical consequences of {S_1^0, S_1^0 → S_4^0, S_{12}^0}. Also by definition H^Σ_{t+1} |= Now(t+1). Hence Σ |=1 Θ.

iii. Let Σ, Θ be as in the previous example with Bel(S_5^0, t) added to Θ. Since S_5^i ∉ Σ for any i, H_{t+1} ⊭ Bel(S_5^0, t), hence Σ ⊭1 Θ. Therefore, for any later time t+k and Λ obtained by active consequence from Σ, H^Λ_{t+k} ⊭ Bel(S_5^0, t), so Σ ⊭a Θ.

iv. Let Σ = {Now(t)} and Θ = {Now(t+5)}. Then H^Σ_{t+1} ⊭ Θ. However, H^Σ_{t+1} |= Now(t+1). So {Now(t)} |=1 {Now(t+1)}, and we get {Now(t)} |=5 {Now(t+5)}, so {Now(t)} |=a {Now(t+5)}. Hence Σ |=a Θ.

v. Let Σ = {S_1^0, S_2^0, S_2^0 → ¬S_1^0} and Θ = {Contra(S_1^0, t+1)}. We will see that Σ |=2 Θ. Let Λ = {S_1^1, ¬S_1^2, S_2^2, S_2^2 → ¬S_1^2}. Then Σ |=1 Λ, through the apperception function ap(Σ) = {S_1^1, S_2^2, S_2^2 → ¬S_1^2}. Then Λ |=1 Θ by the second part of the definition, regardless of the apperception function applied in this step.

Note that in Example 3.v, it is not the case that Σ |=1 {Contra(S_1^0, t)}, even though the conditions for the later appearance of the relevant direct contradiction were already in place at time t. This underlines the fact that in active logic it can take time for consequences to appear in the KB. In the case of La sentences, this temporal aspect of the logic is regulated and enforced directly by the semantics. For Lw sentences, it is an artifact of the particular set of rules that a given active logic agent is equipped with (see Sections 9 and 10 for more discussion of this issue).

Thus, for instance, considering the types of rules in active logic, given a rule like:

t: α, α → ϕ, ϕ → ψ
t +1: ψ

an agent could infer ψ in one step from the formulas given at time t; however, an agent equipped only with a simple version of modus ponens, such as that given in Definition 22 (see Section 6.1), would take two time steps to conclude ψ from the same formulas. Both rules would be sound in active logic (see Definition 16), but a given agent might not be equipped with both rules (see Section 10). Since our definition of 1-step active consequence for sentences in Lw is based on logical implication, it is at least as powerful as any set of sound syntactical rules could be.
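The clock-stepping in Example 3.iv can be sketched as follows (a toy Python rendering under our own encoding, not the paper's machinery): iterating the 1-step relation on a Now fact five times yields Now(t+5), while ordinary world-facts are simply inherited.

```python
# One 1-step active consequence restricted to the Now predicate: Now(t) in the
# KB at time t yields Now(t+1) at time t+1, and Now(t) itself is not inherited.
# All other formulas are inherited unchanged.

def step_now(kb):
    return {("Now", f[1] + 1) if f[0] == "Now" else f for f in kb}

kb = {("Now", 0), ("S", 1, 0)}
for _ in range(5):
    kb = step_now(kb)
print(kb)  # the clock has advanced to ("Now", 5); ("S", 1, 0) is inherited
```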

5.2. The relationship between active consequence and 1-step active consequence

By our definition of active consequence, Σ |=1 Θ implies Σ |=a Θ. We may wonder how much bigger Θ may be in the latter case. Consider first a very simple situation: Σ = {S_1^0}, Θ = {Bel(S_1^0, t)} and Θ′ = {Bel(Bel(S_1^0, t), t+1)}. Here we have Σ |=1 Θ and Θ |=1 Θ′, hence Σ |=2 Θ′. This illustrates that, considering La, there can be additional sentences for each n-step active consequence for each new value of n. We show that this does not happen for sentences of Lw.

Theorem 2. Suppose Σ, Θ ⊆ SnLw. Then Σ |=1 Θ ⇔ Σ |=a Θ.

Proof. Since both Σ and Θ are sets of sentences in Lw, it suffices to deal only with sentences in Lw. The ⇒ part follows from the definition of |=a.

Going in the other direction, assume that Σ |=a Θ. By Definition 15 there must be a positive integer n such that Σ |=n Θ, and that means that there is a Λ ⊂ SnLw such that Σ |=n−1 Λ and Λ |=1 Θ. We divide the proof into two cases depending on the consistency of Σ.


Suppose that Σ is consistent. Consider what can happen in n−1 steps, that is, Σ |=n−1 Λ where φ ∈ Λ. Such a φ must have been obtained by n−1 applications of classical logical implication to Σ, except that we may also change sentence symbol superscripts through n−1 apperception functions, one at each step. The key observation here is that both the application of classical logical implication and the application of apperception functions are transitive operations. This means that whatever can be obtained by n−1 applications of logical implication can already be obtained by a single application of logical implication, and the same goes for apperception functions. Hence Σ |=1 Λ. Doing this process again, but using Λ |=1 Θ, we obtain Σ |=1 Θ.

Suppose next that Σ is not consistent. Then in the first step of the implication, that is, to get Σ |=1 Λ, an apperception function, ap, must have been applied to Σ first, making ap(Σ) consistent (and removing direct contradictions); only then is the 1-step active consequence determined. Thus Σ |=1 Λ iff ap(Σ) |=1 Λ for some ap ∈ AP_Σ, where ap(Σ) is consistent. But then we are back at the previous case where Σ was consistent (where now ap(Σ) is consistent) and the result follows. □

Although we proved this result only for sentences in Lw, the same proof works (restricted to sentences of Lw) even

if Σ and Θ contain sentences in La.

5.3. The relationship between classical logical consequence and active consequence

How does classical logical consequence compare to active consequence? For sentences in SnLa the two are incomparable. For consider Σ = {Now(t)}. Clearly, Σ |= Σ, but Σ ⊭a Σ because Now(t) will not be true at any time after t. Next consider Θ = {Bel(Now(t), t)}. Then Σ ⊭ Θ but Σ |=a Θ.

So for the comparison we restrict our attention to SnLw. In classical logic an inconsistent set of sentences logically implies every sentence, but that is not the case for active consequence. The interesting question is what happens if Σ ⊆ SnLw is consistent. It seems reasonable to expect active consequence to behave just like logical consequence. Recalling our theorem from the previous subsection, it suffices to compare only |= and |=1, because in this case |=1 and |=a give the same result.

Thus in the consistent case we might expect Σ |= Θ ⇔ Σ |=a Θ. The first implication, Σ |= Θ ⇒ Σ |=a Θ, holds because we can choose the apperception function to be the identity function. Intuitively the opposite implication should hold as well. For consider that every given set of consistent sentences has a certain definite set of conclusions (consequences)—call this the "inferential power" of the set. We would expect this same set in active logic to have no more inferential power than it has under classical logical consequence. For consider an apperception function that assigns a different number to every sentence symbol in Σ = {S_1^0, S_1^0 → S_2^0}, e.g., one that turns it into Θ = {S_1^1, S_1^2 → S_2^3}. Now the sentence symbol S_2 can no longer be inferred with any superscript. But this also presents a problem for the reverse implication. For Σ |=a Θ holds but Σ |= Θ does not. The equivalence holds, however, if we restrict all sentence symbols to have superscript 0.

Theorem 3. Let Σ,Θ ⊆ SnLw. If Σ is consistent, Σ = z(Σ), and Θ = z(Θ), then Σ |= Θ ⇔ Σ |=aΘ.

Proof. By Theorem 2 it suffices to prove that Σ |= Θ ⇔ Σ |=1 Θ. It follows from Σ = z(Σ) and Θ = z(Θ) that all superscripts of sentences must be 0. In the application of the definition of 1-step consequence, an apperception function must be used. Since the apperception function leaves all superscripts at 0, it must be the identity function, so 1-step active consequence is identical to logical consequence, that is, Σ |= Θ ⇔ Σ |=1 Θ. □

In Section 2.2 we stated that our semantics does not presuppose any one specific set of active logic rules because

it is applicable to many different active logic systems with different rules. This means that we cannot expect to obtain

the kind of completeness theorem for this semantics that one might get for a single specific set of rules. However, it is

clear that 1-step active consequence is very powerful for consistent sets of sentences. It encompasses any set of active

logic rules for SnLw. In that sense it is the limiting case for all possible sets of such active logic rules and provides

an approximation to a completeness result. In the following, we write ⊢ for derivability in active logic, instead of the vertical notation commonly used there. See the next section for the standard vertical notation.

Theorem 4. Suppose that Σ,Θ ⊆ SnLw, Σ = z(Σ), Θ = z(Θ), Σ is consistent and Σ and Θ are finite.


(a) Let ⊢ represent the derivability relation for any active logic. If Σ ⊢ Θ then Σ |=a Θ.

(b) If Σ |=a Θ then there is an active logic with derivability relation ⊢ such that Σ ⊢ Θ.

Proof. (a) If Σ ⊢ Θ then every φ ∈ Θ must logically follow from Σ, hence Σ |=1 Θ, so Σ |=a Θ.

(b) If Σ |=a Θ then for each φ ∈ Θ introduce a (valid) active logic rule stating that Σ ⊢ φ. For the active logic defined by these rules (for SnLw), Σ ⊢ Θ. □

6. Sound and unsound inferences in active logic

At this point we consider possible inference rules for active logic. We start with some notes about the syntax of

active logic rules. Because active logic is a step logic, we always precede both the antecedent and the consequent

(which are divided by a horizontal line) with an indication of the time, thus:

t: antecedent

t +1: consequent

The antecedent can be any of the following:

• a single formula, e.g. θ, or Now(t)

• any set of formulas separated by commas, e.g. θ,θ → σ

• any set of formulas meeting some specified conditions, and represented by a single capital letter, with a semi-colon between the capital letter and the conditions, e.g. Σ; θ ∈ Σ

• any set of formulas representing the database of an agent at a specific time. This will be represented by KBt, and

may also specify conditions using the same convention as above.

The consequent can be any of the following:

• a single formula, e.g. θ, or Now(t)

• any set of formulas separated by commas, e.g. θ,θ → σ.

Now we define the notion of a-sound inference.

Definition 16. An active sound (a-sound) inference is one in which the consequent is a 1-step active consequence of

the antecedent.

Recall that (1-step) active consequence is defined between sets of sentences. However, in accordance with the

syntax defined above, we will omit the set notation symbols { and }.

6.1. Some active-sound inference rules

For all six rules given here, a-soundness follows directly from the definitions. We prove the last as an illustration.

Definition 17. If Now(t) ∈ KBt then the timing inference rule is defined as follows:

t: Now(t)

t +1: Now(t +1)

Definition 18. If ϕ,¬ϕ ∈ KBt, where ϕ ∈ SnLw, then the direct contradiction inference rule is defined as follows:

t: ϕ,¬ϕ

t +1: Contra(ϕ,t)

Definition 19. If ϕ ∈ KBt, where ϕ ∈ SnL, then the positive introspection inference rule is defined as follows:

t: ϕ

t +1: Bel(ϕ,t)


Definition 20. If ϕ ∉ KBt, where ϕ ∈ SnL, then the negative introspection inference rule is defined as follows:

t: KBt; ϕ ∉ KBt

t +1: ¬Bel(ϕ,t)

Definition 21. If ϕ ∈ SnL such that ϕ ∈ KBt, ¬ϕ ∉ KBt, ϕ ≠ Now(t), and ϕ is not a contradiction, then the inheritance

inference rule is defined as follows:

t: ϕ

t +1: ϕ

Definition 22. Let Θ = {ϕ, ϕ → ψ} ⊆ (KBt ∩ SnLw) such that Θ is consistent. Assume ¬ϕ ∉ KBt and ¬(ϕ → ψ) ∉ KBt (see Section 6.3 for more on this restriction); then the active modus ponens inference rule is defined as follows:

t: ϕ,ϕ → ψ

t +1: ψ

Theorem 5. The rules given in Definitions 17–22 are a-sound.

For Definitions 17–20, their a-soundness follows from the definitions. By way of illustration, consider the following for active modus ponens (Definition 22):

Proof. Use an apperception function which is the identity on Θ and assigns a unique different superscript to any other symbol in KBt. □
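As an illustration of how rules like Definitions 17, 18, 21 and 22 interact within a single time step, here is a toy Python sketch. It uses our own propositional encoding, ignores apperception functions and superscripts, and should not be read as ALMA's actual implementation.

```python
# Toy single-step active-logic update combining the timing rule (Def. 17), the
# direct-contradiction rule (Def. 18), default inheritance (Def. 21), and
# active modus ponens (Def. 22). Atoms are strings; ("not", f) and
# ("imp", f, g) are compound formulas.

def one_step(kb, t):
    pos = {f for f in kb if ("not", f) in kb}       # directly contradicted formulas
    bad = pos | {("not", f) for f in pos}
    nxt = {("Contra", f, t) for f in pos}           # Def. 18: flag each contradiction
    for f in kb:
        if isinstance(f, tuple) and f[0] == "Now":
            nxt.add(("Now", t + 1))                 # Def. 17: the clock advances
        elif f not in bad:
            nxt.add(f)                              # Def. 21: default inheritance
            if (isinstance(f, tuple) and f[0] == "imp"
                    and f[1] in kb and f[1] not in bad):
                nxt.add(f[2])                       # Def. 22: active modus ponens
    return nxt

kb = {("Now", 3), "p", ("imp", "p", "q"), "r", ("not", "r")}
print(one_step(kb, 3))
```

Note that the contradictory pair r, ¬r is not inherited and cannot feed modus ponens; instead a Contra fact appears at the next step, mirroring the "capture" behavior described in Section 7.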

6.2. Active-unsound inference rules

We examine a number of instances of classically unsound inference rules, and get the expected intuitive results that

these inferences are also active-unsound. In all cases ϕ and ψ are arbitrary sentences of L.

Definition 23. We call this first rule the ϕ implies ψ, or ϕ → ψ rule.

t: ϕ

t +1: ψ

Theorem 6. The ϕ → ψ inference rule is not a-sound (is a-unsound).

Proof. Let ϕ = S_1^0 and let ψ = ¬(S_1^0 → S_1^0). Then ψ is not an active consequence of ϕ, because by Theorem 3, this would mean that ψ classically follows from ϕ, and that is false. □

Definition 24. We call this next rule the ϕ implies not ϕ, or ϕ-not-ϕ rule. We assume that ϕ is a consistent formula.

t: ϕ

t +1: ¬ϕ

Theorem 7. The ϕ-not-ϕ inference rule is a-unsound.

Proof. Let ϕ = S_1^0 and apply Theorem 3. □

Interestingly, although the ϕ-not-ϕ inference rule is a-unsound in general (with respect to the big language L),

there is one special instance in which it is sound, namely:

t: Now(t)

t +1: ¬Now(t)

This further underlines the special status of time and the Now() predicate in active logic; this result would obviously not be classically sound.


However, one rule that is classically sound, but a-unsound, is the explosive rule. This shows that active logic is a

paraconsistent logic, something we consider one of its advantages over classical formalisms.

Definition 25. Let Σ ⊆ SnLw be inconsistent. Let ψ ∈ SnLw. We define the explosive rule with respect to the language

Lwas follows.

t: Σ;Inconsistent(Σ)

t +1: ψ

Theorem 8. The explosive inference rule is a-unsound.

Proof. Let ψ be ¬(S_1^0 → S_1^0). No apperception function ap that turns Σ into a consistent set can logically derive ψ. Hence ap(Σ) ⊭1 ψ. By Theorem 2 the result follows. □

6.3. Inconsistent KBs, apperception functions and the application of a-sound rules

We noted above that there can be no official catalog of rules for active logic; any a-sound rule can qualify, and a given active logic agent may be equipped with any number of these rules (see Section 9 for more on this). However, the fact that a-soundness is defined in terms of active consequence, which is itself defined in terms of apperception functions, means that not every a-sound rule will be available for use in every situation. More specifically, whether or not there are direct contradictions in Σ = KBt, the apperception function may change which rules can and cannot be applied for that Σ.2 (We list only Γ in the examples below.) Let Γ = {S_1^0, ¬S_1^0, S_1^0 → S_2^0}. Because of the direct contradiction, the active modus ponens rule would not apply (S_1^0 and ¬S_1^0 would be removed from the KB).

Next, consider a case where active modus ponens does apply, namely, let Γ = {S_1^0, S_1^0 → S_2^0, ¬S_2^0}. So we can derive S_2^0 by using an ap that only changes the superscript of ¬S_2^0. A different possible consequence is {S_2^0 → S_3^0}, using an a-sound notational variant of the classically sound rule3

t: ¬ψ
t +1: ψ → α

and using an ap that only changes the superscript of S_1^0. But note that the set {S_2^0, S_2^0 → S_3^0} is not an active consequence of Γ because there is no single apperception function that would allow this set to be derived. Thus we cannot necessarily combine a-sound rules and guarantee that the result is an active consequence. (The problem exists only for rules involving Lw.) This also underlines the fact once again that apperception functions can limit the inferential power of a given set of sentences. For a discussion of the practical effects of this limitation, see Sections 9 and 10.

This concludes the presentation of the active logic semantics. In the next two sections (7 and 8) we will discuss some of the general properties of active logic that follow from its semantics, and compare active logic to other related work. After that, in Sections 9 and 10, we will discuss some of the practical issues involved with using active logic in real-world reasoning agents.

7. General properties of active logic

One of the original motivations for active logic was the need to design formalisms for reasoning about an approach-

ing deadline; for this use it is crucial that the reasoning take into account the ongoing passage of time as that reasoning

proceeds. Thus, active logic reasons one step at a time, updating its belief about the current time at each step, using

rules like the timing rule given in Definition 17.

2In fact, it is generally true of apperception functions that they will determine which rules are applicable in a given KB at a given time; however,

in a consistent KB, there will always be an eligible apperception function that makes no alterations to the KB, thus not changing which rules apply.

Thus, the remarks below are limited to the case of an inconsistent KB.

3While this rule can be written so as to be a-sound, it is rather a dangerous rule in a non-monotonic logic, and it would probably not be advisable

to include it among the catalog of rules with which a practical active logic agent is equipped.


This step-wise, time-aware approach gives active logic fine control over what it does, and does not, derive and

inherit at each step; for instance, Now(t) is not inherited at time step t + 1. To “inherit” P is, roughly speaking,

to assert P at time t + 1 just in case it was believed at time t. However, in a temporal, non-monotonic formalism,

what is justified now may not be justified later. For a simple example, consider that a certain observation at time t

may justify the conclusion that it is raining at time t, and it may be reasonable to continue to believe this at time

t + 1 (i.e. to inherit the belief). However, at some point in time, t + n, neither the original observation, nor the

inherited belief can be considered justification for the continued belief that it is raining. Thus, although inheriting is

a reasonable default behavior, there will be conditions and limits.4 This is accomplished by special inheritance rules

like Definition 21. Note in particular the conditions governing that rule, conditions that can be tailored for different

agents and circumstances.

Such step-wise control over inference gives active logic the ability to explicitly track the individual steps of a

deduction. Thus, for instance, an inference rule can refer to the results of all inferences up until now—i.e. through

time t—as it computes the subsequent results (for time t + 1). This allows an active logic to reason, for example,

about its own (past) reasoning; and in particular about what it has not yet concluded. Moreover, this can be performed

quickly, since it involves little more than a lookup of the current knowledge base (see, e.g. Definition 20). Although the

complexity of this operation is low—O(n)—it is nevertheless the case that if the KB is allowed to grow indefinitely,

the operation will take increasing time. Currently beliefs older than some arbitrary threshold are removed from active

memory and written to a searchable log file. However, we are investigating various more intelligent methods for

selective “forgetting”.

This last point is worth further elaboration and emphasis, for it is central to the active logic approach to modeling the

reasoning of real-world agents. The reason that determining what one does not know—otherwise known as negative

introspection—is simple in active logic is a direct result of the practical acknowledgment that any real agent is limited

to reasoning only with whatever formulas (wffs) it has been able to come up with so far, rather than with implicit but

not yet performed inferences. Thus, determining if a given formula P is known is not a question of seeing if P is a

consequence of one’s current beliefs, but only a question of seeing if P is actually present in the KB. This approach is

especially important to the issue of performing consistency checks before accepting new formulas into the KB. After

all, before accepting P, one may well want to know whether P is consistent with one’s current beliefs. In general, P

is not consistent with the KB if ¬P can be derived from KB. However, it is not in general possible to know, for any given formula, whether that formula is derivable from current beliefs, without actually going through the required deductions

to prove it. That could take a great deal of time—more time than a typical agent will have before deciding to accept

P. Cutting this process down to a simple KB look-up of ¬P, then, is an important practical simplification. So, instead

of looking for arbitrary contradictions to P, we are only looking for direct contradictions (i.e. ¬P).5
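The look-up just described amounts to a simple set-membership test. A minimal Python sketch (our own encoding, not ALMA's actual interface) makes the contrast with full derivability checking plain:

```python
# Before accepting P, check only for the direct contradiction ¬P by a KB
# membership test, rather than attempting a (possibly very long) derivation of
# ¬P from the KB. Formulas are tuples; ("not", f) is negation.

def directly_contradicted(p, kb):
    neg = p[1] if p[0] == "not" else ("not", p)
    return neg in kb

kb = {("S", 1), ("not", ("S", 2))}
print(directly_contradicted(("S", 2), kb))   # ¬S2 is present in the KB
print(directly_contradicted(("S", 1), kb))   # no direct contradiction of S1
```

With a hashed KB this check is effectively constant time per formula, which is what makes it feasible to run at every step.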

But won’t this practical simplification mean that active logic KBs are more likely to become inconsistent? That is

certainly a possibility, and yet, insofar as (a) contradictions are an inevitable part of living in and reasoning about the

real world, and (b) the consistency of complex KBs is practically impossible to determine or maintain, then it seems

a better bet to focus less on maintaining consistency, and more on an ability to reason effectively in the presence of

contradictions, taking action with respect to them only when they become revealed in the course of inference (which

itself might be directed toward finding contradictions, to be sure).

This is where the other central features of active logic—its step-wise control over inference, and the built-in ability

to refer to individual steps of reasoning—come into play, making active logic a natural formalism for detecting and

reasoning about contradictions and their causes. For as soon as a contradiction reveals itself—that is, as soon as P

and ¬P are both present in the KB—it is possible to “capture” it, preventing further reasoning using the contradictory

formulas as premises (and thereby preventing any explosion of wffs), while at the same time marking their presence,

to allow further consideration of the cause of the contradiction. Current implementations of active logic incorporate a

“conflict-recognition” inference rule like Definition 18 for this purpose.

Through the use of such rules, direct contradictions can be recognized as soon as they occur, and further reasoning

can be initiated to repair the contradiction, or at least to adopt a strategy with respect to it, such as simply avoiding

the use of either of the contradictory formulas for the time being. Unlike in truth maintenance systems [15,16] where

4Inheritance and disinheritance are directly related to belief revision [23] and to the frame problem [11,31]; see [34] for further discussion.

5This discussion is not meant to imply that, if ¬P is found in the KB, the agent will necessarily, for that reason, reject P, for there may be good reason to reject ¬P instead.


a separate process resolves contradictions using justification information, in active logic the contradiction detection

and handling [32] occur in the same reasoning process. In fact, the Contra predicate is a meta-predicate: it is about

the course of reasoning itself (and yet is also part of that same evolving history).

Thus, speaking somewhat more broadly, active logic is a paraconsistent logic that achieves its paraconsistency in

virtue of possessing two simultaneously active (and interactive) modes of reasoning, which might be called circumspective and literal. In literal mode, the reasoning agent is simply working with, and deriving the consequences of,

its current beliefs. In circumspective mode, the reasoning agent is reasoning about its beliefs, noting, for instance,

that it has derived a contradiction, and deciding what to do about that. It is important to active logic that these are

not separate, isolated modes, but interactive and part of the same overall reasoning process. Thus, for instance, the

(circumspective) derivation of Contra is triggered by the (literal) derivation of P and ¬P, and reasoning with Contra

happens alongside reasoning about other matters. Likewise, reasoning about a contradiction may eventually result in

the reinstatement of one of the conclusions, P or ¬P, to be carried forward and reasoned with in literal mode. It is precisely this ongoing interaction between literal and circumspective modes, between reasoning and self-monitoring, that

allows active logic to avoid the pitfalls of explosive logics, and makes it more appropriate to the needs of real-world

agents.

8. Comparison with related work

Active logic is primarily related to two bodies of work—work on temporal logics, and work on paraconsistent

logics. We will treat each of these subjects in turn.

8.1. Temporal logics

Temporal logics—logical formalisms explicitly allowing for the representation of temporal information—were

introduced by Prior (under the name of Tense Logic) in a series of writings between 1957 and 1969 [43–45]. Pnueli

established the relevance of tense logic for understanding the runtime behavior of programs [38]. Such temporal logics

are modal, with operators for notions such as the future truth of a predicate. A first-order approach to reasoning about

time was employed by Allen [1], with expressions such as Holds(A,t) to mean A is true at time t; Allen and others

made major strides in the use of such formalisms (so-called action logics) in AI. Part of the effect of these latter efforts

was to connect temporal logic to belief logics, i.e., logics for representing information about an agent that plans and

acts in a dynamic world. Thus action logics typically have temporal aspects, since the passage of time is of central

importance to the planning and carrying out of actions; see for instance [22].

Another central feature of most such logics is a treatment of the frame problem. Definition 21 (the inheritance rule)

might be considered a kind of frame axiom. While it does not quite assert that φ remains true despite an action having

occurred, it has a similar effect: it says that φ will remain believed unless there is a reason not to believe it, such as

might happen if an action is known to have negated φ.

Various logics of action and belief have been extensively studied for as long as AI has existed [28–31]. Typically,

the formalism is designed to represent the formation of an agent’s beliefs (including its beliefs about the results of

actions) based on a starting set of information (initial beliefs, or axioms). However, since belief-formation in any

real-world agent must occur as a process in time, it is natural to consider a logic in which not only is time represented

(i.e., one is able to express things about time, as in a temporal logic) but also the passage of time is represented

as an evolving process in which the “present” time moves forward during belief formation. Thus the agent has a

certain set of beliefs “now”, and another set at a later “now”. But if the logic is to be used by the agent, then its own

evolving notion of what time it is must be factored into the formalism as well. This is where active logics come in: an

agent/temporal logic with a twist: an evolving now and corresponding time-sensitive inference rules.

Active logic is not the only formalism to consider time in this way. For instance, SNePS [50], especially as applied

to the Embodied Cassie project [49], incorporates an indexical, evolving-time variable NOW. Cassie, a

natural-language-using autonomous robot, uses this variable to track the passage of time, allowing it to do such things as

appropriately alter verb tenses when discussing present or past actions. Cassie’s temporal awareness also plays a role

in time-sensitive planning projects like maintaining its battery and remediating unexploded land mines (in simulation).

The motivations for including such an evolving “now” in Cassie are quite similar to the motivations for including

one in active logic. Ismail and Shapiro write: “[E]mbodied cognitive agents should ... act in and reason about a

changing world, using reasoning in the service of acting and acting in the service of reasoning. Second, they should

be able to communicate their beliefs, and report their past, ongoing, and future actions in natural language. This

requires a representation of time ...” [27]. However, there are some significant differences in the nature of the “now”

incorporated into each formalism, and how it can therefore be used.

Perhaps the biggest difference is that for the SNePS-based agent Cassie, NOW is a meta-logical variable, rather

than a logical term fully integrated into the SNePS semantics. The variable NOW is implemented so that it does,

indeed, change over time (and, in particular, changes whenever Cassie acts in any way, including by reasoning), but

this change is the result of actions triggering an external time-variable update. In active logic, in contrast, reasoning

itself implies the passage of time. Perhaps in part because of this difference, SNePS is a monotonic logic, whereas

active logic is non-monotonic, leveraging the facts that beliefs are had at times, and beliefs can be had about beliefs,

to easily represent such things as “I used to believe P, but now I believe ¬P” using the Bel operator. SNePS is also

able to represent beliefs about beliefs, but there is no indication that this ability is leveraged by SNePS to guide belief

updates. Rather, all Cassie’s beliefs are about states holding over time, so that belief change is effected by allowing

beliefs to expire, rather than by formally retracting them. This is a strategy similar to that employed by the situation

calculus (which does not itself incorporate a changing Now term) [31]. Finally, although SNePS is a paraconsistent

logic, it is so in virtue of the fact that contradictions imply nothing at all, whereas in active logic contradictions imply

Contra, a meta-level operator that can trigger further reasoning.
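The ability to represent “I used to believe P, but now I believe ¬P” via timestamped beliefs-about-beliefs can be sketched with a simple history of Bel facts. The encoding (tuples, a `believed_at` helper) is a hypothetical illustration of ours, not the paper's syntax.

```python
# Sketch: timestamped beliefs-about-beliefs let an agent record
# "I believed P at step 3, but believe ¬P at step 7" without ever
# retracting history. Representation is illustrative only.

history = [("Bel", 3, "P"), ("Bel", 7, "¬P")]

def believed_at(history, t, phi):
    """Did the agent hold belief phi at step t?"""
    return ("Bel", t, phi) in history

changed_mind = believed_at(history, 3, "P") and believed_at(history, 7, "¬P")
```

The old belief is not deleted; it simply becomes a belief about an earlier “now”, which is what makes the non-monotonic update easy to express.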

8.2. Paraconsistent logics

As mentioned in the introduction, the term paraconsistent logic is applied to logics that are not explosive. Another

way to look at this concept is to consider that classical logic is so averse to inconsistency that it cannot distinguish

between local inconsistency, where for some formula A both A and ¬A hold, and global inconsistency, where for all

formulas A both A and ¬A hold. So in a paraconsistent logic, local inconsistency does not imply global inconsistency.

For various reasons, including philosophical issues, the intrinsic interest of investigating paraconsistency, and

particularly the increasing number of applications involving inconsistencies, there has been growing interest in this

field, including several books, numerous papers, and three World Congresses on Paraconsistency: [5] and [12] are the

Proceedings of the first two (see also [13] for a historical survey).

As noted in the survey paper [24], paraconsistency may be achieved in several different ways. Modifying the

axioms or rules is one technique. Another method stays within the framework of classical logic by the use of maximal

consistent subsets of formulas. Consider an inconsistent set of formulas Γ. There must always be some subsets of Γ

that are consistent (for example, ∅ is consistent); hence there must be maximal consistent subsets of Γ. In this method

A is deduced from Γ if A is deduced classically from all maximal consistent subsets of Γ [48]. Some researchers use

additional criteria to find preferred consistent subsets and work with those [8].

Another technique [7] extends the set of classical truth values from {True,False} to a larger set. Usually, the

set of truth values is given an algebraic structure, typically a lattice. Perhaps the best-known of these is the lattice

FOUR = {True,False,Both,Neither} where Both stands for an inconsistency. A fourth approach extends classical

logic by the addition of modal or metalevel operators. Modal logic has operators expressing that a formula is possible (true

in some world) or necessary (true in all worlds), where the worlds are selected in some way. Both a formula A and its

negation ¬A may be possible because they are true in different worlds, but that does not mean that all formulas are

possible.
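The lattice FOUR mentioned above can be made concrete by encoding each truth value as a pair (true-evidence, false-evidence), a standard presentation of Belnap's four-valued logic; the Python encoding itself is our illustration, not taken from [7].

```python
# Sketch of the four-valued lattice FOUR, encoding each value as a pair
# (true-evidence, false-evidence): T=(1,0), F=(0,1), Both=(1,1), Neither=(0,0).
# Conjunction ANDs the true-evidence and ORs the false-evidence;
# negation swaps the two components. Encoding is illustrative only.

T, F, BOTH, NEITHER = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(v):
    return (v[1], v[0])

def conj(a, b):
    return (a[0] & b[0], a[1] | b[1])

assert neg(BOTH) == BOTH       # an inconsistency stays inconsistent under negation
assert conj(T, BOTH) == BOTH   # the inconsistency stays local
assert conj(F, BOTH) == F      # and does not contaminate unrelated values
```

The value Both records an inconsistency about one formula without forcing every formula to take that value, which is exactly the local/global distinction drawn above.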

Consider now how active logic fits into the classification given above. In active logic the rules of inference are

limited, and are based on the passage of time. Also the language contains the meta-level operator Contra to indicate

contradictory statements. Hence active logic combines two of the methods above to achieve paraconsistency.

8.3. Other related work

Several other interesting frameworks exist that encompass many logical systems in a uniform manner. We briefly

discuss two such frameworks here.

A Labelled Deductive System (LDS) [20] is a logical reasoning system employing both formulas and annotations

for those formulas, called labels. The labels can have various contents with effects on the deductions. For instance,

if the label indicates that one formula is better supported by evidence than another, then deductions using the better