Active logic semantics for a single agent in a static world

Department of Computer Science, University of Maryland, College Park, MD 20742, USA
Artificial Intelligence, 05/2008; 172(8-9):1045-1063. DOI: 10.1016/j.artint.2007.11.005

ABSTRACT: For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an “apperception function” that represents an agent's limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets. Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.
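
To give a concrete feel for the ideas in the abstract, here is a minimal Python sketch of one active-logic-style inference step in a propositional setting. The clock belief now(t), the contradiction marker contra(...), and the apperception pass that simply drops directly contradictory pairs are simplifications made for this sketch; they stand in for, but are not, the formal definitions developed in the paper.

    # Minimal sketch (not the paper's formal semantics): beliefs are string
    # literals such as "p" and "-p"; each step inherits the apperceived
    # beliefs, refuses to inherit direct contradictions, and updates now(t).

    def negate(lit):
        """Negation of a propositional literal written as 'p' / '-p'."""
        return lit[1:] if lit.startswith("-") else "-" + lit

    def apperceive(beliefs):
        """Apperception sketch: the agent 'sees' only a consistent subset;
        here both members of any directly contradictory pair are omitted."""
        return {b for b in beliefs if negate(b) not in beliefs}

    def step(beliefs, now):
        """One step: inherit apperceived beliefs, drop the stale time
        belief, record any direct contradiction, advance the clock."""
        inherited = {b for b in apperceive(beliefs) if not b.startswith("now(")}
        noticed = {f"contra({b},{negate(b)})" for b in beliefs if negate(b) in beliefs}
        return inherited | noticed | {f"now({now + 1})"}, now + 1

    # p and -p are both believed at step 0: neither is inherited, the clash
    # is recorded, q survives, and the clock moves forward.
    beliefs, t = {"p", "-p", "q", "now(0)"}, 0
    beliefs, t = step(beliefs, t)
    print(t, sorted(beliefs))
    # 1 ['contra(-p,p)', 'contra(p,-p)', 'now(1)', 'q']

The sketch is only meant to convey the step-indexed, contradiction-tolerant style of inference; the paper develops the apperception function and the revised notions of consequence and soundness formally.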

  • ABSTRACT: Cooperation is a complex task that necessarily involves communication and reasoning about others' intentions and beliefs. Multi-agent communication languages aid designers of cooperating robots through standardized speech acts, sometimes including a formal semantics. But a more direct approach would be to have the robots plan both regular and communicative actions themselves. We show how two robots with heterogeneous capabilities can autonomously decide to cooperate when faced with a task that would otherwise be impossible. Request and inform speech acts are formulated in the same first-order logic of action and change as is used for regular actions. This is made possible by treating the contents of communicative actions as quoted formulas of the same language. The robot agents then use a natural deduction theorem prover to generate cooperative plans for an example scenario by reasoning directly with the axioms of the theory.
    (A toy sketch of the quoted-formula device follows this list.)
  • ABSTRACT: Commonsense reasoning has proven exceedingly difficult both to model and to implement in artificial reasoning systems. This paper discusses some of the features of human reasoning that may account for this difficulty, surveys a number of reasoning systems and formalisms, and offers an outline of active logic, a non-classical paraconsistent logic that may be of some use in implementing commonsense reasoning.
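
The quoted-formula device mentioned in the first item above can be illustrated with a small Python sketch. The types and the Inform effect rule below are hypothetical stand-ins, not the cited paper's logic of action and change; the point is only that a formula, once reified as a term, can appear as the content of a communicative action and later be unquoted into an ordinary belief.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Formula:
        """An atomic formula of the object language, e.g. Open(door1)."""
        pred: str
        args: tuple

    @dataclass(frozen=True)
    class Quote:
        """A formula reified as a term, so it can be an action argument."""
        formula: Formula

    @dataclass(frozen=True)
    class Inform:
        """A communicative action with the same status as a regular action."""
        sender: str
        receiver: str
        content: Quote

    def execute_inform(act, beliefs_by_agent):
        """Toy effect rule: the receiver comes to believe the unquoted content."""
        beliefs_by_agent.setdefault(act.receiver, set()).add(act.content.formula)

    # robot1 informs robot2 that door1 is open
    beliefs = {}
    execute_inform(Inform("robot1", "robot2", Quote(Formula("Open", ("door1",)))), beliefs)
    print(beliefs["robot2"])   # {Formula(pred='Open', args=('door1',))}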