Active logic semantics for a single agent in a static world

Department of Computer Science, University of Maryland, College Park, MD 20742, USA
Artificial Intelligence (Impact Factor: 3.37). 05/2008; 172(8-9):1045-1063. DOI: 10.1016/j.artint.2007.11.005


For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an “apperception function” that represents an agent's limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets. Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.
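The abstract's central ideas (step-indexed time beliefs, blocked inheritance across direct contradictions) can be illustrated with a toy step loop. This is a hypothetical sketch, not ALMA or the paper's formalism: formulas are plain strings, negation is a syntactic `~` prefix, and contradiction handling is reduced to "do not inherit either contradictand, record that the clash was noticed".

```python
# Illustrative sketch only, not the ALMA implementation. Formulas are
# strings; "~" marks negation. Assumed simplifications: contradictions
# are detected purely syntactically (P alongside ~P), and contradictands
# are simply withheld from the next step rather than exploding the set.

def negate(f):
    """Return the syntactic negation of a formula string."""
    return f[1:] if f.startswith("~") else "~" + f

def step(beliefs, t):
    """Advance one active-logic-style step: advance the time belief,
    block inheritance of directly contradictory pairs, and flag them."""
    contradicted = {f for f in beliefs if negate(f) in beliefs}
    inherited = {f for f in beliefs
                 if f not in contradicted and not f.startswith("Now(")}
    # Time beliefs are never inherited unchanged: Now(t) becomes Now(t+1).
    inherited.add(f"Now({t + 1})")
    # Record that a contradiction was noticed instead of deriving everything.
    for f in contradicted:
        if not f.startswith("~"):
            inherited.add(f"Contra({f})")
    return inherited

beliefs = {"Now(0)", "P", "~P", "Q"}
print(sorted(step(beliefs, 0)))  # ['Contra(P)', 'Now(1)', 'Q']
```

Note how `Q` survives the step untouched while both `P` and `~P` are withheld: unlike classical consequence, the direct contradiction is quarantined rather than allowed to entail everything.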



Available from: John Grant
  • ABSTRACT: Cooperation is a complex task that necessarily involves communication and reasoning about others' intentions and beliefs. Multi-agent communication languages aid designers of cooperating robots through standardized speech acts, sometimes including a formal semantics. But a more direct approach would be to have the robots plan both regular and communicative actions themselves. We show how two robots with heterogeneous capabilities can autonomously decide to cooperate when faced with a task that would otherwise be impossible. Request and inform speech acts are formulated in the same first-order logic of action and change as is used for regular actions. This is made possible by treating the contents of communicative actions as quoted formulas of the same language. The robot agents then use a natural deduction theorem prover to generate cooperative plans for an example scenario by reasoning directly with the axioms of the theory.
    Article · Jan 2009
  • ABSTRACT: This paper argues for a "commonsense core" hypothesis, with emphasis on the issue of consistency in agent knowledge bases. This is part of a long-term research program, in which the hypothesis itself is being gradually refined in light of various sorts of evidence. The gist is that a commonsense reasoning agent that would otherwise become incapacitated in the presence of inconsistent data may, by means of a modest additional error-handling "core" component, carry out more effective real-time reasoning, and that there may be cases of interest in which the "core" is more usefully integrated into the knowledge base itself.
    Conference Paper · Jan 2011
  • ABSTRACT: This paper discusses issues in the logical foundations of knowledge representation for intelligent autonomous agents. We analyze the limitations of mathematical-logic approaches to knowledge representation and show that classical semantic interpretations are not suitable for expressing the autonomous knowledge of agents. We propose a novel logical framework in which semantic interpretations are based not on models but on the sensors of agents. We also illustrate how agents can interpret formulas autonomously.
    Article · Jan 2012