Computationally grounded account of belief and
awareness for AI agents
Natasha Alechina and Brian Logan
We discuss the problem of designing a computationally grounded logic
for reasoning about epistemic attitudes of AI agents, mainly concentrating on
beliefs. We briefly review exisiting work and analyse problems with seman-
tics for epistemic logic based on accessibility relations, including interpreted
systems. We then make a case for syntactic epistemic logics and describe
some applications of those logics in verifying AI agents.
The Belief-Desire-Intention (BDI) model of agency is arguably the most
widely adopted approach to modelling artificial intelligence agents. In
the BDI approach, agents are both characterised and programmed in terms
of propositional attitudes such as beliefs and goals and the relationships be-
tween them. For the BDI model to be useful in developing AI agents, we
must be able to correctly ascribe beliefs and other propositional attitudes to an agent. However, standard epistemic logics suffer from several problems in ascribing beliefs to computational agents. Critically, it is not
clear how to connect the computational implementation of an agent to the
beliefs we ascribe to it. As a result, standard epistemic logics model agents
as logically omniscient. The concept of logical omniscience was introduced
by Hintikka, and is usually defined as the agent knowing all logical tautologies and all the consequences of its knowledge. However, logical omniscience is problematic when attempting to build realistic models of agent behaviour, as closure under logical consequence implies that deliberation takes no time. For example, if processes within the agent such as
belief revision, planning and problem solving are modelled as derivations in
a logical language, such derivations require no investment of computational
resources by the agent.
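In standard (normal modal) epistemic logics these closure properties follow from two principles (writing B φ for 'the agent believes φ'):

    if ⊢ φ then ⊢ B φ        (necessitation: all tautologies are believed)
    ⊢ B(φ → ψ) → (B φ → B ψ)  (distribution: beliefs are closed under modus ponens)

Together they entail closure under logical consequence: whenever ⊢ φ → ψ and the agent believes φ, it also believes ψ, regardless of how long a derivation of ψ would take to compute.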
In this paper we present an alternative approach to modelling agents
which addresses these problems. We distinguish between beliefs and rea-
soning abilities which we ascribe to the agent (‘the agent’s logic’) and the
logic we use to reason about the agent. In this we follow, e.g., [21, 20, 17].
In the spirit of computationally grounded theories of agency, our logic to reason about the agent's beliefs is grounded
in a concrete computational model. However, unlike [33, 29] we choose
not to interpret the agent’s beliefs as propositions corresponding to sets of
possible states or runs of the agent’s program, but syntactically, as formulas
‘translating’ some particular configuration of variables in the agent’s internal
state. One of the consequences of this choice is that we avoid modelling the
agent as logically omniscient. This has some similarities with earlier bounded-resources approaches and more recent work such as [1, 4].
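To make the syntactic reading concrete, the following minimal sketch (our illustration; all names are hypothetical and not taken from any existing agent platform) computes the beliefs ascribed to an agent as formulas read off from designated variables in its internal state, with no closure under logical consequence:

    from typing import Dict, Set

    # A minimal sketch of syntactic belief ascription (illustrative only).
    # A translation maps a (variable, value) pair in the agent's internal
    # state to the formula it is taken to express.
    TRANSLATION = {
        ("battery_low", True): "BatteryLow",
        ("battery_low", False): "~BatteryLow",
        ("location", "room1"): "At(room1)",
    }

    def ascribed_beliefs(state: Dict) -> Set[str]:
        """Return the formulas ascribed as beliefs in this state.

        The result is exactly what the state's variables support: beliefs
        are not closed under consequence, so no logical omniscience."""
        return {TRANSLATION[(var, val)]
                for var, val in state.items()
                if (var, val) in TRANSLATION}

    # The agent believes BatteryLow and At(room1), but is not thereby
    # ascribed any of their infinitely many logical consequences.
    print(ascribed_beliefs({"battery_low": True, "location": "room1"}))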
This paper is essentially a high-level summary of the course on logics and
agent programming languages the authors gave at the 21st European Summer
School in Logic, Language and Information held in Bordeaux in 2009. Some
of the ideas have appeared in our previous work, for example [5, 6], but have
never been summarised in a single article.
The rest of the paper is organised as follows. In section 2 we discuss mo-
tivations for modelling intentional attitudes of AI agents in logic. In section 3
we analyse problems with the standard semantics for epistemic logic, includ-
ing interpreted systems. In section 4 we discuss other approaches to mod-
elling knowledge and belief, namely the syntactic approach, logic of aware-
ness, and algorithmic knowledge. Then we introduce our proposal based on
the syntactic approach in section 5 and briefly survey some of the applica-
tions of the syntactic approach in verification of agent programs in section 6.
2 Logic for verification
There are many reasons for modelling agents in logic. The focus of our
work is on specifying and verifying AI agents using logic. The specifica-
tion and verification of agent architectures and programs is a key problem
in agent research and development. Formal verification provides a degree of
certainty regarding system behaviour which is difficult or impossible to ob-
tain using conventional testing methodologies, particularly when applied to
autonomous systems operating in open environments. For example, the use
of appropriate specification and verification techniques can allow agent re-
searchers to check that agent architectures and programming languages con-
form to general principles of rational agency, or agent developers to check
that a particular agent program will achieve the agent's goals in a given range of environments.
Ideally, such techniques should allow specification of key aspects of the
agent’s architecture and program, and should admit a fully automated ver-
ification procedure. One such procedure is model-checking. Model-
checking involves representing the system to be verified as a transition sys-
tem M which can serve as a model of some (usually temporal) logic, spec-
ifying a property of the system as a formula φ in that logic, and using an
automated procedure to check whether φ is true in M. However, while there
has been considerable work on the formal verification of software systems
and on logics of agency, it has proved difficult to bring this work to bear on
verification of agent architectures and programs. On the one hand, it can
be difficult to specify and verify relevant properties of agent programs using
conventional formal verification techniques, and on the other, standard epis-
temic logics of agency fail to take into account the computational
limitations of agent implementations.
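As a deliberately toy illustration of this procedure (ours, not that of any particular tool such as MCMAS), the following sketch checks an invariant property 'φ holds in every reachable state' of an explicitly represented transition system M by breadth-first search:

    from collections import deque

    def check_invariant(initial, successors, phi):
        """Check whether phi holds in every state of M reachable from
        the initial states; return (True, None) or (False, witness)."""
        seen = set(initial)
        queue = deque(initial)
        while queue:
            s = queue.popleft()
            if not phi(s):
                return False, s          # counterexample state found
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return True, None

    # Hypothetical example: states are integers, the transition doubles a
    # counter modulo 16, and phi says the counter never equals 9.
    ok, witness = check_invariant(
        initial={1},
        successors=lambda s: {(2 * s) % 16},
        phi=lambda s: s != 9,
    )
    print(ok, witness)  # True None: 9 is unreachable from 1

Real model checkers handle richer temporal (and epistemic) languages and use symbolic rather than explicit representations, but the underlying question, whether φ is true in M, is the same.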
Since an agent program is a special kind of program, logics intended
for the specification of conventional programs can be used for specifying agent programs.
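For example (an illustration of ours rather than a specification drawn from any particular system), a temporal language can express the property that whenever the agent adopts a goal φ it eventually comes to believe that φ holds:

    □(Goal φ → ◇ Bel φ)

Such a formula can then be checked against the transition system generated by the agent's program.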
References

Rafael H. Bordini, Michael Fisher, Carmen Pardavila, and Michael Wooldridge. Model checking AgentSpeak. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'03), pages 409–416, New York, NY, USA, 2003. ACM Press.
Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking. The MIT Press, Cambridge, Massachusetts, 1999.
 P. R. Cohen and H. J. Levesque. Intention is choice with commitment.
Artificial Intelligence, 42:213–261, 1990.
 Ho Ngoc Duc. Reasoning about rational, but not logically omniscient,
agents. Journal of Logic and Computation, 7(5):633–648, 1997.
 Jennifer J. Elgot-Drapkin and Donald Perlis. Reasoning situated in time
I: Basic concepts. Journal of Experimental and Theoretical Artificial
Intelligence, 2:75–98, 1990.
 R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about
Knowledge. MIT Press, Cambridge, Mass., 1995.
Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi. A non-standard approach to the logical omniscience problem. Artificial Intelligence, 79(2):203–240, 1995.
 Michael P. Georgeff, Barney Pell, Martha E. Pollack, Milind Tambe,
and Michael Wooldridge. The belief-desire-intention model of agency.
In Jörg P. Müller, Munindar P. Singh, and Anand S. Rao, editors, In-
telligent Agents V, Agent Theories, Architectures, and Languages, 5th
International Workshop, (ATAL’98), Paris, France, July 4-7, 1998, Pro-
ceedings, volume 1555 of Lecture Notes in Computer Science, pages
1–10. Springer, 1999.
J. Hintikka. Knowledge and Belief. Cornell University Press, Ithaca, NY, 1962.
 K. Konolige. A Deduction Model of Belief. Morgan Kaufmann, San
Francisco, Calif., 1986.
 H. J. Levesque. A logic of implicit and explicit belief. In Proceedings
of the Fourth National Conference on Artificial Intelligence, AAAI-84,
pages 198–202. AAAI, 1984.
 Alessio Lomuscio, Hongyang Qu, and Franco Raimondi. MCMAS: A
model checker for the verification of multi-agent systems. In Ahmed
Bouajjani and Oded Maler, editors, Computer Aided Verification, 21st
International Conference, CAV 2009, Grenoble, France, June 26 - July
2, 2009. Proceedings, volume 5643 of Lecture Notes in Computer Sci-
ence, pages 682–688. Springer, 2009.
 John-Jules Ch. Meyer. Our quest for the holy grail of agent verifi-
cation. In Nicola Olivetti, editor, Automated Reasoning with Ana-
lytic Tableaux and Related Methods, 16th International Conference,
TABLEAUX 2007, Aix en Provence, France, July 3-6, 2007, Proceed-
ings, volume 4548 of Lecture Notes in Computer Science, pages 2–9. Springer, 2007.
 Riccardo Pucella. Deductive algorithmic knowledge. Journal of Logic
and Computation, 16(2):287–309, 2006.
 Franco Raimondi. Model-checking multi-agent systems. PhD thesis,
Department of Computer Science, University College London, Univer-
sity of London, 2006.
 Franco Raimondi and Alessio Lomuscio. A tool for specification and
verification of epistemic properties in interpreted systems. Electronic
Notes in Theoretical Computer Science, 85:176–191, 2004.
A. S. Rao and M. P. Georgeff. Modeling rational agents within a BDI-architecture. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR'91), pages 473–484. Morgan Kaufmann, 1991.
 S. J. Rosenschein and L. P. Kaelbling. A situated view of representation
and control. Artificial Intelligence, 73:149–173, 1995.
 N. Seel. The ‘logical omniscience’ of reactive systems. In Proceedings
of the Eighth Conference of the Society for the Study of Artificial Intel-
ligence and Simulation of Behaviour (AISB'91), pages 62–71, Leeds, 1991.
 Doan Thu Trang, Brian Logan, and Natasha Alechina. Verifying Drib-
ble agents. In Matteo Baldoni, Jamal Bentahar, M. Birna van Riems-
dijk, and John Lloyd, editors, Declarative Agent Languages and Tech-
nologies VII, 7th International Workshop, DALT 2009, Budapest, Hun-
gary, May 11, 2009. Revised Selected and Invited Papers, volume 5948
of Lecture Notes in Computer Science, pages 244–261. Springer, 2010.
W. van der Hoek and M. Wooldridge. Towards a logic of rational agency. Logic Journal of the IGPL, 11(2):135–159, 2003.
Michael Wooldridge. Computationally grounded theories of agency. In E. Durfee, editor, Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS-2000), pages 13–20. IEEE Press, 2000.

Michael Wooldridge. Reasoning About Rational Agents. MIT Press, Cambridge, MA, 2000.