Conference Paper

Computationally Grounded Account of Belief and Awareness for AI Agents

Conference: Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010), Lyon, France, August 30 - September 2, 2010
Source: DBLP


We discuss the problem of designing a computationally grounded logic for reasoning about the epistemic attitudes of AI agents, concentrating mainly on beliefs. We briefly review existing work and analyse problems with semantics for epistemic logic based on accessibility relations, including interpreted systems. We then make a case for syntactic epistemic logics and describe some applications of these logics in verifying AI agents.

  • Related publication:
    ABSTRACT: In recent years intelligent agents have been the focus of much attention from the Artificial Intelligence (AI) and many other communities. In AI research, agent-based systems technology has emerged as a new paradigm for conceptualizing, designing, and implementing sophisticated software systems. Furthermore, these systems have moved into safety-critical domains including healthcare, emergency scenarios, and disaster recovery. While agents provide great benefits in developing many complex software applications (e.g., systems that have multiple components, are distributed over networks, exhibit dynamic changes, and require autonomous behavior), they also present new challenges to application developers, namely verifying requirements and ensuring functional correctness. These problems become even more challenging in the case of multiagent systems (MASs), where agents exchange information via messages. Systematic, formal approaches to their specification and verification can help address these problems.
    11/2012; 2(4). DOI:10.4172/2165-7866.1000e109


