Conference Paper

Explaining Task Processing in Cognitive Assistants That Learn.


Abstract

As personal assistant software matures and assumes more autonomous control of its users' activities, it becomes more critical that this software can explain its task processing. It must be able to tell the user why it is doing what it is doing, and instill trust in the user that its task knowledge reflects standard practice and is being appropriately applied. We will describe the ICEE (Integrated Cognitive Explanation Environment) explanation system and its approach to explaining task reasoning. Key features include (1) an architecture designed for re-use among many different task execution systems; (2) a set of introspective predicates and a software wrapper that extract explanation-relevant information from a task execution system; (3) a version of the Inference Web explainer for generating formal justifications of task processing and converting them to user-friendly explanations; and (4) a unified framework for explanation in which the task explanation system is integrated with previous work on explaining deductive reasoning. Our work is focused on explaining belief-desire-intention (BDI) agent execution frameworks with the ability to learn. We demonstrate ICEE's application within CALO, a state-of-the-art personal software assistant, to explain the task reasoning of one such execution system.
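To make the abstract's second and third features more concrete, here is a minimal Python sketch of how introspective predicates exposed through a software wrapper might feed a justification that an explainer could verbalize. All class, method, and task names below are hypothetical illustrations; ICEE's actual interfaces and predicate vocabulary are not given in this listing.

```python
# Hypothetical sketch of "introspective predicates + wrapper" feeding an
# explainer; ICEE's real interfaces are not published in this listing.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class TaskState:
    """Snapshot of one task inside a BDI-style executor."""
    task: str
    status: str                         # e.g. "executing", "suspended"
    parent_goal: str                    # the intention this task serves
    preconditions: Dict[str, bool]      # condition -> currently satisfied?
    learned_from: Optional[str] = None  # provenance if the procedure was learned


class ExecutionWrapper:
    """Wraps a task executor and answers introspective queries about it."""

    def __init__(self, states: List[TaskState]):
        self._states = {s.task: s for s in states}

    # Illustrative introspective predicates
    def is_executing(self, task: str) -> bool:
        return self._states[task].status == "executing"

    def motivated_by(self, task: str) -> str:
        return self._states[task].parent_goal

    def blocked_on(self, task: str) -> List[str]:
        s = self._states[task]
        return [p for p, ok in s.preconditions.items() if not ok]

    def provenance(self, task: str) -> str:
        return self._states[task].learned_from or "hand-authored procedure"


def justify(wrapper: ExecutionWrapper, task: str) -> Dict:
    """Assemble a justification record an explainer could turn into prose."""
    return {
        "conclusion": f"{task} is being executed",
        "antecedents": [
            f"{task} serves the goal '{wrapper.motivated_by(task)}'",
            f"unsatisfied preconditions: {wrapper.blocked_on(task) or 'none'}",
            f"procedure source: {wrapper.provenance(task)}",
        ],
    }


if __name__ == "__main__":
    wrapper = ExecutionWrapper([TaskState(
        task="book-travel",
        status="executing",
        parent_goal="attend quarterly review meeting",
        preconditions={"have-dates": True, "have-budget-code": True},
        learned_from="user demonstration")])
    print(justify(wrapper, "book-travel"))
```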


... We include rule-based systems as well as hybrid AI systems that may include a wide range of reasoning components including potentially inductive or abductive reasoning as well as the more traditional deductive reasoning. As such, we include historical explanation work (e.g., [4,5,6]) and also explanation work aimed more at evolving hybrid AI systems (e.g., [7,8,9]). The survey includes the domains of expert systems, cognitive assistants, Semantic Web [10], and, more recently, explanations that work with black-box models, i.e., deep learning models [11]. ...
... Several researchers have proposed comprehensive definitions of explanations [22,23,7,24] and have presented explanation components that they deem necessary to satisfy either their work or the domains where they hope the explanations will be useful. However, with a shift of focus in AI we feel the need to revisit the work on defining explanation as we consider what is desirable in next-generation "explainable knowledge-enabled systems." ...
... While many have attempted to define explanations (e.g., [22,32]), additional efforts have attempted to improve the generation of explanations (e.g., [7,29,33]) and tackle various aspects of explainability (e.g., [34,19]). To begin to address the need of building explainable, knowledge-enabled AI systems, we present a list of desirable properties from the synthesis of our literature review of past explanation work. ...
Preprint
Full-text available
Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
... In addition, researchers from economics focused on trust in information and trust in action for governing common resources [19] [36][37]. Furthermore, researchers from automation discussed the trust of people in reliance on automation [30][31][32][33]. Researchers from IS showed that technology trusting expectations influence trusting intention through performance, disconfirmation, and satisfaction [27]. ...
... In the case of personal assistants or CAs, Nunes, Barbosa, and de Lucena [35] theoretically described a domain-neutral user meta-model that allows high-level user models to be used with configurations and preferences that increase users' trust in personal assistance software. In the same way, McGuinness, Wolverton and da Silva [31] explained that transparency (verification) and provenance (source of information) are the main factors in trusting cognitive assistants. However, no researcher has yet discussed how people trust their CAs in daily life. ...
Conference Paper
Full-text available
The main purpose of this research is to develop a framework of trust determinants in the interactions between people and cognitive assistants (CAs). We define CAs as new decision tools, able to provide people with high-quality recommendations and help them make data-driven decisions by understanding the environment around them. We also define trust as the belief of people that CAs will help them reach a desired decision. An extensive review of trust in psychology, sociology, economics and policy making, organizational science, automation, and robotics is conducted to determine the factors that influence people's trust in CAs. On the basis of this review, we develop a framework of trust determinants in people's interaction with CAs in which reliability, attractiveness, and emotional attachment positively affect the intention of people in society to use CAs. Our framework also shows that innovativeness positively moderates the intention to use CAs. Finally, we suggest future research directions for developing and validating more concrete scales for measuring trust determinants in the interactions between people and CAs.
... Note that aspects of H and α can be further filtered to identify elements of interest to a particular user following techniques such as those in (McGuinness et al. 2007). ...
Article
In this paper we examine the general problem of generating preferred explanations for observed behavior with respect to a model of the behavior of a dynamical system. This problem arises in a diversity of applications including diagnosis of dynamical systems and activity recognition. We provide a logical characterization of the notion of an explanation. To generate explanations we identify and exploit a correspondence between explanation generation and planning. The determination of good explanations requires additional domain-specific knowledge which we represent as preferences over explanations. The nature of explanations requires us to formulate preferences in a somewhat retrodictive fashion by utilizing Past Linear Temporal Logic. We propose methods for exploiting these somewhat unique preferences effectively within state-of-the-art planners and illustrate the feasibility of generating (preferred) explanations via planning.
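As a rough illustration of the explanation-as-planning correspondence described above, the sketch below searches for action sequences that reproduce an observation and ranks them with a simple retrodictive preference (penalizing fault actions, then preferring shorter sequences). The domain, actions, and preference are invented for illustration and stand in for the paper's Past LTL preference machinery.

```python
# Illustrative sketch only: cast explanation generation as a search for an
# action sequence that reproduces the observation, then rank candidates by a
# simple retrodictive preference.  The domain and preference are invented.
from itertools import product

ACTIONS = {
    "press_switch": lambda s: {**s, "switch": "on"},
    "bulb_burns_out": lambda s: {**s, "bulb_ok": False},   # a fault action
    "noop": lambda s: s,
}
FAULTS = {"bulb_burns_out"}


def simulate(state, plan):
    for action in plan:
        state = ACTIONS[action](state)
    return state


def explains(plan, init, observation):
    """A candidate plan explains the observation if simulating it yields it."""
    final = simulate(dict(init), plan)
    return all(final.get(var) == val for var, val in observation.items())


def preferred_explanations(init, observation, horizon=3):
    candidates = [list(plan)
                  for n in range(1, horizon + 1)
                  for plan in product(ACTIONS, repeat=n)
                  if explains(plan, init, observation)]
    # Preference: as few fault actions as possible, then shorter plans first.
    candidates.sort(key=lambda p: (sum(a in FAULTS for a in p), len(p)))
    return candidates[:3]


if __name__ == "__main__":
    init = {"switch": "off", "bulb_ok": True}
    observed = {"switch": "on", "bulb_ok": True}
    print(preferred_explanations(init, observed))
```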
... Methods in one group modify or map learned models or reasoning systems to make their decisions more interpretable, e.g., by mapping decisions to input data [17], explaining the predictions of classifiers by learning equivalent interpretable models [37], or biasing a planning system towards making decisions easier for humans to understand [45]. Methods in the other group provide descriptions that make a reasoning system's decisions more transparent, e.g., describing planning decisions [5], combining reasoning based on classical first order logic with interface design to help humans understand a plan [4,26], describing why a particular solution was obtained for a given problem using non-monotonic logical reasoning [8], or using rules made of monotonic operators to define proof trees that provide a declarative view (i.e., explanation) of the trace of a computation [9]. Researchers have also explored explanations for non-monotonic rule-based systems in semantic web applications [2,18]. ...
Article
Full-text available
A robot’s ability to provide explanatory descriptions of its decisions and beliefs promotes effective collaboration with humans. Providing the desired transparency in decision making is challenging in integrated robot systems that include knowledge-based reasoning methods and data-driven learning methods. As a step towards addressing this challenge, our architecture combines the complementary strengths of non-monotonic logical reasoning with incomplete commonsense domain knowledge, deep learning, and inductive learning. During reasoning and learning, the architecture enables a robot to provide on-demand explanations of its decisions, the evolution of associated beliefs, and the outcomes of hypothetical actions, in the form of relational descriptions of relevant domain objects, attributes, and actions. The architecture’s capabilities are illustrated and evaluated in the context of scene understanding tasks and planning tasks performed using simulated images and images from a physical robot manipulating tabletop objects. Experimental results indicate the ability to reliably acquire and merge new information about the domain in the form of constraints, preconditions, and effects of actions, and to provide accurate explanations in the presence of noisy sensing and actuation.
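A minimal sketch of the on-demand, relational style of explanation described above: the answer to "why did you do X?" points to the beliefs that satisfied the action's preconditions and the goal the action serves. The domain facts and wording are hypothetical and far simpler than the architecture's non-monotonic reasoning.

```python
# Illustrative only: answer "why did you do X?" from the beliefs that satisfied
# the action's preconditions and the goal it serves (toy facts, not the
# architecture's non-monotonic reasoning).
action_log = {
    "move_to(table)": {"serves": "pickup(cup)",
                       "because": ["the cup is on the table"]},
    "pickup(cup)": {"serves": "achieve(cup is in the kitchen)",
                    "because": ["the robot is at the table",
                                "the gripper is empty"]},
}


def why(action: str) -> str:
    entry = action_log[action]
    reasons = " and ".join(entry["because"])
    return f"I performed {action} because {reasons}, which works towards {entry['serves']}."


if __name__ == "__main__":
    print(why("pickup(cup)"))
```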
... Methods in one group modify or map learned models or reasoning systems to make their decisions more interpretable, e.g., by mapping decisions to input data [17], explaining the predictions of classifiers by learning equivalent interpretable models [37], or biasing a planning system towards making decisions easier for humans to understand [45]. Methods in the other group provide descriptions that make a reasoning system's decisions more transparent, e.g., describing planning decisions [5], combining reasoning based on classical first order logic with interface design to help humans understand a plan [4,26], describing why a particular solution was obtained for a given problem using non-monotonic logical reasoning [8], or using rules made of monotonic operators to define proof trees that provide a declarative view (i.e., explanation) of the trace of a computation [9]. Researchers have also explored explanations for non-monotonic rule-based systems in semantic web applications [2,18]. ...
Chapter
Full-text available
A robot’s ability to provide explanatory descriptions of its decisions and beliefs promotes effective collaboration with humans. Providing such transparency in decision making is particularly challenging in integrated robot systems that include knowledge-based reasoning methods and data-driven learning algorithms. Towards addressing this challenge, our architecture couples the complementary strengths of non-monotonic logical reasoning with incomplete commonsense domain knowledge, deep learning, and inductive learning. During reasoning and learning, the architecture enables a robot to provide on-demand explanations of its decisions, beliefs, and the outcomes of hypothetical actions, in the form of relational descriptions of relevant domain objects, attributes, and actions. The architecture’s capabilities are illustrated and evaluated in the context of scene understanding tasks and planning tasks performed using simulated images and images from a physical robot manipulating tabletop objects. Experimental results indicate the ability to reliably acquire and merge new information about the domain in the form of constraints, and to provide accurate explanations in the presence of noisy sensing and actuation.
... Several prior works suggested methods for explaining recommendations given by MDP-based intelligent assistants [13,15,16,31,32] or explaining plans [55]. Wang et al. generated explanations of robot reasoning based on Partially Observable Markov Decision Problems (POMDPs) [68] and similar approaches have been developed for explaining decisions in the context of Hierarchical Task Networks (HTNs) planning, explaining an agent's actions based on its task model [45,46]. The problem we address differs from the problem of generating explanations for specific decisions, as rather than providing justifications to a specific action, we aim to describe which actions would be taken in information-critical states, with the overarching goal of providing a global understanding of the agent's behavior. ...
Article
Full-text available
Intelligent agents and AI-based systems are becoming increasingly prevalent. They support people in different ways, such as providing users with advice, working with them to achieve goals or acting on users’ behalf. One key capability missing in such systems is the ability to present their users with an effective summary of their strategy and expected behaviors under different conditions and scenarios. This capability, which we see as complementary to those currently under development in the context of “interpretable machine learning” and “explainable AI”, is critical in various settings. In particular, it is likely to play a key role when a user needs to collaborate with an agent, when having to choose between different available agents to act on her behalf, or when requested to determine the level of autonomy to be granted to an agent or approve its strategy. In this paper, we pose the challenge of developing capabilities for strategy summarization, which is not addressed by current theories and methods in the field. We propose a conceptual framework for strategy summarization, which we envision as a collaborative process that involves both agents and people. Last, we suggest possible testbeds that could be used to evaluate progress in research on strategy summarization.
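One simple way to approach the strategy-summarization challenge posed above, sketched here under assumed toy values, is to select the states where the choice of action matters most (largest gap between the best and second-best action values) and report the agent's choice there. This is only an illustrative heuristic, not the framework proposed in the paper.

```python
# Illustrative heuristic, not the paper's framework: summarize a strategy by
# reporting the agent's choice in the states where choosing correctly matters
# most (largest gap between best and second-best action values).  Toy values.
policy_q = {
    "low battery, far from dock": {"return_to_dock": 9.0, "keep_cleaning": 2.0},
    "mid-clean, open floor":      {"return_to_dock": 5.1, "keep_cleaning": 5.3},
    "obstacle ahead":             {"turn_away": 6.0, "keep_cleaning": 1.5},
}


def importance(action_values):
    best, second = sorted(action_values.values(), reverse=True)[:2]
    return best - second


critical_states = sorted(policy_q, key=lambda s: importance(policy_q[s]),
                         reverse=True)[:2]
for state in critical_states:
    chosen = max(policy_q[state], key=policy_q[state].get)
    print(f"In '{state}', the agent will {chosen}.")
```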
... They should have analytic cognitive capabilities, enabling them to assist humans in time-consuming and complex tasks, such as mathematical calculations and information analysis, among others [5]. Intelligent cognitive assistants shall be able to adapt to a dynamic world by learning from experience and explaining how and where the guidance came from (provenance) [6]. ...
Conference Paper
Cognitive assistants bring new opportunities to strengthen the coupling of data, information, and knowledge, aiming at collective intelligence. This technology can boost decision making on complex issues, such as sustainability and bioeconomy in agricultural production systems. However, the design of collaborative systems that exploit cognitive assistants is an open research question. This challenge deals with meaning interpretation and involves formal semantic models to represent, in a systemic view, knowledge related to the structures, competencies, responsibilities, policies, and culture of a production system. In this paper, we propose a design framework for the development of solutions based on cognitive assistant technology. This framework contributes a novel view of the design process by integrating a method for modelling complex systems with formal semantic models, collective intelligence systems, and cognitive assistant development technologies. A scenario in a sugarcane production system illustrates the proposed framework.
... This framework combines the trustworthiness of systems and the relative advantages of using them. People's trust in CAs has similarities with the literature on trust in automation (Mayer et al., 1995; McGuinness et al., 2007; Muir, 1994; Muir and Moray, 1996), trust in information systems (Lankton et al., 2014), trust in robotics in terms of the attractiveness of robots and the emotional feelings people have toward them (Hancock et al., 2011; Yuksel et al., 2017), and trust in CAs (Siddike and Kohda, 2018). ...
Article
Cognitive assistants (CAs) are new decision tools, able to provide people with high-quality recommendations. CAs are still primitive and only beginning to appear on the market. As a result, trustworthiness and the relative advantages of using CAs are the most influential factors in their acceptance by people in society. The prime objective of this paper is to investigate how trustworthiness and relative advantages play the most important role in the acceptance of CAs, using a novel metaphor-based approach. Three metaphors, namely pets, an alarm clock, and a vase, were used to investigate the acceptance of CAs by people in society. To achieve this objective, a qualitative study was undertaken to investigate the issue in depth. A total of 32 interviews were conducted in three steps. The interview data were analysed with MAXQDA 12 (a qualitative data analysis package) by applying the 'grounded theory' research approach. Results indicate that the metaphors pets (trustworthiness and relative advantages) and alarm clock (only the relative advantages of using CAs) influence people's acceptance of CAs in society. A theoretical framework of acceptance of CAs is presented based on the findings and insights from this research. Finally, the paper concludes by suggesting future research directions.
... Note that aspects of H and α can be further filtered to identify elements of interest to a particular user following techniques such as those in (McGuinness et al. 2007). ...
Article
We investigate agent supervision, a form of customization, which constrains the actions of an agent so as to enforce certain desired behavioral specifications. This is done in a setting based on the Situation Calculus and a variant of the ConGolog programming language which allows for nondeterminism, but requires the remainder of a program after the execution of an action to be determined by the resulting situation. Such programs can be fully characterized by the set of action sequences that they generate. The main results are a characterization of the maximally permissive supervisor that minimally constrains the agent so as to enforce the desired behavioral constraints when some agent actions are uncontrollable, and a sound and complete technique to execute the agent as constrained by such a supervisor.
... " [6] Prior research on assistive agents emphasizes the importance of ease of understanding by the user of the agent's operation, together with ease of directing, ignoring and correcting the agent, as well as working entirely without it [8, 10, 23]. Transparency and controllability are essential to build trust, which is especially important in an agent with an extended life cycle, such as a user's assistant [25], and even more so if the agent acts on its own initiative. Returning to the earlier example, CALO's actions are pertinent to the important upcoming meeting. ...
Article
The increased scope and complexity of tasks that people perform as part of their routine work has led to growing interest in the development of intelligent personal assistive agents that can aid a human in managing and performing tasks. As part of their operation, such agents should be able to anticipate user needs, opportunities, and problems, and then act on their own initiative to address them. We characterize the properties desired for proactive behavior of this type, and present a BDI-based agent cognition model designed to support proactive assistance. Our model for proactive assistance employs a meta-level layer to identify potentially helpful actions and determine when it is appropriate to perform them. We conclude by identifying technical challenges in developing systems that embody proactive behaviors.
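The meta-level idea in this abstract can be pictured with a small, hypothetical sketch: object-level beliefs generate candidate helpful actions, and a meta-level filter decides whether acting unasked is worth the disruption. The names, scores, and threshold are invented for illustration and are not the paper's cognition model.

```python
# Hypothetical sketch of a BDI loop with a meta-level layer that screens
# candidate helpful actions before they become intentions; the names, scores,
# and threshold are invented for illustration.
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Candidate:
    action: str
    expected_benefit: float   # how much acting would help the user
    intrusiveness: float      # cost of interrupting or acting unasked


class ProactiveAssistant:
    def __init__(self, benefit_threshold: float = 0.5):
        self.beliefs = {"meeting_soon": True, "slides_missing": True}
        self.intentions: List[str] = []
        self.benefit_threshold = benefit_threshold

    def generate_options(self) -> Iterator[Candidate]:
        # Object level: desires triggered by current beliefs.
        if self.beliefs["meeting_soon"] and self.beliefs["slides_missing"]:
            yield Candidate("fetch_latest_slides", 0.9, 0.2)
            yield Candidate("email_organizer_for_agenda", 0.4, 0.6)

    def meta_approves(self, option: Candidate) -> bool:
        # Meta level: act on own initiative only when the expected help
        # clearly outweighs the disruption of unrequested action.
        return option.expected_benefit - option.intrusiveness >= self.benefit_threshold

    def deliberate(self) -> List[str]:
        for option in self.generate_options():
            if self.meta_approves(option):
                self.intentions.append(option.action)
        return self.intentions


if __name__ == "__main__":
    print(ProactiveAssistant().deliberate())   # -> ['fetch_latest_slides']
```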
... Several issues remain open for investigation. We plan to implement the recommendations identified in this study by expanding ICEE (Integrated Cognitive Explanation Environment) [18], our complex explanation framework consistent with the model suggested in [26]. We also recognize the value in two different types of user studies: those aimed at guiding design and identifying design recommendations (like the one reported in this paper), and those that focus on evaluating an existing implementation. ...
Conference Paper
Full-text available
As adaptive agents become more complex and assume increasing autonomy in their users' lives, it becomes more important for users to trust and understand these agents. Little work has been done, however, to study what factors influence the level of trust users are willing to place in these agents. Without trust in the actions and results produced by these agents, their use and adoption as trusted assistants and partners will be severely limited. We present the results of a study among test users of CALO, one such complex adaptive agent system, to investigate themes surrounding trust and understandability. We identify and discuss eight major themes that significantly impact user trust in complex systems. We further provide guidelines for the design of trustable adaptive agents. Based on our analysis of these results, we conclude that the availability of explanation capabilities in these agents can address the majority of trust concerns identified by users.
... PML facilitates generation and sharing of provenance metadata for data derivation within and across intelligent systems, and acts as an enabler of trust by supporting explanations of information sources, assumptions, and learned information. As a critical part of the Inference Web (IW) [5] project, PML has been used in many domains [6], including: information extraction [7], logical reasoning [8], workflow processing [9], semantic eScience [10], and machine learning [11], [12]. Three workflow-based case studies we explore are as follows: ...
Conference Paper
Full-text available
In this paper, we describe how a semantic web-based provenance Interlingua called the Proof Markup Language (PML) has been used to encode workflow provenance in a variety of diverse application areas. We highlight some usability and interoperability challenges that arose in the application areas and show how PML was used in the solutions.
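For readers unfamiliar with PML, the following sketch shows, in simplified Python/JSON form, the kind of information a PML justification step records for a workflow: a conclusion, the engine and rule that produced it, its antecedents, and the sources used. Real PML is an OWL-based vocabulary; the exact property names and the example URLs below should be read as approximations, not PML syntax.

```python
# Simplified, approximate rendering of what a PML justification step records
# for a workflow; real PML is an OWL/RDF vocabulary, and the property names
# and URLs here are illustrative approximations.
import json


def node_set(conclusion, inference_engine, rule, antecedents, sources):
    return {
        "hasConclusion": conclusion,
        "isConsequentOf": {
            "hasInferenceEngine": inference_engine,
            "hasInferenceRule": rule,
            "hasAntecedentList": antecedents,   # conclusions of earlier steps
            "hasSourceUsage": sources,          # where the inputs came from
        },
    }


step1 = node_set(
    conclusion="gridded_temperature_v2",
    inference_engine="regridding-service",
    rule="bilinear-interpolation",
    antecedents=["raw_station_readings"],
    sources=["http://example.org/stations/2007-02"],       # hypothetical URL
)

step2 = node_set(
    conclusion="seasonal_anomaly_map",
    inference_engine="anomaly-workflow",
    rule="subtract-climatology",
    antecedents=[step1["hasConclusion"]],
    sources=["http://example.org/climatology/1971-2000"],  # hypothetical URL
)

print(json.dumps(step2, indent=2))
```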
Chapter
Model-based approaches to AI are well suited to explainability in principle, given the explicit nature of their world knowledge and of the reasoning performed to take decisions. AI Planning in particular is relevant in this context as a generic approach to action-decision problems. Indeed, explainable AI Planning (XAIP) has received interest for more than a decade, and has recently been gaining momentum along with the general trend toward explainable AI. In the lecture, we provide an overview, categorizing and illustrating the different kinds of explanation relevant in AI Planning; and we outline recent works on one particular kind of XAIP, contrastive explanation. This extended abstract gives a brief summary of the lecture, with some literature pointers. We emphasize that completeness is neither claimed nor intended; the abstract may serve as a brief primer with literature entry points.
Article
Full-text available
The main purpose of this research is to develop a framework of trust determinants in the interactions between people and cognitive assistants (CAs). CAs are defined as new decision tools that can provide people with high quality recommendations and help them make data-driven decisions to understand the environment around them. Trust is defined as the belief of people that CAs will help them reach a desired decision. An extensive review on trust in psychology, sociology, economics and policy making, organizational science, automation, and robotics was conducted to determine the factors that influence people's trust in CAs. On the basis of this review, a framework of trust determinants in people's interactions with CAs was developed where reliability, attractiveness, and emotional attachments positively influence people's trust in CAs. The framework also shows that relative advantages of innovativeness positively affect the intention to use CAs. Future research directions are suggested for developing and validating more concrete scales in measuring trust determinants in the interactions between people and CAs.
Article
Full-text available
As research moves into its next phase as a distributed inter-connected multi-disciplinary global community, many issues in information access and integration become more challenging and simultaneously more important. We are investigating a number of aspects of semantic information integration ranging from underlying representation languages and infrastructure, to tools for analysis, evolution and maintenance, to applications ranging from immunology to solar-terrestrial physics. We take a multi-dimensional approach:

1 – provide infrastructure for tracking and explaining where information came from, how it was manipulated, and why recipients might decide to trust information. Our major research thrust is on Inference Web [McP04] – an infrastructure to support explanations for answers from question answering systems. As part of this effort, we have designed the proof markup language [PMF06] – a representation Interlingua for encoding knowledge provenance. This effort is funded largely by DARPA for use in explaining cognitive assistants, and in particular task processing [MPGW06] (in the DARPA PAL program), for use in explaining integrated learners (in the DARPA integrated learning program), and for use in explaining analyst tools [WMP+05] (in the DTO novel intelligence for massive data program).

2 – explore semantic technology-based infrastructural approaches to access and integration. In particular, we are engaging in an NSF-funded effort to design and implement a virtual observatory for solar terrestrial physics [FMM+06]. We have also begun a NASA-funded effort to explore semantically-enabled scientific data integration with the initial domain areas of volcanoes and climate [FMRS06].

3 – explore information privacy issues, initially focusing on usage of information (as opposed to collection of information). We are engaged in an NSF-funded cybertrust program effort to provide transparent accountable data mining systems [WAB06].

At this workshop, I would be interested in describing the proof markup language and discussing how it can be and is being used to encode and explain data provenance. I would also be interested in discussing needs from scientific applications for information integration and semantic approaches for supporting cyberinfrastructure. This latter topic may be broken into thrusts for semantic integration and access, as well as ontology evolution and maintenance in a collaborative distributed environment, such as the web.
Conference Paper
There has been little work in explaining recommendations generated by Markov Decision Processes (MDPs). We analyze the difficulty of explaining policies computed automatically and identify a set of templates that can be used to generate explanations automatically at run-time. These templates are domain-independent and can be used in any application of an MDP. We show that no additional effort is required from the MDP designer for producing such explanations. We use the problem of advising undergraduate students in their course selection to explain the recommendation for selecting specific courses to students. We also propose an extension to leverage domain-specific constructs using ontologies so that explanations can be made more user-friendly.
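A hedged sketch of the template idea: a domain-independent explanation string is filled directly from the MDP's action values at the current state, so the designer supplies nothing beyond the model itself. The template wording and the toy course-advising values are illustrative and not necessarily the templates proposed in the paper.

```python
# Illustrative template filled directly from the MDP's action values at one
# state; the wording and the toy course-advising numbers are invented, not
# necessarily the paper's templates.
Q = {  # action-value estimates for the state "second-year student, CS major"
    "take_algorithms": 8.7,
    "take_databases": 7.9,
    "take_electives": 6.2,
}

TEMPLATE = ("{best} is recommended because its expected long-term value "
            "({best_q:.1f}) is higher than that of any alternative "
            "(next best: {runner_up} at {runner_q:.1f}).")


def explain_recommendation(action_values):
    ranked = sorted(action_values.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_q), (runner_up, runner_q) = ranked[0], ranked[1]
    return TEMPLATE.format(best=best, best_q=best_q,
                           runner_up=runner_up, runner_q=runner_q)


print(explain_recommendation(Q))
```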
Conference Paper
Full-text available
As more data (especially scientific data) is digitized and put on the Web, it is desirable to make provenance metadata easy to access, reuse, integrate and reason over. Ontologies can be used to encode expectations and agreements concerning provenance metadata representation and computation. This paper analyzes a selection of popular Semantic Web provenance ontologies such as the Open Provenance Model (OPM), Dublin Core (DC) and the Proof Markup Language (PML). Selected initial findings are reported in this paper: (i) concept coverage analysis – we analyze the coverage, similarities and differences among primitive concepts from different provenance ontologies, based on identified themes; and (ii) concept modeling analysis – we analyze how Semantic Web language features were used to support computational provenance semantics. We expect the outcome of this work to provide guidance for understanding, aligning and evolving existing provenance ontologies.
Article
As personal assistant software matures and assumes more autonomous control of user activities, it becomes more critical that this software can tell the user why it is doing what it is doing, and instill trust in the user that its task knowledge reflects standard practice and is being appropriately applied. Our research focuses broadly on providing infrastructure that may be used to increase trust in intelligent agents. In this paper, we will report on a study we designed to identify factors that influence trust in intelligent adaptive agents. We will then introduce our work on explaining adaptive task processing agents as motivated by the results of the trust study. We will introduce our task execution explanation component and provide examples in the context of a particular adaptive agent named CALO. Key features include (1) an architecture designed for re-use among different task execution systems; (2) a set of introspective predicates and a software wrapper that extracts explanation-relevant information from a task execution system; (3) a version of the Inference Web explainer for generating formal justifications of task processing and converting them to user-friendly explanations; and (4) a unified framework for explaining results from task execution, learning, and deductive reasoning.
Article
Full-text available
We describe an intelligent personal assistant that has been developed to aid a busy knowledge worker in managing time commitments and performing tasks. The design of the system was motivated by the complementary objectives of (a) relieving the user of routine tasks, thus allowing her to focus on tasks that critically require human problem-solving skills, and (b) intervening in situations where cognitive overload leads to oversights or mistakes by the user. The system draws on a diverse set of AI technologies that are linked within a Belief-Desire-Intention agent system. Although the system provides a number of automated functions, the overall framework is highly user-centric in its support for human needs, responsiveness to human inputs, and adaptivity to user working style and preferences.
Conference Paper
Full-text available
Users of question answering systems may find answers without any supporting information insufficient for determining trust levels. Once those question answering systems begin to rely on source information that varies greatly in quality and depth, such as is typical in web settings, users may trust answers even less. We address this problem by augmenting answers with optional information about the sources that were used in the answer generation process. In addition, we introduce a trust infrastructure, IWTrust, which enables computation of trust values for answers from the Web. Users of IWTrust have access to the sources used in answer computation along with trust values for those sources, so they are better able to judge answer trustworthiness. Our work builds upon existing Inference Web components for representing and maintaining proofs and proof-related information justifying answers. It includes a new TrustNet component for managing trust relations and for computing trust values. This paper also introduces the Inference Web answer trust computation algorithm and presents an example of its use for ranking answers and justifications by trust.
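The following toy sketch conveys the flavor of trust-based answer ranking: each answer's justification lists the sources it used, and the answer inherits a trust value from them (here, conservatively, the minimum). This is an illustration only, not the IWTrust or TrustNet computation, and the sources and values are invented.

```python
# Toy illustration of trust-based answer ranking (not the IWTrust/TrustNet
# computation): an answer inherits a trust value from the sources cited in its
# justification, here conservatively the minimum over those sources.
source_trust = {                      # user's trust in each source, in [0, 1]
    "encyclopedia.example.org": 0.9,
    "forum.example.org": 0.4,
    "news.example.org": 0.7,
}

answer_sources = {                    # sources used to justify each answer
    "Answer A": ["encyclopedia.example.org", "news.example.org"],
    "Answer B": ["forum.example.org"],
}


def answer_trust(sources, trust):
    return min(trust[s] for s in sources)


ranked = sorted(answer_sources,
                key=lambda a: answer_trust(answer_sources[a], source_trust),
                reverse=True)
for answer in ranked:
    value = answer_trust(answer_sources[answer], source_trust)
    print(f"{answer}: trust {value:.2f}")
```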
Article
Full-text available
(The indexed text for this item is a fragment of the paper body rather than an abstract.) The fragment describes strategies for explaining action graphs: when a sufficiently complex subgraph lies between two actions A and B, with no links between actions inside the "lump" and actions outside it, its explanation is postponed to a separate "section," and the whole subgraph is meanwhile treated in the current explanation as if it were a simple action (the same strategy applied to actions above the primitive level in the abstraction hierarchy). Forward description is deemed appropriate when the action graph is right-branching, in which case actions are described in time order using message forms such as time_then and neutral_seq; when the action graph is left-branching, backward description is suggested instead, using embedded prerequisite messages. The fragment ends with a reference to Figure 9, "Closure of Expansion."
Article
The control problem—which of its potential actions should an AI system perform at each point in the problem-solving process?—is fundamental to all cognitive processes. This paper proposes eight behavioral goals for intelligent control and a ‘blackboard control architecture’ to achieve them. The architecture distinguishes domain and control problems, knowledge, and solutions. It enables AI systems to operate upon their own knowledge and behavior and to adapt to unanticipated problem-solving situations. The paper shows how OPM, a blackboard control system for multiple-task planning, exploits these capabilities. It also shows how the architecture would replicate the control behavior of HEARSAY-II and HASP. The paper contrasts the blackboard control architecture with three alternatives and shows how it continues an evolutionary progression of control architectures. The paper concludes with a summary of the blackboard control architecture's strengths and weaknesses.
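A toy sketch of the blackboard-control idea: domain knowledge sources post contributions to a shared blackboard, control knowledge rates the pending actions against the current focus, and the scheduler runs the highest-rated one. The task, ratings, and focus heuristic are invented for illustration and do not reproduce the architecture's eight behavioral goals.

```python
# Toy sketch of the blackboard-control idea: domain knowledge sources post
# contributions to a shared blackboard, control knowledge rates the pending
# actions against the current focus, and the scheduler runs the best-rated one.
blackboard = {"problem": "plan a trip", "solution": [], "focus": "cheap options"}


def ks_find_flights(bb):
    bb["solution"].append("flight: economy fare")


def ks_find_hotels(bb):
    bb["solution"].append("hotel: budget chain")


def ks_book_limo(bb):
    bb["solution"].append("limo: premium transfer")


pending = [("find_flights", ks_find_flights),
           ("find_hotels", ks_find_hotels),
           ("book_limo", ks_book_limo)]


def control_rating(name, bb):
    # Control knowledge: prefer actions that match the current focus heuristic.
    return 2 if bb["focus"] == "cheap options" and name != "book_limo" else 1


while pending:
    name, ks = max(pending, key=lambda item: control_rating(item[0], blackboard))
    ks(blackboard)                  # execute the chosen domain knowledge source
    pending.remove((name, ks))

print(blackboard["solution"])
```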
Article
The Semantic Web is being designed to enable automated reasoners to be used as core components in a wide variety of Web applications and services. In order for a client to accept and trust a result produced by perhaps an unfamiliar Web service, the result needs to be accompanied by a justification that is understandable and usable by the client. In this paper, we describe the proof markup language (PML), an interlingua representation for justifications of results produced by Semantic Web services. We also introduce our Inference Web infrastructure that uses PML as the foundation for providing explanations of Web services to end users. We additionally show how PML is critical for and provides the foundation for hybrid reasoning where results are produced cooperatively by multiple reasoners. Our contributions in this paper focus on technological foundations for capturing formal representations of term meaning and justification descriptions thereby facilitating trust and reuse of answers from web agents.
Article
The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make the answers more transparent by providing infrastructure for presenting and managing explanations. The explanations include information concerning where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase — an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML — the Proof Markup Language specification and API used for encoding portable proofs; IW browser — a tool supporting navigation and presentations of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.
Article
Intelligent systems are often called upon to form plans that direct their own or other agents' activities. For these systems, the ability to describe plans to people in natural ways is an essential aspect of their interface. In this paper, I present the Cooperative Plan Identification (CPI) architecture, a computational model that generates concise, effective textual descriptions of plans. In this model, speakers and hearers cooperate with one another in their communication about a plan. A hearer interprets a concise plan description by filling in the missing detail using plan reasoning. A cooperative speaker selects the content of a plan description based on his expectation that the hearer is able to complete the description in much the same way that a planning system completes a partial plan. The architecture has been empirically evaluated in an experiment, also described here, in which subjects following instructions produced by the CPI architecture performed their tasks with fewer execution errors and achieved a higher percentage of their tasks' goals than did subjects following instructions produced by alternative methods.
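The content-selection intuition behind cooperative plan description can be sketched as follows: mention only the steps the hearer is unlikely to infer, trusting them to complete the description much as a planner completes a partial plan. The plan and the "inferable" set below are hypothetical, and this heuristic is far simpler than the CPI architecture itself.

```python
# Illustrative heuristic, far simpler than the CPI architecture: describe only
# the plan steps the hearer is unlikely to infer, trusting them to fill in the
# routine steps much as a planner completes a partial plan.
plan = ["walk to the phone", "pick up the handset", "dial the campus operator",
        "ask to be connected to the lab", "hang up the handset"]

# Steps a competent hearer will supply on their own for this kind of task.
inferable_by_hearer = {"walk to the phone", "pick up the handset",
                       "hang up the handset"}


def describe(plan_steps, inferable):
    selected = [step for step in plan_steps if step not in inferable]
    return "To reach the lab: " + ", then ".join(selected) + "."


print(describe(plan, inferable_by_hearer))
```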
Article
Existing explanation facilities are typically far more appropriate for knowledge engineers engaged in system maintenance than for end-users of the system. This is because the explanation is little more than a trace of the detailed problem-solving steps. An alternative approach recognizes that an effective explanation often needs to substantially reorganize the actual line of reasoning and bring to bear additional information to support the result. Explanation itself becomes a complex problem-solving process that depends not only on the actual line of reasoning, but also on additional knowledge of the domain. This paper presents a new computational model of explanation and argues that it results in significant improvements over traditional approaches.
Article
The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a Belief-Desire-Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective.