Article

Using Justification Patterns to Advise Novice UNIX Users


Abstract

Novice UNIX users have many incorrect beliefs about UNIX commands. An intelligent advisory system for UNIX should provide explanatory responses that correct these mistaken beliefs. To do so, the system must be able to understand how the user is justifying these beliefs, and it must be able to provide justifications for its own beliefs. These tasks require not only knowledge about specific UNIX-related plans but also abstract knowledge about how beliefs can be justified. This paper shows how this knowledge can be represented and sketches how it can be used to form justifications for advisor beliefs and to understand justifications given for user beliefs. Knowledge about belief justification is captured by justification patterns, domain-independent knowledge structures that are similar to the abstract knowledge structures used to understand the point behind a story. These justification patterns allow the advisor to understand and formulate novel belief justifications, giving the advisor the ability to recognize and respond to novel misconceptions.
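
A minimal sketch, in Python, of how a domain-independent justification pattern might be represented and checked against a set of held beliefs. The class names, predicates, and the tiny matcher below are illustrative assumptions for this listing, not the paper's actual knowledge structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class Belief:
    predicate: str               # e.g. "achieves"
    args: tuple                  # e.g. ("rm -i", "delete-files-safely")

@dataclass
class JustificationPattern:
    name: str                    # e.g. "plan-has-known-effect"
    premises: list               # schematic Beliefs; "?x" strings are variables
    conclusion: Belief

def instantiate(schema, bindings):
    # Replace "?variables" in a schematic belief with concrete bindings.
    return Belief(schema.predicate,
                  tuple(bindings.get(a, a) for a in schema.args))

def justifies(pattern, bindings, held_beliefs):
    # The pattern justifies its (instantiated) conclusion when every
    # instantiated premise is already among the held beliefs.
    return all(instantiate(p, bindings) in held_beliefs for p in pattern.premises)

# Tiny usage example with made-up UNIX content.
pattern = JustificationPattern(
    name="plan-has-known-effect",
    premises=[Belief("has-step", ("?plan", "?step")),
              Belief("causes", ("?step", "?effect"))],
    conclusion=Belief("achieves", ("?plan", "?effect")))
held = {Belief("has-step", ("rm -i", "prompt-before-delete")),
        Belief("causes", ("prompt-before-delete", "avoid-accidental-deletion"))}
print(justifies(pattern, {"?plan": "rm -i",
                          "?step": "prompt-before-delete",
                          "?effect": "avoid-accidental-deletion"}, held))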


... AQUA [31], [32], for example, is a help system that conducts dialogues with UNIX users and tries to help them when they face problems in their interaction. In AQUA, the user explicitly states what his/her situation was when the problem arose. ...
Article
Full-text available
This paper describes the adaptation of a cognitive theory, called Human Plausible Reasoning (HPR), for the purposes of an intelligent graphical user interface (GUI). The GUI is called intelligent file manipulator (IFM) and manages files and folders in a similar way to the Windows 98/NT Explorer. However, IFM also incorporates intelligence, which aims at rendering the interaction more human-like than in a standard explorer in terms of assistance with users' errors. IFM constantly reasons about users' actions, goals, plans, and possible errors, and offers automatic assistance in case of a problematic situation. HPR is used in IFM to simulate the reasoning of users in its user modeling component and the reasoning of human expert helpers when they try to provide assistance to users. The adaptation of HPR in IFM has focused on the domain representation, statement transforms, and certainty parameters. The certainty parameters of HPR have been combined in a novel way with user stereotypes and the simple additive weighting theory. IFM has been evaluated and the evaluation results showed that IFM could generate plausible hypotheses about users' errors and helpful advice to a satisfactory extent; hence, HPR seemed to have fulfilled the purpose for which it was incorporated in IFM.
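
As a rough illustration of combining certainty parameters with stereotype information via simple additive weighting, the Python fragment below scores candidate error hypotheses. The parameter names, weights, and hypotheses are invented for the example and are not taken from IFM.

def saw_score(certainty_params, stereotype_weights):
    # Weighted sum of certainty parameters; weights come from the user's
    # stereotype and are normalised so they sum to 1.
    total = sum(stereotype_weights.values())
    return sum(certainty_params.get(name, 0.0) * (weight / total)
               for name, weight in stereotype_weights.items())

# Hypothetical error hypotheses about what the user intended, each with
# HPR-style certainty parameters (all values are made up).
hypotheses = {
    "intended to copy, not move": {"typicality": 0.8, "similarity": 0.9},
    "intended a different folder": {"typicality": 0.5, "similarity": 0.4},
}
novice_weights = {"typicality": 2.0, "similarity": 1.0}   # stereotype: novice
best = max(hypotheses, key=lambda h: saw_score(hypotheses[h], novice_weights))
print(best)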
Article
Online help for learners can be classified along many dimensions, including passive to active, canned to knowledge-based, generic to task specific, collaborative to autonomous, and centralised to distributed. Across these dimensions, software tools that support online help employ techniques such as dialogue analysis, user modelling, context-specific inferencing, collaborative communication, and task-specific pedagogies. In this paper, we present a framework that seamlessly integrates help tools and techniques to facilitate three-way dialogues between a person who requires help (learner), a person who provides help (helper), and the online assistant help system (Helper's Assistant) that mediates between the learner and the helper. The focus of the framework is to provide personalised and context-specific help. We present a case for the need for such online assistants, review some of the key techniques employed in help systems, discuss the salient features of the framework in assisting online helpers, and describe the design and analysis of a study investigating the effectiveness of Helper's Assistant. The novelty in this approach lies particularly in the fact that the focus of the assistant is helping the helper and not the learner directly.
Article
Thesis (Ph. D.)--University of Saskatchewan, 2001. Includes bibliographical references.
Article
Full-text available
Responses to misconceptions given by human conversational partners very often contain information refuting possible reasoning which may have led to the misconceptions. Surprisingly there is a great deal of regularity in these responses across different domains of discourse. For instance, one reason a user might have given an object a property it does not have is that the user confused the object with another similar object. In correcting such a misconception, a human conversational partner is likely to point out this possible confusion. This work describes a method for generating responses like the one just described by reasoning on a highlighted model of the user to identify possible sources of the error. Through a transcript study a number of response strategies were abstracted. Each strategy was associated with a structural configuration of the user model. For example, the above mentioned strategy of pointing out a similar confused object is associated with a configuration of the user model that indicates the user believes there is an important similar object that has the property involved in the misconception. Upon finding that configuration in the highlighted user model, the system can respond with the associated strategy. Notice that the reasoning must be done on a highlighted user model since the perception of both an object's importance and its similarity with another object change with the perspective being taken on the domain. This paper investigates how domain perspective can be modeled to provide the needed highlighting and introduces a similarity metric that is sensitive to the highlighting provided by the domain perspective. Finally, the paper shows how the highlighting affects misconception responses.
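
A minimal sketch of the strategy-selection idea described above: each response strategy is paired with a test on a (hypothetical) highlighted user model, and the first configuration that holds determines the response. The model's interface and the strategies below are assumptions for illustration.

def similar_object_confusion(model, obj, prop):
    # Configuration behind the "confused with a similar object" strategy:
    # some highlighted object similar to obj really does have the property.
    return any(other != obj and model.similar(obj, other)
               and model.has_property(other, prop)
               for other in model.highlighted_objects())

RESPONSE_STRATEGIES = [
    (similar_object_confusion,
     "Point out the similar object that actually has the property."),
]

def respond_to_misconception(model, obj, prop):
    for configuration_holds, strategy in RESPONSE_STRATEGIES:
        if configuration_holds(model, obj, prop):
            return strategy
    return "Simply deny that the object has the property."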
Article
Full-text available
UC (UNIX Consultant) is an intelligent, natural-language interface that allows naive users to learn about the UNIX operating system. UC was undertaken because the task was thought to be both a fertile domain for Artificial Intelligence research and a useful application of AI work in planning, reasoning, natural language processing, and knowledge representation. The current implementation of UC comprises the following components: A language analyzer, called ALANA, that produces a representation of the content contained in an utterance; an inference component called a concretion mechanism that further refines this content; a goal analyzer, PAGAN, that hypothesizes the plans and goals under which the user is operating; an agent, called UCEgo, that decides on UC’s goals and proposes plans for them; a domain planner, called KIP, that computes a plan to address the user’s request; an expression mechanism, UCExpress, that determines the content to be communicated to the user, and a language production mechanism, UCGen, that expresses UC’s response in English. UC also contains a component called KNOME that builds a model of the user’s knowledge state with respect to UNIX. Another mechanism, UCTeacher, allows a user to add knowledge of both English vocabulary and facts about UNIX to UC’s knowledge base. This is done by interacting with the user in natural language. All these aspects of UC make use of knowledge represented in a knowledge representation system called KODIAK. KODIAK is a relation-oriented system that is intended to have wide representational range and a clear semantics, while maintaining a cognitive appeal. All of UC’s knowledge, ranging from its most general concepts to the content of a particular utterance, is represented in KODIAK.
Article
This paper discusses the problem of recognizing and responding to plan-oriented misconceptions in advice-seeking dialogs, concentrating on the problems of novice computer users. A cooperative response is one that not only corrects the user's mistaken belief, but also addresses the missing or mistaken user beliefs that led to it. Responding appropriately to a potentially incorrect user belief is presented as a process of (1) checking whether the advisor holds the user's belief; (2) confirming the belief as a misconception by finding an explanation for why the advisor does not hold this belief; (3) detecting the mistaken beliefs underlying the misconception by trying to explain why the user holds the incorrect belief; and (4) providing these explanations to the user. An explanation is shown to correspond to a set of advisor beliefs, and searching for an explanation to proving whether various abstract configurations of advisor beliefs hold. A taxonomy of domain-independent explanations for potential user misconceptions involving plan applicability conditions, preconditions, and effects is presented.
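
A schematic rendering of the four-step process just described, written in Python against a hypothetical advisor interface; the method names are assumptions, not the paper's terminology.

def respond_to_user_belief(belief, advisor, user):
    # (1) Check whether the advisor shares the user's belief.
    if advisor.holds(belief):
        return None                                   # nothing to correct
    # (2) Confirm it as a misconception by explaining why the advisor
    #     does not hold the belief.
    correction = advisor.explain_why_not_held(belief)
    # (3) Try to explain why the user holds the incorrect belief, i.e.
    #     find the missing or mistaken beliefs underlying it.
    source_of_error = advisor.explain_why_user_holds(belief, user)
    # (4) Provide both explanations to the user.
    return {"correction": correction, "source_of_error": source_of_error}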
Book
"The first volume to appear on this topic and now a classic in the field, "Intelligent Tutoring Systems" provides the reader with descriptions of the major systems implemented before 1981. The introduction seeks to emphasise the principal contributions made in the field, to outline continuing research issues, and to relate these to research activities in artificial intelligence and cognitive science. Subject areas discussed are as varied as arithmetic, algebra, electronics, and medicine, together with some informal gaming environments"
Article
Several years ago, we began a project called UC (UNIX Consultant). UC was to function as an intelligent natural language interface that would allow naive users to learn about the UNIX operating system by interacting with the consultant in ordinary English. We sometimes refer to UC as 'an intelligent 'help' facility' to emphasize our intention to construct a consultation system, rather than a natural language front end to an operating system. Whereas front-ends generally take the place of other interfaces, UC was intended to help the user learn how to use an existing one. Our hope was that the consultation task would require us to address fundamental problems in natural language processing, planning and problem solving, and knowledge representation, all of which are of interest to us. We believe this to be the case because (1) the domain of an operating system is quite large and complex, (2) users' conceptions of computer systems are often based on other domains, particularly space and containment, and (3) the structure of a consultant session requires the consultant to understand the user's language, hypothesize his intentions, reason about the user's problem, access knowledge about the topic in question, and formulate a reasonable response. In sum, virtually all the problems of language processing and reasoning arise in some fashion.
Article
From the Publisher: This book describes a theory of memory representation, organization, and processing for understanding complex narrative texts. The theory is implemented as a computer program called BORIS which reads and answers questions about divorce, legal disputes, personal favors, and the like. The system is unique in attempting to understand stories involving emotions and in being able to deduce adages and morals, in addition to answering fact and event based questions about the narratives it has read. BORIS also manages the interaction of many different knowledge sources such as goals, plans, scripts, physical objects, settings, interpersonal relationships, social roles, emotional reactions, and empathetic responses. The book makes several original technical contributions as well. In particular, it develops a class of knowledge constructs called Thematic Abstraction Units (TAUs) which share similarities with other representational systems such as Schank's Thematic Organization Packets and Lehnert's Plot Units. TAUs allow BORIS to represent situations which are more abstract than those captured by scripts, plans, and goals. They contain processing knowledge useful in dealing with the kinds of planning and expectation failures that characters often experience in narratives; and, they often serve as episodic memory structures, organizing events which involve similar kinds of planning failures and divergent domains. An appendix contains a detailed description of a demon-based parser, a kernel of the BORIS system, as well as the actual LISP code of a microversion of this parser and a number of exercises for expanding it into a full-fledged story-understander. Michael G. Dyer is an Assistant Professor in the Department of Computer Science at UCLA. His book is included in The MIT Press Artificial Intelligence Series.
Conference Paper
This paper presents a theory of reasoning and argument comprehension currently implemented in OpEd, a computer system that reads short politico-economic editorials and answers questions about the editorial contents. We believe that all arguments are composed of a fixed number of abstract argument structures, which we call Argument Units (AUs). Thus, argument comprehension is viewed in OpEd fundamentally as the process of recognizing, instantiating, and applying argument units. Here we discuss: (a) the knowledge and processes necessary to understand opinions, arguments, and issues which arise in politico-economic editorials; and (b) the relation of this research to previous work in natural language understanding. A description of OpEd and examples of its current input/output behavior are also presented in this paper.

I. INTRODUCTION

An intelligent computer program must be able to understand people's opinions and reasoning. This requires a theory of the processes and knowledge sources used during reasoning and argument comprehension. To develop such a theory, we have studied the problems that arise in understanding newspaper and magazine editorials which convey writers' opinions on politico-economic issues. This theory has been implemented in OpEd (Opinions to/from the Editor), a computer program that currently reads two short politico-economic editorial segments and answers questions about the editorial contents. Thus, OpEd also includes a theory of memory search and retrieval for reasoning and argument comprehension.
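
To make the notion of an Argument Unit concrete, here is a small Python sketch of one hypothetical AU and a naive recogniser that instantiates it from extracted claims. The unit name and claim representation are invented for illustration and do not reproduce OpEd's actual AUs.

from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    agent: str       # who asserts the claim
    plan: str        # e.g. "raise-tariffs"
    effect: str      # e.g. "protects-jobs" vs. "destroys-jobs"

def recognize_opposite_effect(claims):
    # Hypothetical AU: two agents attribute conflicting effects to the
    # same plan; recognising the configuration instantiates the unit.
    for c1 in claims:
        for c2 in claims:
            if (c1.agent != c2.agent and c1.plan == c2.plan
                    and c1.effect != c2.effect):
                return {"unit": "AU-OPPOSITE-EFFECT",
                        "plan": c1.plan, "positions": (c1, c2)}
    return None

print(recognize_opposite_effect([
    Claim("editorial-writer", "raise-tariffs", "destroys-jobs"),
    Claim("administration", "raise-tariffs", "protects-jobs")]))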
Conference Paper
Novice computer users have many incorrect beliefs about the commands on their system. This paper considers the problem of providing explanatory responses that correct these mistaken user beliefs. Current approaches correct mistaken beliefs by trying to infer the reasons why the user holds them. In contrast, our advisor corrects these beliefs simply by explaining why he doesn't share them. This allows the advisor to provide reasonable advice even when no robust user model is available. Our advisor constructs this explanation from scratch, using a set of domain-independent strategies for justifying plan-oriented beliefs. This differs from existing systems, such as explanation-based story understanders, that provide explanations by modifying existing explanations and fail to address the underlying problem of forming the initial explanation. This approach gives our advisor the ability to explain novel misconceptions.
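
The Python sketch below illustrates the idea of constructing a correction "from scratch" by trying a set of domain-independent strategies for justifying plan-oriented beliefs; the strategy names and the knowledge-base interface are hypothetical.

def applicability_strategy(belief, kb):
    # Strategy: the plan's applicability condition does not hold for the goal.
    condition = kb.applicability_condition(belief["plan"])
    if not kb.holds(condition, belief["goal"]):
        return f"'{belief['plan']}' only applies when {condition}."
    return None

def better_plan_strategy(belief, kb):
    # Strategy: a different plan is the standard way to achieve the goal.
    preferred = kb.preferred_plan(belief["goal"])
    if preferred != belief["plan"]:
        return f"'{preferred}' is the usual way to {belief['goal']}."
    return None

JUSTIFICATION_STRATEGIES = [applicability_strategy, better_plan_strategy]

def explain_disagreement(belief, kb):
    # Return the first explanation for why the advisor rejects the belief.
    for strategy in JUSTIFICATION_STRATEGIES:
        explanation = strategy(belief, kb)
        if explanation:
            return explanation
    return None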
Article
An intelligent advisory system should be able to provide explanatory responses that correct mistaken user beliefs. This task requires the ability to form a model of the user's relevant beliefs and to understand and address feedback from users who are not satisfied with its advice. This paper presents a method by which a detailed model of the user's relevant domain-specific, plan-oriented beliefs can gradually be formed by trying to understand user feedback in an on-going advisory dialog. In particular, we consider the problem of constructing an automated advisor capable of participating in a dialog discussing which UNIX command should be used to perform a particular task. We show how to construct a model of a UNIX user's beliefs about UNIX commands from several different classes of user feedback. Unlike other approaches to inferring user beliefs, our approach focuses on inferring only the small set of beliefs likely to be relevant in contributing to the user's misconception. And unlike other approaches to providing advice, we focus on the task of understanding the user's descriptions of perceived problems with that advice.
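
A small sketch of the incremental modelling idea: only the beliefs implicated by the user's feedback on a piece of advice are added to the model. The feedback classes and belief tuples below are illustrative assumptions, not the paper's actual taxonomy.

def update_user_model(user_beliefs, advice, feedback):
    # advice: e.g. {"task": "delete an empty directory", "command": "rmdir"}
    # feedback: (class, detail) describing the user's objection to the advice.
    kind, detail = feedback
    if kind == "already-tried-it":
        # User believes the recommended command does not achieve the task.
        user_beliefs.add(("not-achieves", advice["command"], advice["task"]))
    elif kind == "expected-other-command":
        # User believes some other command achieves the task.
        user_beliefs.add(("achieves", detail, advice["task"]))
    elif kind == "unwanted-effect":
        # User believes the command has an undesirable side effect.
        user_beliefs.add(("has-effect", advice["command"], detail))
    return user_beliefs

model = update_user_model(set(),
                          {"task": "delete an empty directory", "command": "rmdir"},
                          ("already-tried-it", None))
print(model)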
Article
Thesis (Ph. D.)--University of California, Los Angeles, 1991. Typescript (photocopy). Vita. Includes bibliographical references (leaves 321-328).
Article
In discourse processing, two major problems are understanding the underlying connections between sucee.
Kautz, H. & Allen, J. (1986). Generalized Plan Recognition. In Proceedings of the Sixth National Conference on Artificial Intelligence. Philadelphia, PA.

McCoy, K. (1989). Reasoning on a Highlighted User Model to Respond to Misconceptions. In Kobsa, A. & Wahlster, W. (eds.), User Modeling and Dialog Systems. New York, NY: Springer Verlag.

Carberry, S. Modeling the User's Plans and Goals. In Kobsa, A. & Wahlster, W. (eds.), User Modeling and Dialog Systems. New York, NY: Springer Verlag.

Flowers, M., McGuire, R. & Birnbaum, L. Adversary Arguments and the Logic of Personal Attacks. In Strategies for Natural Language Processing.

Quilici, A. Participating in Plan-oriented Dialogs.