Knowledge Acquisition

Print ISSN: 1042-8143
Publications
Knowledge-based systems provide intellectual assistance through the manipulation of symbols. The theoretical basis for this manipulation is derived from the logic research programme, which restricts the creation of models to predefined symbols, with a semantics based upon truth and valid inference. However, it is made clear that the design of any artefact, including a knowledge-based system, will require one or more representations that emerge from the assignment of new meanings to symbols, with a semantics based upon experience and non-valid inference. Such a representation models an abstraction of the world and is abduced from a theory. It is proposed that there should be a parallel research programme to that of Logic: a programme that investigates the human use of theories, models and artefacts in design and discovery.
 
We describe three general temporal-abstraction mechanisms needed for managing time-stamped data: point temporal abstraction (a mechanism for abstracting several parameter values into one class); temporal inference (a mechanism for inferring sound logical conclusions over a single interval or two meeting intervals); and temporal interpolation (a mechanism for bridging non-meeting temporal intervals). Making explicit the knowledge required for temporal abstractions supports the acquisition of problem-solving knowledge needed for planning, plan execution, problem identification and plan revision. These mechanisms are implemented in the RÉSUMÉ system, and will be used in the context of our ongoing PROTÉGÉ-II project, whose goal is to generate knowledge-based systems automatically, as well as the appropriate knowledge-acquisition tools, custom-tailored to acquire the specific domain and task knowledge needed by the specific problem-solving method chosen for the task.
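As an illustration of the three mechanisms described in this abstract, the following minimal Python sketch shows point abstraction, inference over two meeting intervals, and interpolation across a gap. The interval representation, the class boundaries and the maximum-gap threshold are assumptions made for the example, not RÉSUMÉ's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interval:
    start: int   # time stamps, e.g. hours since the start of monitoring
    end: int
    value: str   # abstracted class such as "LOW", "NORMAL", "HIGH"

def point_abstraction(raw_value: float) -> str:
    """Point temporal abstraction: map one raw parameter value to a class."""
    if raw_value < 4.0:
        return "LOW"
    return "NORMAL" if raw_value <= 10.0 else "HIGH"

def temporal_inference(a: Interval, b: Interval) -> Optional[Interval]:
    """Join two *meeting* intervals that carry the same abstraction."""
    if a.end == b.start and a.value == b.value:
        return Interval(a.start, b.end, a.value)
    return None

def temporal_interpolation(a: Interval, b: Interval, max_gap: int = 6) -> Optional[Interval]:
    """Bridge two *non-meeting* intervals if the gap between them is small enough."""
    if a.value == b.value and 0 < b.start - a.end <= max_gap:
        return Interval(a.start, b.end, a.value)
    return None

# Two HIGH episodes separated by a two-hour gap are merged into one abstraction.
merged = temporal_interpolation(Interval(0, 4, "HIGH"), Interval(6, 9, "HIGH"))
```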
 
Most work in knowledge acquisition assumes that there is a single problem-solving task to which the knowledge base being constructed through acquisition will be applied. However, there is growing interest in a class of knowledge-based systems, referred to as organizational knowledge systems, where it cannot be assumed that there is a single task model or problem-solving method for which the knowledge is to be acquired. This paper discusses this type of knowledge-based system and the factors in organizational problem-solving that give rise to it, and presents a framework for knowledge acquisition in this environment. The acquisition framework, known as Conceptual Abstraction, has its basis in semantic data modeling, model-based KA, and visual programming. It has been effectively applied in the specification of knowledge schemata for organizational knowledge bases. The paper discusses the underlying model for abstraction and the associated abstraction mechanisms of the Conceptual Abstraction approach, and illustrates their use through examples.
 
In this article we present an automated method for acquiring strategic knowledge from experts. Strategic knowledge is used by an agent to decide what action to perform next, where actions affect both the agent's beliefs and the state of the external world. Strategic knowledge underlies expertise in many tasks, yet it is difficult to acquire from experts and is generally treated as an implementation problem. The knowledge acquisition method consists of the design of an operational representation for strategic knowledge, a technique for eliciting it from experts, and an interactive assistant that manages a learning dialog with the expert. The assistant elicits cases of expert-justified strategic decisions and generalizes strategic knowledge with syntactic induction guided by the expert. The knowledge acquisition method derives its power and limitations from the way in which strategic knowledge is represented and applied.
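The representation described in this abstract can be pictured with a small sketch: a case is an expert-justified (beliefs, action, justification) triple, and generalization proceeds by syntactic induction under the expert's guidance. The rule format and the "drop the conditions judged irrelevant" step are illustrative assumptions, not the paper's actual representation.

```python
from dataclasses import dataclass

@dataclass
class StrategyRule:
    conditions: frozenset     # beliefs that must hold before the action is chosen
    action: str               # what the agent should do next
    justification: str        # the expert's explanation, kept for later review

def elicit_case(beliefs, action, justification) -> StrategyRule:
    """Record one expert-justified strategic decision as a maximally specific rule."""
    return StrategyRule(frozenset(beliefs), action, justification)

def generalize(rule: StrategyRule, irrelevant) -> StrategyRule:
    """Syntactic induction guided by the expert: drop conditions judged irrelevant."""
    return StrategyRule(rule.conditions - frozenset(irrelevant),
                        rule.action, rule.justification)

case = elicit_case({"leak-suspected", "pump-on", "night-shift"},
                   "run-pressure-test",
                   "a pressure test isolates the suspected leak")
rule = generalize(case, irrelevant={"night-shift"})
```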
 
Knowledge acquisition tools can be associated with knowledge-based application problems and problem-solving methods. This descriptive approach provides a framework for analysing and comparing tools and techniques, and focuses the task of building knowledge-based systems on the knowledge acquisition process. Knowledge acquisition research strategies discussed at recent Knowledge Acquisition Workshops are shown, distinguishing dimensions of knowledge acquisition tools are listed, and short descriptions of current techniques and tools are given.
 
Acquiring knowledge from a human expert is a major problem when building a knowledge-based system. AQUINAS, an expanded version of the expertise transfer system (ETS), is a knowledge acquisition workbench that combines ideas from psychology and knowledge-based systems research to support knowledge acquisition tasks. AQUINAS interviews experts directly and helps them organize, analyse, test, and refine their knowledge bases. Expertise from multiple experts or other knowledge sources can be represented and used separately or combined, giving consensus and dissenting opinions among groups of experts. Results from user consultations are derived from information propagated through hierarchies. ETS and AQUINAS have assisted in building knowledge-based systems for several years at The Boeing Company.
 
A primitives-based generic approach to perform knowledge acquisition is proposed. The approach is generic in the sense that it enables the user to construct domain-specific knowledge acquisition tools for specific tasks. It is also primitives-based since the construction of specific knowledge acquisition tools is based on a primitives kernel that contains problem solving primitives, acquisition primitives, interaction primitives, representation schemas and knowledge verification primitives. A generic knowledge acquisition shell is developed on the basis of this approach. It facilitates the development of proper specific knowledge acquisition tools for specific tasks through the construction of experimental knowledge acquisition tools. Furthermore, the shell is developed as an open architecture (i.e. separating the generic knowledge acquisition structure from specific knowledge acquisition structures) so that further enhancement can be done readily.
 
Workers in artificial intelligence (AI) have developed many interactive programs that assist in the knowledge-acquisition process. Because of the diverse nature of these tools, it is often difficult to understand how each one relates to all the others. This paper describes a taxonomy for knowledge-acquisition aids that is based on the terms and relationships that a given tool uses to establish the semantics of a user's entries. Such semantic assumptions, or conceptual models, have important implications for how a knowledge-acquisition tool is used and to what degree it can assist its users in analysing new applications at the knowledge level. Furthermore, when the conceptual model of a knowledge-acquisition tool can be made explicit, knowledge engineers can use metalevel programs to edit that conceptual model, creating knowledge-acquisition aids that are custom-tailored for particular applications. One such metalevel tool, PROTÉGÉ, has been developed to allow editing of the conceptual models of programs that acquire knowledge for tasks that can be solved via the method of skeletal-plan refinement.
 
Laddering is a structured questioning technique derived from the repertory grid technique, enabling a hierarchy of concepts to be established. Previous empirical studies have demonstrated its utility for knowledge elicitation in two classificatory domains. In this paper, three experimental studies of laddering are described. In experiment 1, the technique was used in another classificatory domain, metallic corrosion, and the effects of repeated exposure to the technique and feedback, in the form of Pseudo-English Production Rules, were investigated. These variables had no effect on the productivity of the technique. Experiment 2 compared the laddering technique with three other techniques in a medical diagnostic domain. As in previous studies, laddering was found to be the most productive technique despite the change in the type of domain about which knowledge was elicited. In experiment 3, the preferences of subjects interviewed using two versions of the laddering technique, "textual" and "graphical", were compared with those obtained when the subjects were interviewed using a computerised laddering tool. Although the "gain" obtained in the three conditions varied, the group of subjects did not prefer one type of laddering to another. The laddering tool used in the experiment was designed as one tool in an integrated Knowledge Engineering Workbench (KEW). The potential for synergy between the laddering tool and other knowledge acquisition techniques implemented within KEW is explored. Guidance and advice concerning the appropriate contexts in which to employ laddering within the knowledge acquisition process are provided.
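A laddering interview can be pictured as a recursive prompt that grows a concept hierarchy downwards. The sketch below is not the KEW laddering tool; the prompt wording and the dictionary-of-lists representation of the ladder are assumptions for illustration.

```python
def ladder_down(concept, ask=input):
    """Grow the subtree below `concept` by repeatedly asking for its subtypes."""
    tree = {concept: []}
    answer = ask(f"What kinds of {concept} are there? (comma-separated, blank to stop) ")
    for child in filter(None, (part.strip() for part in answer.split(","))):
        tree[concept].append(ladder_down(child, ask))
    return tree

# Example run with canned answers standing in for a live expert:
canned = iter(["pitting, crevice", "", ""])
hierarchy = ladder_down("corrosion", ask=lambda prompt: next(canned))
# -> {'corrosion': [{'pitting': []}, {'crevice': []}]}
```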
 
It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.
 
The paradigm of knowledge-based systems has become of practical interest to a broad variety of persons: software engineers, knowledge engineers and domain experts. Therefore, it becomes necessary to make explicit the underlying assumptions of the field. In this paper, the terms “knowledge” and “modeling” as they occur in texts on knowledge acquisition and machine learning are investigated. It is shown that the terms are used with very different meanings corresponding to different views of knowledge acquisition. The transfer view, the performance or building-blocks view, the knowledge-level or stepwise refinement view, and the constructive view of knowledge and its acquisition are described. The implications for designing systems which support a user in constructing a knowledge base are indicated. In particular, it is stressed that systems must support revisions of all modeling decisions if we want to avoid the next bottleneck: the bottleneck of knowledge-base maintenance.
 
Current knowledge acquisition literature provides few guidelines as to the appropriate set of validation criteria and techniques to be applied at various stages of the knowledge-based systems (KBS) development process. Utilizing a representational homomorphism definition of validation, this paper proposes a framework where validation evolves as a sequence of stages paralleling the different stages of the KBS development life cycle. This framework incorporates an inventory of validation methods and identifies the entities to be measured, the types of evidence to be collected, the criteria to be applied and the type of comparisons to be made to assess validity.
 
Many efforts in knowledge acquisition are designed from a knowledge engineer's perspective and as a consequence fall short of allowing experts to elaborate successfully their own situated knowledge. Knowledge engineering approaches are typically not user-centered and consequently are often the cause of a bottleneck in system development. This paper describes and evaluates the Advanced Knowledge And Design Acquisition Methodology (AKADAM) project as an attempt to overcome such inadequacies by provision of user-centered knowledge acquisition techniques. Both theoretical and practical issues are examined. The role of multiple perspectives (i.e. "knowledge as rules", "knowledge as concepts", and "knowledge as designs"), their relationship to a user-centered approach, and the necessity of flexible knowledge integration are portrayed by applying AKADAM to a complex, real-world domain (i.e. the development of an electronic associate for fighter pilots). Results suggest that this approach is capable of providing: (a) a naturalistic knowledge elicitation environment endorsed by users, (b) an externalization of experts' intuitive knowledge in a form which is similar to their own mental representation and (c) an integrated, large-scale knowledge set suitable for infusing knowledge into AI architectures and human-computer interface design.
 
The Knowledge Engineering process may be partitioned into three general phases. The first phase is characterized by domain definition, application selection and resource commitment. The second reflects the numerous iterations of knowledge acquisition and knowledge-based development. The final phase is the installation, maintenance and use of the system. Unlike phase two, which is characterized by powerful methodologies, the first phase has few structured techniques for domain definition. Knowledge Acquisition principles, but not current knowledge acquisition tools, are applicable to phase one activity. Cognosys is a semi-automated system developed to build general content models of a domain before the use of specific knowledge acquisition tools. Cognosys is described, as are future developments and research using the tool.
 
Knowledge acquisition for expert systems is a purely practical problem to be solved by experiment, independent of philosophy. However the experiments one chooses to conduct will be influenced by one's implicit or explicit philosophy of knowledge, particularly if this philosophy is taken as axiomatic rather than as an hypothesis. We argue that practical experience of knowledge engineering, particularly in the long term maintenance of expert systems, suggests that knowledge does not necessarily have a rigorous structure built up from primitive concepts and their relationships. The knowledge engineer finds that the expert's knowledge is not so much recalled, but to a greater or lesser degree “made up” by the expert as the occasion demands. The knowledge the expert provides varies with the context and gets its validity from its ability to explain data and justify the expert's judgement in the context. We argue that the physical symbol hypothesis with its implication that some underlying knowledge structure can be found is a misleading philosophical underpinning for knowledge acquisition and representation. We suggest that the “insight” hypothesis of Lonergan (1958) better explains the flexibility and relativity of knowledge that the knowledge engineer experiences and may provide a more suitable philosophical environment for developing knowledge acquisition and representation tools. We outline the features desirable in tools based on this philosophy and the progress we have made towards developing such tools.
 
This paper describes the use of hypermedia systems and tools in knowledge acquisition and in the specification of user interfaces for knowledge-based systems. A pragmatic approach that combines informal task analysis with object-oriented modelling is introduced. This is supported by a hypermedia-based specification and documentation tool for knowledge requirements and user interface requirements. This approach and the tool are illustrated with examples from a case study: a specification of an intelligent user interface for a statistical analysis package.
 
Knowledge modelling is undoubtedly a major problem in knowledge acquisition. Drawing from industrial case studies that have been carried out, the paper lists some key problems which still dog knowledge modelling. Next, it critically reviews current knowledge modelling techniques and tools and concludes that these real knowledge acquisition issues are not tackled by them. We consider the spelling out of these problems and the fact that they are not addressed by current tools and techniques to be a major contribution of this paper. The paper strongly argues for knowledge modelling to be domain-driven, i.e. driven by the nature of the domain being modelled. The key argument in this paper is that ignoring the nature or characterization of the domain inevitably results in knowledge imposition rather than knowledge acquisition as domains get shoe-horned into some (current) set of models, representations and tools. After examining the nature of domains, the paper proceeds to outline an emerging hypothesis for knowledge modelling. It concludes with a specification of a tool suite for addressing the issues identified in this paper.
 
Metatool support for knowledge acquisition is an approach to automate the implementation of domain-specific knowledge-acquisition tools. Dots is a metatool that can be used by developers to generate domain-specific knowledge-acquisition tools. Whenever a domain model useful for expressing the relevant expertise can be established, developers can use Dots to specify and generate a knowledge-acquisition environment for development of expert systems. Dots assumes that the knowledge-acquisition tools generated are based on the knowledge-elicitation technique of graphical knowledge editing. A salient aspect of Dots is that no particular domain, task or problem-solving method is presupposed by the metatool. We achieve this generalization by introducing an abstract-architecture view—that is, an architectural model of the target knowledge-acquisition tool—as the framework for specifying target knowledge-acquisition tools. Dots provides facilities for editing this abstract architecture and for instantiating knowledge-acquisition tools from it.
 
There is much current interest in automation of manual elicitation techniques, but little is known about whether automated versions of a technique produce similar results to the manual versions. This paper describes a formal comparison between an item sort, a card sort and a computerized label sort in the same domain. No significant differences were found between the types of knowledge elicited by different types of sort. These findings suggest that computerized implementations of sorting procedures will elicit the same knowledge as manual sorts. This result also emphasizes the need for advice about knowledge elicitation to be based on formal experimental results rather than on assumptions, a priori reasoning or case studies.
 
The strengths and weaknesses of our earlier system, KEATS-1, have led us to embark upon the design and implementation of a new knowledge engineering environment, KEATS-2, which provides a novel, integrated framework for performing both bottom-up and top-down knowledge acquisition. In this paper we discuss the nature of the knowledge acquisition activities and we introduce the support tools embedded in KEATS-2. We characterize knowledge acquisition as the composition of knowledge elicitation, data analysis and domain conceptualization and we emphasize that a knowledge engineering tool has to support these activities as well as bridging the gap between acquiring the data and implementing the final system.
 
A Group Decision Support System (GDSS) environment provides computer and group-process support to individuals working together toward common goals. Knowledge acquisition from multiple experts can be structured as a group activity. This paper discusses a methodology of employing a GDSS environment to facilitate the acquisition of knowledge from a group of experts. Observations of a field study which utilized the methodology to develop an expert system for an information centre are discussed.
 
The effective application of current decision tree and influence diagram software requires a relatively high level of sophistication in the theory and practice of decision analysis. Research on intelligent decision systems aims to lower the cost and amount of training required to use these methods through the use of knowledge-based systems; however, application prototypes implemented to date have required time-consuming and tedious hand-crafting of knowledge bases. This paper describes the development of DDUCKS, an “open architecture” problem-modeling environment that integrates components from AXOTL, a knowledge-based decision analysis workbench, with those of AQUINAS, a knowledge acquisition workbench based on personal construct theory. The knowledge base tools in AXOTL can be configured with knowledge to provide guidance and help in formulating, evaluating and refining decision models represented in influence diagrams. Knowledge acquisition tools in DDUCKS will allow the knowledge to be efficiently modeled, more easily maintained, and thoroughly tested.
 
The evaluation of knowledge acquisition (KA) tools, techniques and products is a key concern for researchers in KA. This paper presents and demonstrates the use of a framework for generating testable propositions to guide empirical research evaluating KA tools and for integrating the findings of past, ongoing and future studies. By considering the tools and techniques used in KA as independent variables, it isolates two major categories of dependent variables and discusses empirical measures for them. Additionally, it examines four sets of moderating variables that bear upon the success of any KA activity: (1) human factors; (2) problem space characteristics; (3) system development approach; and (4) the organizational environment. The research methods suitable for the comparison of KA tools and techniques are also discussed.
 
Bias can hinder the process of knowledge acquisition by causing discrepancies between what is reported by the expert or interpreted by the knowledge engineer and the expert's actual thinking. Thus, bias affects the building of systems that reflect the experts' actual heuristics and operational procedures. Bias is defined, here, as an altering or misrepresentation of the expert's thought processes. This paper focuses on three manual elicitation techniques—the verbal protocol, a type of verbal probe, and the ethnographic query. These techniques were tailored to avoid the introduction of bias and to be easy for the lay person to use. For each technique, detailed guidance is given on situations in which it can be used, how to set up for its use, and how it can be administered.
 
Despite an increased interest in knowledge elicitation, there is still very little formal evidence evaluating the relative efficiency of the techniques available. In this paper we compare four KE techniques: structured interview, protocol analysis, card sort and laddered grid. Studies are reported across two classification domains, using eight experts in each. Despite its common usage, protocol analysis is shown to be the least efficient technique. The implications of this finding are reviewed. Finally, a study is reported in which non-experts are subjected to “knowledge elicitation”. Subjects entirely ignorant of a domain are able to construct plausible knowledge bases from common sense alone. The ramifications of these findings for knowledge engineers are discussed.
 
During the evolution of a design concept, designers must integrate diverse sources and kinds of information about requirements, constraints, and tradeoffs. In doing so, they make certain assumptions and develop criteria against which alternatives are evaluated for suitability. Unfortunately, much of this process is implicit, making later review difficult if not impossible. When requirements change, impacts on the design are difficult to trace. This can lead to costly rework or serious errors. We are developing “Canard”, an automated tool which uses possibility tables, constraints, and knowledge bases to assist in the generation of design alternatives consistent with goals and constraints. The facility also attempts to capture and document assumptions and tradeoffs made during the design process. We present an example which illustrates the use of Canard for a simple configuration problem. A more complex example traces the activity of a Boeing expert building a possibility table for robot arm design. Finally, the application of Canard to a NASA corporate memory facility project is described.
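The core idea of generating design alternatives from a possibility table and pruning them with constraints can be sketched in a few lines. The robot-arm parameters and constraints below are made up for illustration and are not Canard's knowledge base.

```python
from itertools import product

possibility_table = {
    "actuator":   ["electric", "hydraulic"],
    "payload_kg": [5, 20, 50],
    "reach_m":    [0.5, 1.5],
}

constraints = [
    lambda d: not (d["actuator"] == "electric" and d["payload_kg"] > 20),
    lambda d: d["payload_kg"] <= 20 or d["reach_m"] >= 1.5,
]

def alternatives(table, constraints):
    """Enumerate the cross-product of possibilities, keeping consistent designs."""
    names = list(table)
    for values in product(*(table[name] for name in names)):
        design = dict(zip(names, values))
        if all(check(design) for check in constraints):
            yield design

consistent_designs = list(alternatives(possibility_table, constraints))
```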
 
There are two main problems in the development of a knowledge-based system (KBS). The first is the modelling of domain expertise. The second is the modelling of the application of this knowledge to tasks that future users want to perform. This paper discusses how the second problem can be addressed in a systematic way. Our argument is that the second problem is at least equally important and that, if it is given serious attention, the first problem will become simpler, because efforts can be directed at the subset of expertise that is actually required. This Analysis of Cooperation helps to arrive at a consistent set of functional requirements for a future KBS and a population of intended users. It comprises (i) a theoretical framework for system development, (ii) a technique for constructing a model of cooperation and (iii) a recommendation to use the “Wizard of Oz” technique for validating a model of cooperation in experiments with future users. In such an experiment, users attempt to perform tasks with the help of a mock-up of the future system operating according to the model of cooperation.
 
The numerous tasks required by the knowledge engineering process and their inherent complexity combine to make building knowledge-based systems a time-consuming and arduous activity. The key to reducing the complexity of the problem is to provide a methodological framework which can clarify the nature of the intermediate steps required to encode knowledge effectively into a performance system. Such a framework can then be used to drive the design of a comprehensive knowledge engineering toolkit. This is the approach we adopted in the KEATS project. In this paper, we provide an overview of the KEATS knowledge engineering methodology, which is based on a view of knowledge engineering as iterative refinement of qualitatively and teleologically different models, and we show how these ideas have driven the design of the KEATS toolkit.
 
Knowledge base building environments must progress in two important directions: (i) increased participation of domain experts in the knowledge design process through new computational models and effective man-machine interfaces; and (ii) automated knowledge acquisition tools to facilitate the overt expression of knowledge. This paper presents the integration of a knowledge acquisition methodology with a performance system. The resulting architecture represents a combination of techniques from psychology, cognitive sciences and artificial intelligence. New dimensions emerge from this implementation and integration both at the theoretical and practical levels. The overall system is not linked to a particular control structure and is not task dependent. We discuss the value of intermediate representations in this context and the role of different approaches to the induction process. Topological induction is particularly efficient in the elicitation process and stresses the importance of interactive inductive techniques with participation from the experts. While the knowledge acquisition tool provides an analysis and structuring of the domain knowledge, the control is implemented using the performance system's interface. Therefore, both modules participate in the overall knowledge acquisition process. Beyond the integration of these knowledge acquisition and performance systems, the architecture can also be integrated with databases, text analysis techniques, and hypermedia systems.
 
A knowledge-acquisition system was designed and built to help an architectural firm automate their diagnosis of building problems. The system was tailored to the firm's database system and building survey method. Rules are generated from the data by the inductive learning algorithms ID3 or AQ11, and the orderly development of the rule-base is ensured by a verification procedure. Architectural diagnostics rely on the expertise of an experienced analyst. Building diagnostic processes require spatial, verbal and numerical reasoning. The rigid data structure imposed by the firm excluded crucial levels of semantic information. Induction methods proved useful tools for organizing data, but the expertise was captured by ad hoc editing of the rule-base. This project identified a need in the building industry to develop a taxonomy and a representation to support the building diagnostic process.
 
Humans are well-known for being adept at using their intuition and expertise in many situations. However, in some settings even human experts are susceptible to errors in judgement, and a failure to recognize the limits of knowledge. This happens especially in semi-structured situations, where multi-disciplinary expertise is required, or when uncertainty is a factor. At these times our natural ability to recognize and correct errors fails us, since we have faith in our reasoning. One way to deal with such problems is to have a computerized “critic” to assist in the process. This article introduces the concept of automated critics that collaborate with human experts to help improve their problem solving performance. A critic is a narrowly focused program that uses a knowledge base to help it recognize (1) what types of human error have occurred, and (2) what kinds of criticism strategies could help the user prevent or eliminate those errors. In discussing the “errors” half of this knowledge base, there is a difference between the expert's knowledge and his or her judgement. The focus in this article is more on judgement than on knowledge, but both are addressed.
 
KNACK is an automated, specialized knowledge acquisition tool for generating expert systems that assist with different classes of reporting tasks. KNACK has been used by knowledge engineers to produce a series of reporting systems. Whenever such a system revealed inadequacies, KNACK was used to improve it. If individual task requirements could not be realized with KNACK, the knowledge base was manually enhanced. This paper summarizes the experience gained and outlines the scope of the current KNACK tool. It characterizes the application systems built, relates data on KNACK's performance to the defined characteristics, identifies shortcomings, and suggests directions along which to extend KNACK.
 
We describe an approach for understanding devices which provides leverage for the knowledge acquisition task. In our functional approach, knowledge is acquired in three steps: the device is decomposed into an ensemble of subdevices, the functions/goals/purposes of each subdevice are stated abstractly, and finally, the way(s) each function/goal/purpose is accomplished is represented. The heart of our approach is a distribution of causal knowledge into fragments which are indexed by the functions/purposes/goals of the device. We show that the functional approach supports automated knowledge acquisition for diagnostic problem solving, and we argue that it also supports manual knowledge acquisition for device knowledge.
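The three acquisition steps named in this abstract (decompose the device, state each function abstractly, represent how it is accomplished) suggest a simple indexing structure. The sketch below is an assumption-laden illustration; the field names and the cooling-system example are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str          # abstract statement of a function/goal/purpose
    achieved_by: list  # causal fragment: the behaviour steps that accomplish it

@dataclass
class Device:
    name: str
    subdevices: list = field(default_factory=list)
    functions: list = field(default_factory=list)

pump = Device("pump", functions=[
    Function("deliver-coolant",
             achieved_by=["motor turns impeller",
                          "impeller raises pressure",
                          "pressure forces coolant through the outlet"])])
cooling_system = Device("cooling-system", subdevices=[pump])

def causal_index(device):
    """Collect causal fragments indexed by the functions they accomplish."""
    index = {f.name: f.achieved_by for f in device.functions}
    for sub in device.subdevices:
        index.update(causal_index(sub))
    return index
```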
 
It is relatively easy for design and scheduling experts to supply considerations that determine individual design parameters and constraints on individual design parameters. It is much more difficult for them to describe a strategy for the order in which these considerations and constraints should be addressed. SALT is a successful knowledge acquisition tool that elicits the knowledge pieces design experts find easiest to supply and interprets the pieces to represent their connections. This paper describes a proposal for building strategic knowledge from this representation using a combination of analysis of the existing knowledge base and acquisition of domain-specific control knowledge for ordering subtasks.
 
For the purpose of modelling commonsense reasoning, we investigate connectionist models of rule-based reasoning, and show that while such models can usually carry out reasoning in exactly the same way as symbolic systems, they have more to offer in terms of commonsense reasoning. A connectionist architecture, CONSYDERR, is proposed for capturing certain commonsense reasoning competence, which partially remedies the brittleness problem in traditional rule-based systems. The architecture employs a two-level, dual representational scheme, which utilizes both localist and distributed representations and explores the synergy resulting from the interaction between the two. CONSYDERR is therefore capable of accounting for many difficult patterns in commonsense reasoning with this simple combination of the two levels. This work shows that connectionist models of reasoning are not just “implementations” of their symbolic counterparts, but better computational models of commonsense reasoning.
 
One problem of eliciting knowledge from several experts is that experts may share only parts of their terminologies and conceptual systems. Experts may use the same term for different concepts, use different terms for the same concept, use the same term for the same concept, or use different terms and have different concepts. Moreover, clients who use an expert system have even less likelihood of sharing terms and concepts with the experts who produced it. This paper outlines a methodology for eliciting and recognizing such individual differences. It can then be used to focus discussion between experts on those differences between them which require resolution, enabling them to classify these differences in terms of differing terminologies, levels of abstraction, disagreements, and so on. The methodology promotes the full exploration of the conceptual framework of a domain of expertise by encouraging experts to operate in a “brain-storming” mode as a group, using differing viewpoints to develop a rich framework. It reduces social pressures forcing an invalid consensus by providing objective analysis of separately elicited conceptual systems.
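One way to operationalize the four cases named in this abstract (same or different terms, same or different concepts) is to compare how two experts use their constructs over shared elements. In the hedged sketch below, concept identity is approximated by the similarity of repertory-grid ratings; the distance measure, threshold and labels are assumptions for illustration.

```python
def classify_pair(term_a, term_b, ratings_a, ratings_b, threshold=1.0):
    """Classify one construct from each expert by shared term and shared usage."""
    same_term = term_a == term_b
    distance = sum(abs(x - y) for x, y in zip(ratings_a, ratings_b)) / len(ratings_a)
    same_concept = distance <= threshold
    if same_term and same_concept:
        return "consensus"        # same term, same concept
    if same_term:
        return "conflict"         # same term, different concepts
    if same_concept:
        return "correspondence"   # different terms, same concept
    return "contrast"             # different terms, different concepts

# Two experts rate the same five elements on constructs they each named:
print(classify_pair("reliability", "robustness",
                    [5, 1, 4, 2, 3], [5, 2, 4, 2, 3]))   # -> "correspondence"
```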
 
Two studies were designed to evaluate the efficiency of decision table representations for constructing and comprehending expert system rules by nonprogrammers with no experience in either knowledge engineering or expert systems. The first study compared the speed and accuracy of a decision table editor for constructing rules in a tabular representation relative to a standard text editor. Rules were constructed faster and more accurately with the decision table editor than with the text editor. The second study focused on the representational value of decision tables for comprehending expert system rules. In a verification task, subjects responded to questions of different types as accurately and rapidly as possible on the basis of the logical structure of a set of rules represented in either a decision table or textual format. The decision table showed an advantage only in situations where the diagrammatic, integral representation of the decision table expedited the perceptual and symbolic matching processes involved in the search.
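The equivalence that the first study exploits, between a tabular rule representation and its textual form, can be made concrete with a toy example. The loan-screening table below is an assumption, not the study's material.

```python
# Each combination of conditions maps to one action; printing the table as
# IF-THEN rules shows the textual representation the decision table replaces.
decision_table = {
    # (income_ok, credit_ok): action
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
    (False, False): "reject",
}

def as_rules(table):
    return [f"IF income_ok = {income} AND credit_ok = {credit} THEN {action}"
            for (income, credit), action in table.items()]

for rule in as_rules(decision_table):
    print(rule)
```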
 
We propose a method for constructing domain knowledge and explain it by using the office domain as an example. Our method produces generic concepts which cover the domain within their scope. They serve as tools for modelling applications, and enable model builders to adopt comprehensive and unbiased points of view. These concepts possess such practical properties because they implement ontological principles, i.e. the most important ways of viewing and discriminating a domain's objects. When built in a principled manner, domain concepts are highly modular; they can be refined and assembled without overlap to form case models. Ontological principles can further guide model builders in decomposing a modelling task.
 
Our research goal is to make programming easier. A central focus in our research is developing programming constructs, called mechanisms, that are both (1) usable and (2) reusable. A mechanism is usable if it can be used to automate a task by someone who understands and performs that task, but who does not necessarily know how to program. A mechanism is reusable if it can be employed for several domains and tasks. Focusing on usability can lead to application-specific mechanisms, that is, it can reduce reusability. On the other hand, focusing on reusability can lead to task-independent structures, that is, it can lessen their usability. The trick is to come up with mechanisms satisfying both requirements: usability and reusability. We have identified some quantitative characteristics describing such mechanisms.
 
In this paper we examine the greedy mutual information algorithm for decision tree design, analysing both its theoretical basis and a practical application (edge detection). We review our earlier theoretical results on tree design algorithms such as ID3, which confirm that the greedy mutual information heuristic is well-founded. The theoretical models based on rate-distortion theory and prefix-coding analogies explain previously observed experimental phenomena reported in the literature. An application to edge detection is described where we primarily emphasize the inductive methodology rather than the domain application (image processing) per se. We conclude that inductive learning paradigms based on information-theoretic models are both theoretically well-behaved and useful in practical problems.
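The greedy mutual-information heuristic at the centre of this abstract picks, at each node, the attribute whose split yields the largest information gain. A minimal sketch, with a made-up edge-detection-flavoured data set:

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(examples, attribute, label="class"):
    """Mutual information between `attribute` and the class label."""
    before = entropy([e[label] for e in examples])
    after = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e[label] for e in examples if e[attribute] == value]
        after += len(subset) / len(examples) * entropy(subset)
    return before - after

data = [{"contrast": "high", "texture": "smooth", "class": "edge"},
        {"contrast": "high", "texture": "rough",  "class": "edge"},
        {"contrast": "low",  "texture": "smooth", "class": "no-edge"},
        {"contrast": "low",  "texture": "rough",  "class": "no-edge"}]

best = max(["contrast", "texture"], key=lambda a: information_gain(data, a))
# -> "contrast": the greedy heuristic would split on it first
```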
 
All knowledge-based systems need to make decisions. Typically, these decisions involve selecting one choice out of a small number of alternatives based on a given set of parameters. The knowledge needed to make such decisions can often be expressed with a task-specific knowledge-based technique that we call structured matching, which integrates the knowledge and control for making a decision within a hierarchical structure. We show that structured matching is a common technique by documenting its use in several knowledge acquisition and knowledge-based systems. From a formal description of structured matching, we demonstrate why structured matching is so useful: it is tractable, it is applicable to classification and recognition tasks, and it facilitates knowledge acquisition. We conclude that structured matching is a ubiquitous and powerful technique for characterizing and constructing knowledge-based systems.
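Structured matching, as characterized in this abstract, combines the matches of child nodes into a qualitative value at each level of a hierarchy. The node format, combination tables and fault-candidate example below are assumptions sketched for illustration.

```python
def match(node, evidence):
    """Return this node's qualitative value given observed features."""
    if "feature" in node:                        # leaf: read the evidence directly
        return evidence.get(node["feature"], "unknown")
    child_values = tuple(match(child, evidence) for child in node["children"])
    return node["table"].get(child_values, node.get("default", "unknown"))

# Toy hierarchy deciding whether a component is a plausible fault candidate.
hierarchy = {
    "children": [{"feature": "recently_replaced"}, {"feature": "symptom_nearby"}],
    "table": {("no", "yes"): "likely", ("yes", "yes"): "possible"},
    "default": "unlikely",
}
print(match(hierarchy, {"recently_replaced": "no", "symptom_nearby": "yes"}))  # likely
```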
 
The Knowledge Acquisition Support Environment (KASE) and some case studies on expert system development are described. DiPROS is a diagnostic expert system building tool, which has a knowledge base editor and an inference engine specialized for diagnostic tasks. DiKAST is an interactive knowledge-acquisition (KA) tool, which guides repairs on defects in the knowledge base. In order to build this KA facility, many kinds of KA interview strategies were formalized. We applied KASE tools in building a defect diagnosis system for a color picture tube manufacturing plant. The prototype system was developed in a short time by the domain experts alone, who had no previous experience in computer programming.
 
Designing effective inference strategies has been something of a black art. In this paper we articulate a methodology for designing inference strategies based on an inspection of already acquired domain knowledge. We formulate the design principle of knowledge-mediated control, which dictates that every deviation from a strategy of blind search should be mediated by explicit knowledge structures. We apply this principle to develop several inference strategies for troubleshooting electronic instruments. Finally, we identify a model of the inference process which serves as a basis for a tool to support this methodology.
 
This paper discusses the KADS approach to knowledge engineering. In KADS, the development of a knowledge-based system (KBS) is viewed as a modelling activity. A KBS is not a container filled with knowledge extracted from an expert, but an operational model that exhibits some desired behaviour that can be observed in terms of real-world phenomena. Five basic principles underlying the KADS approach are discussed, namely (i) the introduction of partial models as a means to cope with the complexity of the knowledge engineering process, (ii) the KADS four-layer framework for modelling the required expertise, (iii) the re-usability of generic model components as templates supporting top-down knowledge acquisition, (iv) the process of differentiating simple models into more complex ones and (v) the importance of structure-preserving transformation of models of expertise into design and implementation. The actual activities that a knowledge engineer has to undertake are briefly discussed. We compare the KADS approach to related approaches and discuss experiences and future developments. The approach is illustrated throughout the paper with examples in the domain of troubleshooting audio equipment.
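The four-layer framework mentioned as principle (ii) separates domain, inference, task and strategy knowledge. The fragment below, loosely in the spirit of the paper's audio-equipment example, is an invented illustration of how the layers might be laid out as data; it is not a KADS artefact.

```python
expertise_model = {
    "domain":    {"concepts":  ["amplifier", "fuse", "no-sound"],
                  "relations": [("blown-fuse", "causes", "no-sound")]},
    "inference": {"knowledge_sources": ["abstract", "specify", "select"],
                  "metaclasses":       ["complaint", "hypothesis", "finding"]},
    "task":      {"goal": "diagnose-fault",
                  "structure": ["abstract the complaint", "generate hypotheses",
                                "select tests", "confirm or reject"]},
    "strategy":  {"rules": ["if every hypothesis is rejected, replan the task"]},
}
```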
 
Building knowledge-based problem solvers requires an intellectually challenging modeling stage whose dominance over other activities is now widely recognized. In spite of this, current languages and environments leave the modeling activity on the shoulders of the human, concentrating on the routine programming aspect. Next generation languages and tools will have to explicitly support modeling in the first place. This paper presents a proposal for such a next generation knowledge modeling environment and discusses some steps we have made in this direction. Unlike existing programming environments, knowledge modeling environments focus on manipulating explicit, declarative specifications of problem-solving models which must be acquired, organized, modified, explained, validated, simulated and, eventually, translated into performance computer languages. Programming is only one of the activities supported in such an environment. This paper also discusses the knowledge modeling language we have developed as the foundation of the modeling environment. This language extends term classification technology with refinement, constraints, patterns and events, actions and methods, in order to support the description of both domain and control specifications required by problem-solving models. To substantiate the claims about the adequacy of the language, the paper presents two important modeling applications. The first is developing a full KADS language on top of it and the second is modeling a well known generic problem solving method, "propose-and-revise".
 
The main difficulties in knowledge acquisition from domain experts stem from the variety of forms of knowledge, the various representations of knowledge, and the problems in making these explicit and accessible. There is, at present, no systematic overall methodological framework for knowledge acquisition to guide the organization and arrangement of the appropriate application of the many manual and automated techniques and methods used for knowledge acquisition. In considering these problems it is appropriate to draw on studies in cognitive science and associated disciplines to examine the models of the expert and the demands and goals of the task. This paper develops the modeling processes involved from the perspective of the expert trying to communicate his view of a target system and transfer it into computer-implementable form. It identifies the distinct processes of elicitation, analysis and implementation, the knowledge representations of the intermediate knowledge bases which can be used to help the expert review and refine his conceptual model, and the computer knowledge bases which may be unrecognizable by the expert as related to his developing models. Finally, several methods of knowledge acquisition are reviewed in the context of these models.
 
The paper describes a framework, RATIONALE, for building knowledge-based diagnostic systems that explain by reasoning explicitly. Unlike most existing explanation facilities that are grafted onto an independently designed inference engine, RATIONALE behaves as though it has to deliberate over, and explain to itself, each refinement step. By treating explanation as primary, RATIONALE forces the system designer to represent explicitly knowledge that might otherwise be left implicit. This includes knowledge as to why a particular hypothesis is preferred, an exception is ignored, and a global inference strategy is chosen. RATIONALE integrates explanations with reasoning by allowing a causal and/or functional description of the domain to be represented explicitly. Reasoning proceeds by constructing a hypothesis-based classification tree whose root hypothesis contains the most general diagnosis of the system. Guided by a focusing algorithm, the classification tree branches into more specific hypotheses that explain the more detailed symptoms provided by the user. As the system is used, the classification tree also forms the basis for a dynamically generated explanation tree which holds both the successful and failed branches of the reasoning knowledge. RATIONALE is implemented in Quintus Prolog with a hypertext and graphics oriented interface under NeWS. It provides an environment for tying together the processes of knowledge acquisition, system implementation and explanation of system reasoning.
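The refinement process this abstract describes, descending a hypothesis-based classification tree while recording both confirmed and rejected branches for later explanation, can be sketched compactly. The node fields, the subset test used for focusing and the trace format are assumptions, not RATIONALE's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    name: str
    expected: set                  # symptoms this hypothesis would explain
    children: list = field(default_factory=list)

def refine(node, symptoms, trace, depth=0):
    """Descend towards more specific hypotheses; trace every branch visited."""
    confirmed = node.expected <= symptoms
    trace.append((depth, node.name, "confirmed" if confirmed else "rejected"))
    if not confirmed:
        return None
    best = node
    for child in node.children:
        result = refine(child, symptoms, trace, depth + 1)
        if result is not None:
            best = result
    return best                    # most specific confirmed hypothesis

root = Hypothesis("fault", set(), [
    Hypothesis("electrical-fault", {"no-power"},
               [Hypothesis("blown-fuse", {"no-power", "burnt-smell"})]),
    Hypothesis("mechanical-fault", {"grinding-noise"})])

trace = []
diagnosis = refine(root, {"no-power", "burnt-smell"}, trace)
# diagnosis.name == "blown-fuse"; `trace` also records the rejected
# "mechanical-fault" branch, the raw material for an explanation tree.
```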
 
Systems need to be instructable, to provide end-users with the opportunity to automate repetitive tasks. Instructable systems—both real robots and metaphorical agents—must acquire skills and knowledge from examples and other instructions easily given by users in factories, laboratories and offices. The instructability of a system includes its predisposition to human instructions, the classes of task the system can learn and the speed of learning. The human interface must exploit the user's natural instructional abilities and require minimal acquisition of expertise prior to teaching. It may be assumed that typical casual users will not be expert programmers, but will be able to do the tasks they wish to teach and also show them to other humans. In this case, inductive learning techniques are employed to generalize the teacher's examples, in a manner biased by the teacher's other instructions, and thereby form a procedural task description. Additional instructions can drastically reduce the example and computational complexities of learning problems without compromising learnability. Existing systems and research are re-evaluated from an instructable perspective. Three experimental prototypes are described. Two systems instruct robots: one emphasizing examples and the other emphasizing more explicit instructions. The third is an instructable agent built around an office-clerk metaphor. Instructability is seen as a small but significant step toward intelligence. Designers must explicitly consider the instructability of computer systems, so that task automation is brought to all users.
 
We have developed a learner, AUXIL, which can solve auxiliary-line problems in geometry in an intelligent way. First, we show that a basic mechanism for producing auxiliary lines is to associate a certain condition or subgoal in the problem with an appropriate figure-pattern, and that AUXIL can produce a successful auxiliary line by making use of associative strategies, which we call figure-pattern strategies. Second, we propose a new method, frustration-based learning, which can acquire associative strategies through the experience of solving a variety of auxiliary-line problems. AUXIL simulates the following expert behavior. When an expert tries to solve such a problem, he feels frustrated because not enough information is given in the problem space for him to proceed with an inference and to find a correct path from the given conditions to the goal. Here, he concentrates on the conditions or subgoals which have caused the frustration. After he has produced an auxiliary line and made a complete proof-tree, he will have learned several associative strategies. Each frustration-causing condition or subgoal constitutes the if-part of a strategy. He will then recognize several lumps of figure-patterns in the proof-tree, each of which has contributed to resolving a frustration. All pieces of geometrical information in each figure-pattern constitute the then-part of the corresponding strategy. Learning an auxiliary-line problem through frustration-based learning means understanding it as a composition of figure-patterns, each of which has the features represented in the then-part of the corresponding strategy. The frustration-based learning method can thus be regarded as a method for learning the essential figure-patterns which underlie and structure the problem-solving process of elementary geometry.
 
Growing problems of knowledge communication, ranging from high costs of employee training to critical shortages of expertise, have prompted many companies to invest heavily in expert system technology. Unfortunately, problems of knowledge communication also plague corporate expert system projects. Team members can't talk to the expert, to their management, or to each other; and no one, later on, may understand the expert system or what it does. In their struggle to communicate, project participants can benefit from a model that records an evolving understanding of the targeted decision-making knowledge. The CAMEO™ (Computer-Aided Modeling of Expertise in Organizations) tool of knowledge modeling takes an assertion-oriented approach towards group communication and the support of collaborative work. This approach contrasts with the rule-oriented approach of most expert system shells. As statements of decision-making input and outcome, assertions play a key unifying role in a CAMEO knowledge model: (1) New assertions are generated from an underlying data model to build in the terminology of the expert and realities of the corporate data environment; (2) Existing assertions, as elements of the knowledge model, are reused whenever possible to promote an essential coordination of group efforts; (3) Based upon this generation and reuse of assertions, an inference network is automatically built by CAMEO; network structure can be tested for consistency and completeness. A completed CAMEO knowledge model is then a logical design in efforts to build knowledge-based systems, re-design training manuals and improve company operations.
 
Top-cited authors
Thomas Gruber
Joost Breuker
  • University of Amsterdam
Guus Schreiber
  • Vrije Universiteit Amsterdam
Bob J Wielinga
  • University of Amsterdam
Mark Alan Musen
  • Stanford University