Article

Reasoning from First Principles in Electronic Troubleshooting

Author: Randall Davis

Abstract

While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason "from first principles", i.e. from an understanding of the causality of the device being examined. Our work explores the use of such models in troubleshooting digital electronics.

In discussing troubleshooting we show why the traditional approach—test generation—solves a different problem, and we discuss a number of its practical shortcomings. We consider next the style of debugging known as discrepancy detection and demonstrate why it is a fundamental advance over traditional test generation. Further exploration, however, demonstrates that in its standard form discrepancy detection encounters interesting limits in dealing with commonly known classes of faults. We suggest that the problem arises from a number of interesting implicit assumptions typically made when using the technique.

In discussing how to repair the problems uncovered, we argue for the primacy of models of causal interaction, rather than the traditional fault models. We point out the importance of making these models explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems. We report on progress to date in implementing this approach and demonstrate the diagnosis of a bridge fault—a traditionally difficult problem—using our approach.
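The discrepancy-detection style of debugging described in the abstract can be illustrated with a minimal sketch (this is not the paper's implementation; the circuit, component names, and values are invented): simulate each component's expected output from a behavioral model and flag any component whose prediction disagrees with a measurement at its output.

```python
# Hypothetical sketch of discrepancy detection: predict each component's
# output from its behavioral model and compare against measured values.

def diagnose(components, inputs, observed):
    """Return components whose predicted output disagrees with observation."""
    suspects = []
    values = dict(inputs)
    for name, (fn, in_wires, out_wire) in components:
        predicted = fn(*(values[w] for w in in_wires))
        # Downstream components see the measured value where one exists.
        values[out_wire] = observed.get(out_wire, predicted)
        if out_wire in observed and observed[out_wire] != predicted:
            suspects.append(name)
    return suspects

# A two-gate circuit: an AND gate feeding an OR gate.
circuit = [
    ("A1", (lambda a, b: a & b, ["x", "y"], "m")),
    ("O1", (lambda a, b: a | b, ["m", "z"], "out")),
]
inputs = {"x": 1, "y": 1, "z": 0}
print(diagnose(circuit, inputs, {"m": 0, "out": 0}))  # ['A1']
```

Note that this naive sketch also illustrates the limits the abstract mentions: a bridge fault, which adds an interaction path not present in the model, would produce discrepancies at components that are in fact healthy.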


... We have previously proposed the use of a layered set of models as a mechanism for guiding diagnosis [2,3]. Here we describe the implementation of that idea and demonstrate its utility in diagnosing a bridge fault. ...
... In previous papers [2,3] we outlined a progression of ...
... 2. For the rationale behind this ordering see [2]. ...
Article
interest has grown recently in developing expert systems that reason "from first principles", i.e., capable of the kind of problem solving exhibited by an engineer who can diagnose a malfunctioning device by reference to its schematics, even though he may never have seen that device before. In developing such a system for troubleshooting digital electronics, we have argued for the importance of pathways of causal interaction as a key concept. We have also suggested using a layered set of interaction paths as a way of constraining and guiding the diagnostic process. We report here on the implementation and use of these ideas. We show how they make it possible for our system to generate a few sharply constrained hypotheses in diagnosing a bridge fault. Abstracting from this example, we find a number of interesting general principles at work. We suggest that diagnosis can be viewed as the interaction of simulation and inference and we find that the concept of locality proves to be extremely useful in understanding why bridge faults are difficult to diagnose and why multiple representations are useful.
... It then analyses the neural network outputs, and either confirms the diagnosis, or offers an alternative solution. The expert system takes into account symbolic information issues which are poorly handled by neural networks, performing a better fault diagnosis than either of the two acting independently [15,16]. ...
... There have been increasing attempts in the last few years to apply AI to corrosion processes, corrosion monitoring evaluation, and related faults. Interpretation of the measured data is not always easy, but some efforts in this direction are found in the literature [16][17][18][19]. Roberge developed a stochastic process detector to study and evaluate fluctuations in the potential or current of a corroding electrode, known as electrochemical noise (EN), an expression often used to describe these fluctuations, revealing the fractal nature of the signals and shapes [20,21]. ...
Article
Full-text available
Abstract: This work presents the application of artificial intelligence to the assessment and diagnosis of the corrosion condition of electric transmission line tower foundations. A review of the literature and patents concerning corrosion monitoring and the application of artificial intelligence to this problem is presented, along with an example of novel field corrosion monitoring of tower legs based on the close remote procedure. A predictive computational model was developed, based on artificial neural networks, to propose a degree-of-corrosion index for tower leg foundations of electric transmission lines, for the purpose of establishing a decision-making process for predictive maintenance programs. Information was gathered in the field through electrochemical corrosion monitoring of eighty tower legs belonging to twenty transmission line towers under different environmental conditions. A database was built, and with it the neural network was developed, trained, and its performance verified. The configuration used, 6-5-2 (6 input, 5 hidden, 2 output neurons), showed excellent agreement (R² = 0.9998) between experimental and simulated corrosion data. The sensitivity analysis showed that all studied input variables (soil resistivity, free corrosion potential and electrochemical measurements) have an effect on the estimation of the corrosion level, the strongest being the corrosion potential and resistance. With these results, an assessment of the corrosion condition of transmission line tower foundations can be made, and predictive maintenance programs and actions can be proposed and established.
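The 6-5-2 topology mentioned in the abstract can be sketched as a plain feedforward pass. This is an illustrative reconstruction only: the weights, input scaling, activation choices, and training procedure are invented here, since the paper's data and training details are not available.

```python
import numpy as np

# Sketch of a 6-5-2 feedforward network (6 inputs, 5 hidden neurons,
# 2 outputs). All weights are random placeholders, not trained values.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 5))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 2))   # hidden -> output weights
b2 = np.zeros(2)

def predict(x):
    """Forward pass: sigmoid hidden layer, linear output layer (assumed)."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return h @ W2 + b2

# One hypothetical sample: soil resistivity, free corrosion potential, and
# four electrochemical measurements, normalized to [0, 1].
x = np.array([0.3, 0.7, 0.5, 0.2, 0.9, 0.4])
y = predict(x)
print(y.shape)  # (2,)
```

In practice the weights would be fitted to the field database with any standard training procedure before the two outputs could be read as a corrosion index.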
... This system is unlike previous efforts to integrate AI and reengineering because it includes substantial application (procurement reengineering) knowledge that is based on the premise that the basic knowledge about how organizations should perform this process has changed. Further, the system was loosely built on the basis of first principles (Davis 1983), in particular, Hammer's "principles of reengineering." Like many other knowledge-based systems, this system is diagnosis-based (e.g., Buchanan and Shortliffe 1983) in that it searches for "symptoms" of problems and then proposes "cures." ...
... If such sit-uations are found, they form the basis of recommendations for reengineering. 4. Treat geographically dispersed resources as though they were centralized. ...
... Expert systems have traditionally been built using large collections of rules based on empirical associations; interest has grown recently in the use of artificial intelligence techniques that reason from first principles, i.e. from an understanding of the causality of the device being diagnosed. Randall Davis [1] discussed a causal interaction model for fault diagnosis. Expert systems that reason based on an understanding of the structure and function of the unit under test have been explored in a number of domains, including medicine [2][3], computer fault diagnosis [4], automobile engine fault diagnosis [5], and electronics equipment fault diagnosis [6]. ...
Article
Full-text available
The paper presents an object-oriented fault diagnostic expert system framework which analyses observations from the unit under test when a fault occurs and infers the causes of failures. The framework is characterized by two basic features. The first is a fault diagnostic strategy which utilizes fault classification and checks knowledge about the unit under test. The fault classification knowledge reduces the complexity of fault diagnosis by partitioning the fault section. The second is an object-oriented inference mechanism using backward chaining with message passing within objects. The refractoriness and recency properties of the inference mechanism improve efficiency in fault diagnosis. The developed framework demonstrates its effectiveness and superiority compared to earlier approaches.
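The combination the abstract describes, backward chaining with message passing among objects plus a refractoriness property, can be sketched as follows. This is a generic illustration, not the paper's system; the class, rule, and symptom names are invented.

```python
# Sketch: each section of the unit under test is an object that answers
# "check" messages; backward chaining proves a fault hypothesis by proving
# its antecedent symptoms. Refractoriness: a check is never re-fired.

class Section:
    def __init__(self, name, checks):
        self.name = name
        self.checks = checks          # symptom -> bool (check result)
        self.fired = set()            # refractoriness bookkeeping

    def check(self, symptom):
        """Answer a check message; refuse to re-fire an already-fired check."""
        if symptom in self.fired:
            return False
        self.fired.add(symptom)
        return self.checks.get(symptom, False)

def backward_chain(goal, rules, sections):
    """Prove `goal` by proving every antecedent of some rule via messages."""
    for head, body in rules:
        if head == goal and all(
            any(s.check(sym) for s in sections) for sym in body
        ):
            return head
    return None

power = Section("power", {"no_output": True, "fuse_blown": True})
logic = Section("logic", {})
rules = [("power_fault", ["no_output", "fuse_blown"])]
print(backward_chain("power_fault", rules, [power, logic]))  # power_fault
```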
... Expert systems have traditionally been built using large collections of rules based on empirical associations. Interest has grown recently in the use of expert systems that reason from an understanding of the causality of the device being diagnosed [1]. The proposed work explores the use of such models in troubleshooting the unit under test. ...
Article
Full-text available
The paper presents a universal fault diagnostic expert system framework. The framework is characterized by two basic features. The first is a fault diagnostic strategy which utilizes fault classification and checks knowledge about the unit under test. The degree of accuracy to which faults are located is improved by using fault classification knowledge. The second is an object-oriented inference mechanism using message passing. Object orientation in the inference mechanism improves inference efficiency. The developed framework demonstrates its effectiveness and superiority compared to earlier approaches using case studies.
... However, a shortage of skilled SEs can result in delays in system recovery. To address this issue, many studies on the automation of the O&M technologies of ICT systems have been conducted [5]- [9]. Although automation technology can be helpful in making troubleshooting more efficient and hastening recovery from obstacles, it is difficult to completely automate the troubleshooting of ICT systems because troubleshooting is typically an ill-structured and ill-defined problem [10]. ...
Article
Full-text available
As information and communication technology systems become larger and more complex, system troubleshooting becomes more difficult. To date, however, no efficient method for troubleshooting training has been developed, owing to a lack of understanding of how skilled system engineers perform troubleshooting. The goal of this study was to investigate and compare the network troubleshooting characteristics of skilled and unskilled system engineers. We hypothesized that, to troubleshoot a network system efficiently, skilled system engineers divide the overall network into functional and non-functional sub-networks by confirming connections between network devices using a similar method. To observe troubleshooting behavior, we developed a virtual network comprising several servers, routers, and terminals on which a group of six skilled and unskilled system engineers performed normal troubleshooting activities. It was found that the skilled system engineers tended to narrow down the problem space through connection confirmation between network devices, and the agreement in connection confirmations among the skilled system engineers was significantly higher than across the whole group. At the beginning of the troubleshooting assessment, the most skilled participants appropriately hypothesized which device was experiencing trouble, based on information presented in advance of the assessment. In contrast, the unskilled system engineers, and/or those unfamiliar with network troubleshooting, did not narrow the problem space but instead randomly searched for causes in selected network devices. These results suggest that unskilled system engineers should be taught methods for the appropriate and logical reduction of the problem space in network troubleshooting.
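The "narrow the problem space by connection confirmation" strategy attributed to skilled engineers amounts, in its simplest form, to a binary search along a chain of devices. The sketch below is an invented illustration (device names and the fault location are hypothetical, and real networks are rarely simple chains).

```python
# Sketch: confirm connectivity at the midpoint of a device chain and
# recurse into the failing half, halving the problem space each time.

def locate_fault(chain, reachable):
    """Find the first unreachable device via midpoint connection checks."""
    lo, hi = 0, len(chain) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if reachable(chain[mid]):   # e.g. a ping from the operator console
            lo = mid + 1            # fault lies beyond the midpoint
        else:
            hi = mid                # fault is at or before the midpoint
    return chain[lo]

devices = ["terminal", "switch", "router1", "router2", "server"]
broken = {"router2", "server"}      # everything past the fault is unreachable
print(locate_fault(devices, lambda d: d not in broken))  # router2
```

A random search over devices, like the unskilled engineers' behaviour described above, needs on the order of N checks, while this narrowing strategy needs only about log2(N).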
... While Milne uses multiple levels to categorize knowledge representation, we have explicitly used these same levels to describe a diagnostic architecture. Chandrasekaran (1983), Davis (1983), Fink (1985a, b, 1987), and Searl (1987) all address compiled versus deep (structural, behavioral and functional) knowledge-based diagnosis. The research by Fink is particularly relevant to this research in that she describes a system that combines knowledge from more than one level, although the application described in that research is several orders of magnitude simpler than the TOF Scintillation Array. ...
Article
We present a general architecture for the monitoring and diagnosis of large scale sensor-based systems with real time diagnostic constraints. This architecture is multileveled, combining a single monitoring level based on statistical methods with two model based diagnostic levels. At each level, sources of uncertainty are identified, and integrated methodologies for uncertainty management are developed. The general architecture was applied to the monitoring and diagnosis of a specific nuclear physics detector at Lawrence Berkeley National Laboratory that contained approximately 5000 components and produced over 500 channels of output data. The general architecture is scalable, and work is ongoing to apply it to detector systems one and two orders of magnitude more complex.
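A monitoring level based on statistical methods, as in the architecture above, can be sketched very simply: flag any output channel whose latest reading is a statistical outlier relative to its own recent history, and hand the flagged channels to the model-based levels. The thresholds and data here are invented for illustration.

```python
import statistics

# Sketch of a statistical monitoring level: flag channels whose latest
# reading deviates from the channel's recent history by more than a
# z-score threshold (threshold value is an arbitrary choice).

def monitor(history, latest, z_threshold=3.0):
    """Return channel indices whose latest reading is an outlier."""
    flagged = []
    for ch, (readings, value) in enumerate(zip(history, latest)):
        mu = statistics.mean(readings)
        sigma = statistics.pstdev(readings) or 1e-9  # avoid divide-by-zero
        if abs(value - mu) / sigma > z_threshold:
            flagged.append(ch)
    return flagged

history = [[1.0, 1.1, 0.9, 1.0], [5.0, 5.1, 4.9, 5.0]]
latest = [1.05, 9.0]              # channel 1 has drifted badly
print(monitor(history, latest))   # [1]
```

With hundreds of output channels, a cheap screen like this keeps the more expensive model-based diagnostic levels within real-time constraints.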
... System Knowledge includes understanding of the structure and function of the system and the components within the system [37]. In a circuit, this may involve recognizing that a complex circuit is composed of multiple subsystems or identifying a particular resistor as a "feedback resistor." ...
Article
Full-text available
We explore the overlap of two nationally-recognized learning outcomes for physics lab courses, namely, the ability to model experimental systems and the ability to troubleshoot a malfunctioning apparatus. Modeling and troubleshooting are both nonlinear, recursive processes that involve using models to inform revisions to an apparatus. To probe the overlap of modeling and troubleshooting, we collected audiovisual data from think-aloud activities in which eight pairs of students from two institutions attempted to diagnose and repair a malfunctioning electrical circuit. We characterize the cognitive tasks and model-based reasoning that students employed during this activity. In doing so, we demonstrate that troubleshooting engages students in the core scientific practice of modeling.
... For example, text-editing tasks are performed only in some larger context such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). ...
Article
Full-text available
Cognitive engineering is an applied cognitive science that draws on the knowledge and techniques of cognitive psychology and related disciplines to provide the foundation for principle-driven design of person-machine systems. This paper examines the fundamental features that characterize cognitive engineering and reviews some of the major issues faced by this nascent interdisciplinary field.
... The procedure is repeated until the potential faulty area is reduced to a single component. This strategy is efficient when the faulty system is complex and the initial problem space appears to contain several potential faults with no strong indication of where the actual fault lies [7]. 5. Functional/discrepancy detection: Isolate the fault by looking for the mismatches between what is expected in normal system operation and the actual behaviors exhibited [2]. By detecting the mismatches, the troubleshooter can identify the components where the difference is located and, in turn, isolate the actual fault. ...
... In other words, often only shallow knowledge models are used and not deep models that involve the underlying causal relations of the human-related domain that is concerned; for this distinction, see, e.g. (Chandrasekaran and Mittal, 1982;Dhar and Pople, 1987;Davis, 1983). ...
Chapter
This chapter briefly outlines how dynamic computational models, and in particular temporal-causal network models, can contribute to smarter applications. The scientific area that addresses Ambient Intelligence (also called Pervasive Computing) applications is discussed in which both sensor data and knowledge from the human-directed sciences such as health sciences, neurosciences, and psychological and social sciences are incorporated. This knowledge enables the environment to perform more in-depth, human-like analyses of the functioning of observed humans, and to come up with better informed actions. It is discussed which ingredients are important to realise this view, and how frameworks can be developed to combine them to obtain the intended type of systems: coupled reflective human-environment systems. Such systems include computational models by which they are able to model and simulate (parts of) their own behavior. Finally, further perspectives are discussed for Ambient Intelligence applications based on these coupled reflective systems.
... One domain that has received a lot of attention is electronic circuit diagnosis. Both Davis and co-workers [88][89][90][91] and Genesereth [92,93] have developed algorithms for reasoning from circuit structure and component function to diagnose circuit malfunctions. This work has been extended by de Kleer and Williams [94] to handle multiple-fault situations. ...
Chapter
Introduction; Fault Tree Analysis; Alarm Analysis; Decision Tables; Sign-Directed Graphs; Diagnostic Strategies Based on Qualitative Models; Diagnostic Strategies Based on Quantitative Models; Artificial Neural Network Strategies; Knowledge-Based System Strategies; Methodology Choice; Conclusions; References
... Randall Davis discussed a causal interaction model for fault diagnosis. Expert systems that reason based on an understanding of the structure and function of the unit under test have been explored in a number of domains, including medicine, computer fault diagnosis, automobile engine fault diagnosis, electronics equipment fault diagnosis and troubleshooting, and radio transmitters [7]. This research work focuses on fault troubleshooting for radio transmitters. ...
Article
Full-text available
Radio transmitter fault troubleshooting and detection is a complicated process and requires a high level of expertise. Any attempt to develop an expert system dealing with radio transmitter failure detection has to overcome various difficulties. This paper describes and proposes an expert system for radio transmitter fault detection and control. An expert system (ES) is a computer system that emulates the decision-making ability of a human expert in a restricted domain; it is one of the leading artificial intelligence (AI) techniques that have been adopted to handle such tasks. This paper explains the need for an expert system, certain issues in developing knowledge-based systems, the radio transmitter fault-detection process, and the difficulties involved in developing the system. The system structure, its components, and their functions are described as well. The process of diagnosing faulty conditions varies widely across different approaches to systems diagnosis. The application of decision-making knowledge-based methods to fault detection allows an in-depth diagnosis by simulating human reasoning activity. The major aim of this research paper is to provide expert knowledge and guidance on the rectification of several known faults commonly developed by radio transmitters, and then to develop an expert software system using an object-oriented (C#) approach that gives any user the expertise to rectify radio transmitter problems.
... By modeling various intermediate relationships between variables, the system can reason about a broader range of situations than a system that directly associates negotiation strategies with input variables. This approach is consistent with recent trends in artificial intelligence research to develop expert systems using "deep knowledge" about the relationships between variables (Davis 1984;Michie 1982). ...
Article
A large number of marketing decisions are based on “expert” judgments. In the emerging field of expert systems, techniques are being developed for systematically representing and using expert knowledge in computer systems. The computerization of marketing expertise will enhance decision support to marketing managers. The authors evaluate the opportunities and difficulties associated with building marketing expert systems by discussing the development of NEGOTEX (negotiations expert), a system that provides guidelines to individuals or teams preparing for international marketing negotiations. Possible benefits of this methodology in other areas of marketing are identified.
... In [25], the authors revise the state of the art in knowledge-based diagnosis, and also propose a sample set of applications frequently cited in the AI literature. For our particular application, we choose a subset of that sample, that consists of the following applications: MYCIN [13], [4] HT [29], [30], GDE [12], TEKNOLID [3], [9], MOLE [20], ALEXIP [2], [27], [32], AUSTRAL [17], [ ...
Conference Paper
Full-text available
This paper proposes a framework to analyse, represent and compare knowledge-based applications for diagnosis. It defines a three-dimensional space in which an application may be represented by a point, whose coordinates are defined on each of the 3 axes corresponding to its conceptual, functional and technical dimensions. The conceptual dimension focuses on the problem solving method, while the functional dimension relates to the way in which causality is represented in the models. Finally, the technical dimension describes the computation techniques used to implement the problem solving method. Describing the applications according to this framework makes it easy to observe and analyze the similarities and differences among them. As an application, we propose the positioning in this framework of fourteen selected diagnosis applications frequently cited in the AI literature. As a conclusion, we show that the diversity among diagnosis knowledge-based applications lies mainly in the conceptual dimension, because the other two are strongly correlated.
... In [7], the authors revise the state of the art in knowledge-based diagnosis, and also propose a sample set of applications frequently cited in the AI literature. For our particular application, we choose a subset of that sample, that consists of the following applications: MYCIN [8], [9] HT [10], [11], GDE [12], TEKNOLID [13], [14], MOLE [15], ALEXIP [16], [17], [18], AUSTRAL [19], [20] IXTET [21], MIMIC [22], the off-line operation of GASPAR [23], [24], [25], [26] CA-EN [27], [28], [29], DIAPO [30], the Diagnosis Template in CommonKADS [4] and FAULTY II [31]. ...
Conference Paper
Full-text available
This paper proposes a framework to analyse, represent and compare knowledge-based applications for diagnosis. It defines a three-dimensional space in which an application may be represented by a point, whose coordinates are defined on each of the 3 axes corresponding to its conceptual, functional and technical dimensions. This framework is based on concepts from Systems Theory. Describing systems according to this framework makes it easy to observe and analyze the similarities and differences among them. As an application, we propose the positioning in this framework of fourteen selected diagnosis applications frequently cited in the AI literature. A preliminary conclusion from this positioning is that the diversity among diagnosis knowledge-based applications lies mainly in the conceptual dimension, because the other two are strongly correlated. To overcome this correlation between the functional and technical dimensions and to make the framework truly operational, we propose to replace the technical dimension with a phenomenological one, in which we describe the nature of the phenomena to be diagnosed.
... Through such knowledge an AI system was still capable of some reasoning even if the subject was outside the direct scope of the system. A seminal article on this subject was Randall Davis' "Reasoning from first principles in electronic troubleshooting" [149], published in 1983, which tried to encode some kind of understanding of how devices work, and to refer to that knowledge when solving unknown problems. ...
Book
Full-text available
Big Data, and increasingly Artificial Intelligence, are hot topics in business and society, and many people and organizations have an interest in mastering those topics to better understand how to apply them to their business or activities. This book gives many examples of how big data is used in different use cases and applications across a variety of sectors. In this book we let data "speak" to tell stories: stories that matter for business, for people and for society.
Article
In the context of medical expert systems a deep system is often used synonymously with a system that models some kind of causal process or function. We argue that although causality might be necessary for a deep system it is not sufficient on its own. A deep system must manifest the expectations of its user regarding its flexibility as a problem solver and its human-computer interaction (dialogue structure and explanation structure). These manifestations are essential for the acceptability of medical expert systems by their users. We illustrate our argument by evaluating a representative sample of medical expert systems. The systems are evaluated from the perspective of how explicitly they incorporate their particular models of expertise and how understandably they progress towards solutions. The dialogue and explanation structures of these systems are also evaluated. The results of our analysis show that there is no strong correlation between causality and acceptability. On the basis of this we propose that a deep system is one that properly explicates its underlying model of human expertise.
Article
A study is described that examines training of what is termed a 'structural' fault-finding strategy. Such a strategy involves reasoning about the structural features of a problem domain in order to interpret symptoms and narrow down the area in which the fault lies. Training was carried out using verbal descriptions and schematic representations of the flow through five chemical plants. Training for this structural fault-finding strategy involved three components: identification of plant characteristics (e.g. direction of flow, type of logic gate); production of the symptom propagation pattern of a given failure; and identification of all possible failed items given a specified pattern of symptom propagation. The effectiveness of the structural strategy was assessed in terms of the number of failed items identified correctly for novel symptom patterns at three levels of transfer comprising: familiar plants; novel plants with familiar structural features; and truly novel plants where the structural features are extended and combined in novel ways. Transfer of training was positive at all three transfer levels such that errors of both omission and commission were reduced in comparison to a control condition. Further training and assessment was provided concerning fault-finding. While no improvement in fault-finding accuracy was found as a result of training in the structural strategy, data concerning instrument readings requested prior to diagnosis indicated that this training had a beneficial effect. It is suggested that a structural strategy is of benefit during the initial stage of fault-finding but needs to be trained and combined with other types of knowledge that support subsequent stages of the diagnostic process.
Article
This paper presents the processes of knowledge acquisition and ontology development for structuring the knowledge base of an expert system. Ontological engineering is a process that facilitates construction of the knowledge base of an intelligent system. Ontology is the study of the organization and classification of knowledge. Ontological engineering in artificial intelligence has the practical goal of constructing frameworks for knowledge that allow computational systems to tackle knowledge-intensive problems, and it supports knowledge sharing and reuse. To illustrate the process of conceptual modelling using the Inferential Modelling Technique as a basis for ontology construction, the tool and processes are applied to build an expert system in the domain of monitoring of a petroleum-production facility.
Article
This paper presents a method for localizing faulty components of control systems to replaceable parts, such as printed boards and cables, in a large-scale plant like a nuclear power plant. Most of today's control systems form a distributed configuration including many digital controllers interconnected by data communication networks. Usually, to localize faulty components in nuclear plant control systems, suspected faulty components are narrowed down by executing manual tests to examine whether the objects are normal or abnormal, based on design documents and personnel know-how, in addition to the use of self-diagnosis functions built into the control systems. In the present method, procedures for various tests, including this know-how and the checking of self-diagnosis functions, are provided as test knowledge. The test to be executed is determined by considering the failure probabilities of objects and the ease and effectiveness of testing. The suspects are then narrowed down sequentially based on the test results. In checking the feasibility of this diagnosis method on a simulated control system, intentionally introduced faults were satisfactorily localized. This method is confirmed to be practicable for the diagnosis of large-scale digital control systems.
Chapter
Fault diagnosis (FD) of man-made systems lies in the core of modern technology and attracts increasing attention by both theoreticians and practitioners. Actually, FD is one of the major concerns in industrial and other technological systems operation. In recent years a great deal of work has been done in the direction of designing systems (hardware and software) that are able to automatically diagnose the faults and malfunctions of an industrial process on the basis of observed data and symptoms. FD provides the prerequisites for fault tolerance, reliability and safety that are fundamental design features in any complex engineering system. Complex automatic industrial and other systems usually consist of hundreds of interdependent working parts which are individually subject to malfunction or failure. Total failure of these systems can present unacceptable economic loss or hazards to personnel or to the system itself. Hence, most modern systems involve: (i) a plan of maintenance which replaces worn parts before they malfunction or fail and (ii) a monitoring mechanism that detects a fault as it occurs, identifies the malfunction of a faulty component, and compensates for the fault of the component by substituting a configuration of redundant elements so that the system continues to operate satisfactorily. FD is actually this monitoring function and involves four subfunctions, namely detection, prediction, identification, and correction of faults during the on-line operation of the technological system at hand.
Article
Full-text available
Diagnosis was among the first subjects investigated when digital computers became available. It remains an important research area, in which several new developments have taken place in the last decade. One of these is the use of detailed domain models in knowledge-based systems for the purpose of diagnosis, often referred to as model-based diagnosis. Typically, such models embody knowledge of the normal or abnormal structure and behaviour of the modelled objects in a domain. Models of the structure and workings of technical devices, and causal models of disease processes in medicine, are two examples. In this article, the most important notions of diagnosis and their formalisation are reviewed and put in perspective. In addition, attention is focused on a number of general frameworks for diagnosis, which offer sufficient flexibility for expressing several types of diagnosis.
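One widely used formalisation of the model-based idea surveyed above is consistency-based diagnosis: a candidate diagnosis is a set of components whose assumed abnormality makes the model's predictions consistent with the observations. The following is a generic brute-force sketch (the circuit and names are invented, and real systems use conflict sets rather than enumeration).

```python
from itertools import combinations

def predictions(model, inputs, assumed_ok):
    """Propagate values through components assumed to behave normally."""
    values = dict(inputs)
    changed = True
    while changed:
        changed = False
        for name, (fn, ins, out) in model.items():
            if name in assumed_ok and all(i in values for i in ins):
                v = fn(*(values[i] for i in ins))
                if values.get(out) != v:
                    values[out] = v
                    changed = True
    return values

def diagnoses(model, inputs, observed, max_size=1):
    """Return sets of faulty components that explain the observations."""
    names = list(model)
    result = []
    for k in range(max_size + 1):
        for bad in combinations(names, k):
            ok = set(names) - set(bad)
            vals = predictions(model, inputs, ok)
            # Consistent iff no predicted value contradicts an observation.
            if all(vals.get(w, v) == v for w, v in observed.items()):
                result.append(set(bad))
    return result

model = {
    "A1": (lambda a, b: a & b, ["x", "y"], "m"),
    "O1": (lambda a, b: a | b, ["m", "z"], "out"),
}
print(diagnoses(model, {"x": 1, "y": 1, "z": 0}, {"out": 0}))
```

With inputs x=y=1, z=0 and observed out=0, the empty diagnosis is inconsistent (the model predicts out=1), while either gate alone being faulty restores consistency, so both single-fault candidates are returned.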
Conference Paper
Drug development is a complex and extremely risky process. It involves developing knowledge of the relevant biology at a variety of levels of detail in order to piece together a coherent model of the disease process and the potential point(s) of intervention. Ultimately, this model must be complete enough to support the prediction of effects and the explanation of clinical trial results. Unfortunately, such models of a disease process have been primarily developed and maintained in human minds. Consequently, such models are limited by a human's ability to store all of the needed information, structure it properly, and reason about its consequences involving both complex feedback and time dependencies. The following paper describes a knowledge-centered approach to the development of biological models that support the drug development process. This technique focuses on a multi-level, hierarchical approach that models various levels of the related biology using appropriate representation techniques based on both the type and availability of the knowledge.
Article
Full-text available
We describe several strategies used by expert troubleshooters performing a manufacturing screening task, the diagnosis of defects on a computer board. These strategies use "inexact models" of the components and connections on the board. A prototype expert system has been implemented that uses the strategies and models. The strategies and models are robust because they are applicable to a wide range of problems, including problems not previously encountered. The system saves useful data acquired during problem solving to assist in future problems. We also describe how the above strategies and models can be used in a sensor-based system that acquires information about the board through a vision camera and other sensing devices. This will further increase the productivity of human troubleshooters.
Article
Planners and public policy makers in recent years have become increasingly concerned with issues related to nuclear waste transportation. Rational planning and policy for nuclear waste transportation depends upon systematically assimilating into a coherent and meaningful whole, diverse information at several levels of analysis. The authors describe and demonstrate the rudiments of a deep knowledge architecture for evaluating alternative nuclear waste transshipment possibilities.
Article
This is a bibliography of specific expert system applications in the production and operations management (POM) area. Articles on expert system shells, expert system tools, and nonapplication theory are not included in the bibliography. The bibliography contains a total of 356 papers, extracted from (1) previous surveys in books and journals, (2) ABI/INFORM on-line database, (3) manual compilation by the author, and (4) major conference proceedings. The bibliography consists of 246 conference papers and 110 articles published in various English language journals and is organized in 15 sections based on the functional area of production and operations management.
Article
This book provides a comprehensive, up-to-date look at problem solving research and practice over the last fifteen years. The first chapter describes differences in types of problems, individual differences among problem-solvers, as well as the domain and context within which a problem is being solved. Part one describes six kinds of problems and the methods required to solve them. Part two goes beyond traditional discussions of case design and introduces six different purposes or functions of cases, the building blocks of problem-solving learning environments. It also describes methods for constructing cases to support problem solving. Part three introduces a number of cognitive skills required for studying cases and solving problems. Finally, Part four describes several methods for assessing problem solving. Key features include: Teaching Focus - The book is not merely a review of research. It also provides specific research-based advice on how to design problem-solving learning environments. Illustrative Cases - A rich array of cases illustrates how to build problem-solving learning environments. Part two introduces six different functions of cases and also describes the parameters of a case. Chapter Integration - Key theories and concepts are addressed across chapters and links to other chapters are made explicit. The idea is to show how different kinds of problems, cases, skills, and assessments are integrated. Author expertise - A prolific researcher and writer, the author has been researching and publishing books and articles on learning to solve problems for the past fifteen years. This book is appropriate for advanced courses in instructional design and technology, science education, applied cognitive psychology, thinking and reasoning, and educational psychology.
Instructional designers, especially those involved in designing problem-based learning, as well as curriculum designers who seek new ways of structuring curriculum will find it an invaluable reference tool.
Article
Full-text available
The purpose of this study is to develop instructional methods and processes for designing authentic contexts of blended learning in engineering education. Design strategies for the reproduction of professionals' authentic contexts are suggested with guidelines for the actual development of a blended learning course. According to the development research methodology, along with a prototype e-learning course, six design strategies were developed as follows: (1) select and identify authentic tasks that practitioners or experts can solve; (2) analyze the context of solving the authentic task; (3) model experts' cognitive and behavioral processes of solving the authentic task; (4) develop assessment tools for the authentic task; (5) apply instructional strategies to provide authentic contexts by using technologies; and (6) develop instructional resources and environments. Two task analysis methods, activity theory and PARI (Precursor-Action-Results-Interpretation), were employed to identify authentic troubleshooting problems of energy auditing. Constructivist learning models and strategies were implemented; they had been adopted from situated learning, anchored instruction, cognitive apprenticeship, and goal-based scenarios. The research implications and limitations are discussed for generalization in future studies.
Chapter
The capturing of human knowledge in expert systems for solving highly complex tasks has to take into account not only the actual problem solving purpose, but also the process of knowledge elucidation as well as the expert system’s existence in an environment with differently trained interacting users, knowledge base updates and corrections. Thus a knowledge representation appropriate to this variety of purposes must convey a lot of information that seems superfluous from a pure problem solution point of view. Accordingly, we consider the explicit representation of knowledge for structuring the knowledge elucidation process, for high-quality explanation (based on deep domain models), for knowledge base updating (as automatic learning) and correction as essential. Consequently, we review knowledge representations from different perspectives in order to endow a knowledge engineer with orientation about the capabilities of the various representation schemes for building advanced expert systems.
Article
This paper discusses the application of Artificial Intelligence (AI) to the development of cockpit aids for commercial transports. Flight tasks that are suitable candidates for intelligent aiding are identified through pilot interviews. Then, human factors issues relevant to the introduction of such aids are discussed. Finally, AI research needs as they relate to this application are noted.
Article
An introduction to second-generation expert systems is provided starting with a discussion of the limitations of first-generation systems. A few architectures proposed for the second-generation expert systems are reviewed, a tour on the expert systems approach to fault diagnostic/control system design is made, and some issues that must be addressed in future works are highlighted. A sketch of a deep expert system developed by the author’s group is also provided.
Article
System modelling is the first step of diagnosis research. Until now, the automatic diagnosis domain has focused on technical applications, without taking human reasoning capacity into account. The difficulty of obtaining exhaustive expertise on the one hand, and high computation time on the other, are the weak points of automated methods. This argues for a new vision of man-machine diagnosis.
Article
Two studies of experienced operators in a process-control plant aimed to improve diagnosis of unusual multiple faults through training. A process-tracing methodology analyzed operators' concurrent verbalizations and actions during simulated fault scenarios. In Study 1, training increased awareness of multiple faults and provided a heuristic for switching to a representation that included multiple-fault hypotheses. Training had no effect on diagnostic accuracy, although fewer incorrect single-fault hypotheses were regenerated. In Study 2, operators practiced identifying the inconsistencies between a single-fault hypothesis and fault symptoms and modifying this hypothesis into a consistent multiple-fault hypothesis. Training improved diagnostic accuracy because of improved hypothesis modification processes.
Chapter
Full-text available
Expert systems research recently has focused on the importance of theory-based "first principles." The term first principles refers to understanding the structure and function of problem solving. To date, research in tax-based expert systems has focused on developing prototypes of observed empirical relationships or models of the tax law. This paper focuses on applying first principles to expert systems in taxation based on an expert systems paradigm. Those first principles are used to elicit some of the major research issues faced in developing expert systems in taxation. Research in tax-based expert systems is examined to determine the extent to which some of these issues have been previously addressed.
Thesis
Full-text available
The basic idea underlying this report is that the tasks of monitoring, diagnosing and controlling dynamic processes are articulated within a reasoning process that exploits the knowledge contained in this tripartite model. What follows therefore concerns these three tasks, without, however, providing an answer as to how this reasoning should be carried out. On the other hand, the idea defended is that this reasoning is based first of all on an analysis of what is observed in the evolution of the process. Monitoring, diagnosis and control are thus linked by a transversal, permanent task: the observation of perceptible changes in the process over time. These changes are observed through sequences of numerical values produced by sensors. Since the function of a sensor is to evaluate the value of a physical quantity at a given instant, the measurements it provides over time, at regular intervals or not, constitute a sequence of observations of the successive values taken by a variable. Represented graphically, this sequence forms a curve corresponding to a function whose analytical expression is not known a priori. The question is then the following: given a dynamic process considered as a set of variables observed through a set of sensors, can we discover the relations that link these variables to one another? And if so, how can the relations thus discovered be used to monitor, diagnose and control the observed dynamic process? The answer is, of course, not given in this report. Only a few elements are proposed that, at best, allow this question to be approached.
And if any merit were to be granted to this report, I would argue it should be that of posing this question as it is posed by the very content of the databases filled by systems for monitoring, diagnosing or controlling dynamic processes. For what is presented in the following comes from observation of reality as it imposes itself on everyone, whether operators, engineers, experts or researchers, in industry or in laboratories, whenever the problem arises of reaching an objective by steering a dynamic process. From this observation came the following finding: neither the concept of a discrete event nor that of an alarm accounts for the cognitive activity of someone seeking to reach a control objective. The first, because the meaning of assigning an integer value to a variable at a given date (the occurrence of a discrete event) is given only by the state transitions this assignment triggers within an automaton. The second, because the meaning given by the assignment of the predicate associated with an alarm is given by the program that generates the alarm. On one side the emphasis is placed on the process; on the other, on the program. The purpose of this report is therefore to examine the link between this assignment and this attribution, in order to show how abductive reasoning, based on probabilities drawn from occurrence dates, makes it possible to build a model of the most probable relations between the variables of the observed process, that is, a pair (Process, Programs). The notion of observation is proposed to this end.
Chapter
In a typical scenario a human expert writes down a simulation model as a computer program. More demanding environments require that the computer itself will first ‘figure out’, or generate, the appropriate model. When the domain at hand is complicated, such a task may be non-trivial to program using conventional programming languages. Artificial intelligence techniques, and in particular expert systems, are of great assistance for achieving such goals, which usually require deep hands-on experience and expertise in the domain at hand. A methodology for building such applications for the troubleshooting of processes is described through an actual example taken from the machining domain.
Chapter
The experience of building expert systems for a decade has revealed a number of problems and bottlenecks at each stage of the life cycle. Although the evidence is not abundant, because the number of fully operational expert systems is small with respect to the number of systems that have been developed, it appears that ad hoc or post hoc solutions for problems in early stages can create new and even bigger problems at later stages (McDermott, 1983, 1984): Problems propagate. The fact that few systems ever reach operational maturity is probably indicative of the fact that the art of knowledge engineering is not well established yet. The major problems are listed in the reverse order of their life cycle stages, because problems at later stages are more easily identified.
Chapter
EUROHELP is a shell for developing intelligent help systems. It is assumed that by specifying domain specific knowledge about an information processing system (e.g. Unix Mail or Vi) a help system for the particular application can be constructed. This leads to a number of requirements for the structure of the domain representation in EUROHELP. The application model should be generic (covering all kinds of applications) and multifunctional, i.e. it should serve different types of knowledge based modules. The various modules of the EUROHELP shell, like the plan recogniser, diagnoser, coach, etc. are specific interpreters of the representation of the target application (e.g. Unix-Vi). The core of the representation consists of descriptions of system procedures (‘commands’) and structures of objects. The system procedures consist of actions which identify objects (object reference) and/or manipulate objects. This core can be viewed as a conceptual translation (model) of the target application. A more operational view of the functions of the target application is provided by a hierarchical task layer in which the mode concept plays an important role. The task layer bottoms out in system procedures. This task layer supports planning processes in a top-down way. Although a principled approach has been taken to define the functionality of the EUROHELP system, the current design shows many pragmatic short cuts, which may make it also a practically feasible enterprise.
Article
This paper introduces an expert system model which is being developed with the objective of helping marketing managers to analyse the position of their company relative to their competitors, in a particular business or product area, and then suggesting ways in which this position might be improved. The authors also discuss a predevelopment test to assess management benefits, as well as the generation and elicitation of knowledge and expertise provided by managers themselves. In particular, this research considers the potential of an expert system to aid marketing managers in making competitive analysis evaluations.
Chapter
Considerable research is being conducted in the area of expert systems for diagnosis. Early work was concentrated in medical diagnostic systems (Clancey and Shortliffe, {21}). MYCIN (Buchanan and Shortliffe, {15}) appears to represent the first medical diagnostic expert system. Current efforts are expanding to the area of equipment maintenance and diagnostics, with numerous systems having been built during the past several years. We concentrate our effort in this survey on expert systems for diagnosis and refer the reader to (Hayes-Roth, Waterman, and Lenat, {42} Hayes-Roth, {43}), (Waterman, {101}), and (Charniak and McDermott, {19}) for introductions to expert systems. We remark that it is common for diagnostic systems to integrate concepts from artificial intelligence (expert systems), decision theory, and operations research.
Chapter
We are investigating the role that computer-based models can play in helping students to learn science. In the research reported in this chapter, we conducted experimental trials of a computer environment that provides linked models that represent circuit behavior from different perspectives (such as a microscopic versus a macroscopic perspective) and at differing levels of abstraction. In these trials, we varied the number of linked causal models that were given to different groups of students. Our objective was to determine whether working with reductionistic models (a) reduces students’ misconceptions, particularly their adherence to the commonly held “current-as-agent” misconception, and (b) increases the robustness and flexibility of students’ knowledge as they solve circuit problems and explain circuit phenomena. The first model that we developed and utilized, called the “particle model,” illustrates the behavior of mobile, charged particles within a conductive medium and their changes in position over time. The basic interaction among particles within this model is the Coulomb interaction (like charges repel, unlike charges attract). A second model that we developed depicts — at a higher level of abstraction — the properties of a system that incorporates such a mechanism. This model, called the “transport model,” incorporates more abstract representations of voltage and charge flow. The particle model can be used to provide an explanation or “unpacking” for processes that are considered primitives within the transport model. We conducted an experiment that examined students’ performance on a variety of circuit problems before and after they learned either (a) the transport model alone, or (b) the transport model augmented with explanations of its processes in terms of the particle model. 
We then compared performance on problems for which a current-as-agent conception is sufficient with performance on problems that require a full understanding of how voltages are created and distributed within a circuit. The posttest results revealed that both groups achieved a high level of performance on a wide range of problems. However, the subjects who received a particle model explanation for the basic concepts and processes of the transport model achieved a higher level of performance than the other group on problems that require an understanding of voltage and charge distribution. We conjecture that this is due to the particle model explanations providing students with a mechanistic model for voltage and charge distribution that is consistent with the behavior of the transport model and that inhibits the construction and use of the current-as-agent misconception.
Article
Training in troubleshooting in industry tends to come from the school of hard knocks: trial and error, on-the-job training, and the gradual accumulation of experience from solving problems as they occur, with no well-designed program of instruction. This is relatively ineffective, and it does little to develop self-confidence. Troubleshooting is a higher-level cognitive process that ranges from identifying a problem, through determining its symptoms, to taking the action required to fix it. The knowledge and cognitive process skills needed for troubleshooting are becoming increasingly valuable. By developing problem-solving skills, engineers will become more adept at troubleshooting. Researchers have studied how to improve the troubleshooting performance of technicians in strategizing to solve a problem. Unfortunately, many engineers who go into industry have not developed troubleshooting skills. One may theorize that this lack of troubleshooting skill results from a lack of practical experience with, and understanding of, equipment in engineering students' educational preparation. There may also be a lack of confidence among faculty in instructing students through such open-ended experiences. To date, much of the research has not been implemented as part of the curriculum of technical engineering careers. This article reviews and synthesizes more than 30 studies from 20 years (1987-2007) of research on troubleshooting problem-solving. The goals of the article are fourfold: first, to introduce the concept of troubleshooting problem-solving; second, to present a description of the problem-solving skills needed to succeed in troubleshooting; third, to describe strategies for instructing engineers and technicians. We conclude that troubleshooting problem-solving should be implemented as part of engineering curricula, building on students' strengths to enhance the skills they need to succeed at troubleshooting in industry.
Article
This article sets out how to develop diagnostics from measurements and model knowledge. We begin by outlining classic diagnostic methods based on learning and pattern recognition (the Bayesian method, the nearest-neighbour method, iterative methods), then go on to examine new methods based on modelling. Two key points emerge: the difference between surface and deep understanding, and the surprising performance of the generation-elimination method.
Chapter
A framework is proposed for describing design as a synthesis process, based on generic components at different levels of abstraction. These levels range from the system level to the physical level and represent the technical vocabularies used to describe the design. The resulting framework is used for structuring the knowledge needed to execute a design. Furthermore, the possibility of using generic components as basic building blocks for constructing higher-level concepts, in particular design prototypes, is discussed. The approach is illustrated with examples from the field of bridge design.
Article
Full-text available
Diagnosis is a critical task in solving many quality problems. Since it is important in other fields of professional practice, diagnosis has been the target of considerable scientific study. This article brings the findings of research on diagnosis to the attention of quality practitioners and researchers. It presents a coherent, comprehensive account of diagnosis that is prescriptively oriented. Philosophical studies of the nature and types of causality are reviewed, as is research on the diagnostic process. Diagnostic tasks, strategies, and errors are discussed. Popular diagnostic techniques are assessed, informal heuristics that are central to diagnostician expertise are identified, and the use of taxonomies of causes is considered. The author lays the groundwork for practical and theoretical advances in the diagnosis of quality problems.
Article
This article differentiates between the four main knowledge levels of a human expert: namely, structural, conceptual, heuristic, and epistemological knowledge. The conceptual relationships between these various types of knowledge are constituent elements of the deep knowledge encountered in industrial systems. We present advice on how to recognize such deep knowledge in order to produce a better quality knowledge base. Finally, a prototype developed in an industrial area is described, in which the components that affirm the existence of deep knowledge are identified, detailing the concepts mentioned above.
Conference Paper
Full-text available
While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason from representations of structure and function. Our work explores the use of such models in troubleshooting digital electronics. We describe our work to date on (i) a language for describing structure, (ii) a language for describing function, and (iii) a set of principles for troubleshooting that uses the two descriptions to guide its investigation. In discussing troubleshooting we show why the traditional approach, test generation, solves a different problem, and we discuss a number of its practical shortcomings. We consider next the style of debugging known as violated expectations and demonstrate why it is a fundamental advance over traditional test generation. Further exploration of this approach, however, demonstrates that it is incapable of dealing with commonly known classes of faults. We explain the shortcoming as arising from the use of a fault model that is both implicit and inseparable from the basic troubleshooting methodology. We argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.
Conference Paper
Full-text available
First generation AI in Medicine programs have clearly demonstrated the usefulness of AI techniques. However, it has also been recognized that the use of notions such as causal relationships, temporal patterns, and aggregate disease categories in these programs has been too weak. From our study of clinicians' behavior we realized that a diagnostic or therapeutic program must consider a case at various levels of detail to integrate overall understanding with detailed knowledge. To explore these issues, we have undertaken a study of the problem of providing expert consultation for electrolyte and acid-base disturbances. We have partly completed an implementation of ABEL, the diagnostic component of the overall effort. In this paper we concentrate on ABEL's mechanism for describing a patient. Called the patient-specific model, this description includes data about the patient as well as the program's hypothetical interpretations of these data in a multilevel causal network. The lowest level of this description consists of pathophysiological knowledge about the patient, which is successively aggregated into higher level concepts and relations, gradually shifting the content from pathophysiological to syndromic knowledge. The aggregate level of this description summarizes the patient data, providing a global perspective for efficient exploration of the diagnostic possibilities. The pathophysiological level description provides the ability to handle complex clinical situations arising in illnesses with multiple etiologies, to evaluate the physiological validity of diagnostic possibilities being explored, and to organize large amounts of seemingly unrelated facts into coherent causal descriptions.
Conference Paper
Full-text available
The DIPMETER ADVISOR program is an application of AI and Expert System techniques to the problem of inferring subsurface geologic structure. It synthesizes techniques developed in two previous lines of work, rule-based systems and signal understanding programs. This report on the prototype system has four main concerns. First, we describe the task and characterize the various bodies of knowledge required. Second, we describe the design of the system we have built and the level of performance it has currently reached. Third, we use this task as a case study and examine it in the light of other, related efforts, showing how particular characteristics of this problem have dictated a number of design decisions. We consider the character of the interpretation hypotheses generated and the sources of the expertise involved. Finally, we discuss future directions of this early effort. We describe the problem of "shallow knowledge" in expert systems and explain why this task appears to provide an attractive setting for exploring the use of deeper models.
Article
Full-text available
This manual describes the Design Procedure Language (DPL) for LSI design. DPL creates and maintains a representation of a design in a hierarchically organized, object-oriented LISP data-base. Designing in DPL involves writing programs (Design Procedures) which construct and manipulate descriptions of a project. The programs use a call-by-keyword syntax and may be entered interactively or written by other programs. DPL is the layout language for the LISP-based Integrated Circuit design system (LISPIC) being developed at the Artificial Intelligence Laboratory at MIT. The LISPIC design environment will combine a large set of design tools that interact through a common data-base. This manual is for prospective users of the DPL and covers the information necessary to design a project with the language. The philosophy and goals of the LISPIC system as well as some details of the DPL data-base are also discussed.
Article
Work on expert systems has received extensive attention recently, prompting growing interest in a range of environments. Much has been made of the basic concept and of the rule-based system approach typically used to construct the programs. Perhaps this is a good time then to review what we know, assess the current prospects, and suggest directions appropriate for the next steps of basic research. I'd like to do that today, and propose to do it by taking you on a journey of sorts, a metaphorical trip through the State of the Art of Expert Systems. We'll wander about the landscape, ranging from the familiar territory of the Land of Accepted Wisdom, to the vast unknowns at the Frontiers of Knowledge. I guarantee we'll all return safely, so come along....
Conference Paper
To demonstrate that the performance of an expert knowledge-based system is comparable to that of the experts it emulates, it is useful to subject the system to an appropriate objective evaluation. The Prospector consultant system is intended to aid a geologist in evaluating the mineral potential of an exploration site. Here we report the results of a preliminary performance analysis of three Prospector ore deposit models. Using data from known deposits as test cases, we compare the system's performance in detail with analogous target values supplied by the model designer based on the same input data. These calibration results measure how well a model embodies the model designer's intentions, and identify particular sections of a model that would benefit from revision. We discuss limitations of the present experiments and future work. Put briefly, we report how work-a-day performance analysis instruments can accelerate the model design and refinement process in expert systems.
Article
The problem considered is the diagnosis of failures of automata, specifically, failures that manifest themselves as logical malfunctions. A review of previous methods and results is first given. A method termed the "calculus of D-cubes" is then introduced, which allows one to describe and compute the behavior of failing acyclic automata, both internally and externally. An algorithm, called the D-algorithm, is then developed which utilizes this calculus to compute tests to detect failures. First a manual method is presented, by means of an example. Thence, the D-algorithm is precisely described by means of a program written in Iverson notation. Finally, it is shown for the acyclic case in which the automaton is constructed from AND'S, NAND'S, OR'S and NOR'S that if a test exists, the D-algorithm will compute such a test.
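The D-algorithm itself is intricate, but the problem it solves can be shown compactly: find an input vector on which a good circuit and a faulty circuit disagree. The sketch below does this by brute-force enumeration over a tiny combinational circuit rather than by D-cube propagation; the circuit, net name, and fault encoding are invented for illustration.

```python
# Hedged sketch of the test-generation problem (not the D-algorithm itself):
# find an input vector distinguishing a good circuit from one with a
# stuck-at fault, by exhaustive search over a tiny combinational circuit.

from itertools import product

def circuit(a, b, c, fault=None):
    """y = (a AND b) OR c, with an optional stuck-at fault on net 'n1'."""
    n1 = a & b
    if fault == ("n1", 0):
        n1 = 0  # stuck-at-0 on the AND output
    elif fault == ("n1", 1):
        n1 = 1  # stuck-at-1
    return n1 | c

def find_test(fault):
    """Return an input vector exposing `fault`, or None if it is untestable."""
    for vec in product((0, 1), repeat=3):
        if circuit(*vec) != circuit(*vec, fault=fault):
            return vec
    return None

print(find_test(("n1", 0)))  # (1, 1, 0): good y=1, faulty y=0
```

The D-algorithm reaches the same kind of answer without enumeration, by propagating a discrepancy symbol ("D") from the fault site to an observable output while justifying the required line values backwards to the inputs.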
Article
A rule-based system is presented for computer-aided circuit analysis. The set of rules, called EL, is written in a rule language called ARS. Rules are implemented by ARS as pattern-directed invocation demons monitoring an associative data base. Deductions are performed in an antecedent manner, giving EL's analysis a catch-as-catch-can flavor suggestive of the behavior of expert circuit analyzers. This style of circuit analysis is called propagation of constraints. The system threads deduced facts with justifications which mention the antecedent facts and the rule used. These justifications may be examined by the user to gain insight into the operation of the set of rules as they apply to a problem. The same justifications are used by the system to determine the currently active data-base context for reasoning in hypothetical situations. They are also used by the system in the analysis of failures to reduce the search space. This leads to effective control of combinatorial search, which is called dependency-directed backtracking.
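The two mechanisms the abstract names, antecedent-style propagation and justifications threading each deduced fact to its antecedents, can be sketched in a few lines. This is a minimal illustration in the spirit of EL/ARS, not their actual code; the fact database, rule name, and Ohm's-law rule are assumptions made for the example.

```python
# Minimal sketch of antecedent-style propagation with justifications
# (invented representation, in the spirit of EL/ARS): each deduced fact
# records the rule and antecedent facts that produced it.

facts = {}           # quantity -> value
justifications = {}  # quantity -> (rule name, antecedent quantities)

def assert_fact(q, value, rule="given", antecedents=()):
    """Assert a fact, record its justification, and run the rules forward."""
    facts[q] = value
    justifications[q] = (rule, antecedents)
    propagate()

def propagate():
    """One antecedent rule: V = I * R fires as soon as I and R are known."""
    if "I" in facts and "R" in facts and "V" not in facts:
        assert_fact("V", facts["I"] * facts["R"], "ohms-law", ("I", "R"))

assert_fact("I", 2.0)
assert_fact("R", 5.0)
print(facts["V"], justifications["V"])  # 10.0 ('ohms-law', ('I', 'R'))
```

Dependency-directed backtracking builds on exactly this record: when a contradiction is found, the justification chains identify which underlying assumptions could be responsible, so whole regions of the search space are ruled out at once.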
Conference Paper
CRITTER is a system that reasons about digital hardware designs, using a declarative representation that can describe components and signals at arbitrary levels of abstraction. CRITTER can derive the behaviors of a component's outputs given the behaviors of the inputs. It can derive the specifications a component's inputs must meet in order for some given specifications on the outputs to be met, and it can verify that a given signal behavior satisfies a given specification. By combining these operations, it evaluates both the correctness and the robustness of the overall design.
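Two of the three operations listed above, deriving an output behavior from input behaviors and checking a behavior against a specification, can be illustrated with intervals standing in for signal behaviors. Everything in this sketch (the interval representation, the delay bounds, the function names) is an assumption made for illustration, not CRITTER's actual representation.

```python
# Illustrative sketch of behavior derivation and specification checking
# (invented representation, not CRITTER's): a signal behavior is the time
# window during which the signal is stable, in nanoseconds.

def buffer_output(input_window, delay=(2, 5)):
    """Derive a buffer's output stable-window from its input window,
    using worst-case edges: latest arrival at the start, earliest at the end."""
    start, end = input_window
    return (start + delay[1], end + delay[0])

def satisfies(behavior, spec):
    """A behavior meets a spec if its stable window covers the spec window."""
    return behavior[0] <= spec[0] and behavior[1] >= spec[1]

derived = buffer_output((0, 40))              # input stable during [0, 40] ns
print(derived, satisfies(derived, (10, 35)))  # (5, 42) True
```

The third operation, deriving required input specifications from a desired output specification, is the same interval arithmetic run in reverse, which is why a single declarative component description can support all three.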
Article
We present an interactive system organized around networks of constraints rather than the programs which manipulate them. We describe a language of hierarchical constraint networks. We describe one method of deriving useful consequences of a set of constraints which we call propagation. Dependency analysis is used to spot and track down inconsistent subsets of a constraint set. Propagation of constraints is most flexible and useful when coupled with the ability to perform symbolic manipulations on algebraic expressions. Such manipulations are in turn best expressed as alterations or augmentations of the constraint network. Almost-Hierarchical Constraint Networks can be constructed to represent the multiple viewpoints used by engineers in the synthesis and analysis of electrical networks. These multiple viewpoints are used in terminal equivalence and power arguments to reduce the apparent synergy in a circuit so that it can be attacked algebraically.
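The defining property of a constraint network, that information flows in whatever direction the known values allow, can be shown with a single adder constraint. The cell representation and class name below are invented for illustration; they are not the paper's system.

```python
# Minimal sketch of propagation in a constraint network (invented API):
# an adder constraint a + b = s deduces whichever terminal is unknown,
# so information flows in any direction through the network.

class Adder:
    """Constraint a + b = s over three shared 'cells' (dicts with a value)."""
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s

    def propagate(self):
        a, b, s = self.a["value"], self.b["value"], self.s["value"]
        if a is not None and b is not None and s is None:
            self.s["value"] = a + b
        elif a is not None and s is not None and b is None:
            self.b["value"] = s - a
        elif b is not None and s is not None and a is None:
            self.a["value"] = s - b

a = {"value": None}; b = {"value": 3}; s = {"value": 10}
Adder(a, b, s).propagate()
print(a["value"])  # 7
```

In a full network, cells are shared between constraints, so a deduction made by one constraint wakes its neighbors; dependency records on each deduced value then make it possible to locate the inconsistent subset when two constraints disagree.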
References

BATALI, J. & HARTHEIMER, A. (1980). The design procedure language manual. Massachusetts Institute of Technology, Artificial Intelligence Memo 598.

BREUER, M. & FRIEDMAN, A. (1976). Diagnosis and Reliable Design of Digital Systems. Rockville, Maryland: Computer Science Press.

BROWN, J. S., BURTON, R. & DE KLEER, J.

DE KLEER, J. (1976). Local methods for localizing faults in electronic circuits. Massachusetts Institute of Technology, Artificial Intelligence Memo 394.

DE KLEER, J. & BROWN, J. S. (1982). Assumptions and ambiguities in mechanistic mental models. Xerox PARC Report CIS-9.

GASCHNIG, J. (1979). Preliminary evaluation of the performance of the PROSPECTOR system for mineral exploration. Proceedings of the International Joint Conference on Artificial Intelligence, pp. 308-310.

GENESERETH, M. (1981). The use of hierarchical models in the automated diagnosis of computer systems. Stanford HPP Memo 81-20.

KELLEY. The CRITTER system: analyzing digital circuits by propagating behaviors and specifications.

POPLE, H. (1982). Heuristic methods for imposing structure on ill-structured problems. In SZOLOVITS, P., Ed., Artificial Intelligence in Medicine. (AAAS Selected Symposium 51.)

ROTH, J. P. (1966). Diagnosis of automata failures: a calculus and a method. IBM Journal of Research and Development, 278-291.