Archived project

CSE classics (well, old papers)

Goal: ease access to older papers on joint cognitive systems, human-automation, design, and safety that remain relevant to today's debates

Updates: 0 (0 new) · Recommendations: 0 (0 new) · Followers: 26 (0 new) · Reads: 241 (5 new)

Project log

David D Woods added a research item
Work in Cognitive Systems Engineering (CSE) has abstracted basic patterns or regularities that recur in many specific settings but transcend the details of any one setting. These general patterns help focus new studies of Joint Cognitive Systems, help jump-start design, and help avoid repeating past design failures. The laws form five families: Laws of Adaptation; Laws of Models; Laws of Collaboration; Laws of Responsibility; and Norbert's Contrast. Since these laws were developed in the early 2000s, many developments in autonomous systems, AI, and ethical AI violate or misrepresent these laws and, as a result, continue to repeat previous design mistakes. Since these laws were collected, advances in Resilience Engineering have identified the hard constraints and formal principles that lead to these empirical laws.
David D Woods added a research item
This paper provides an in-depth example of a problem-driven approach to providing support systems. The results of the cognitive analysis led to a specification of what assistance would be useful, independent of the technologies required to build systems. As a result, the kinds of support systems that were developed were very different from the solutions envisioned a priori, and the required support could take on a number of specific forms via different tool-building technologies: graphic displays, advisory systems, and exploratory learning environments. Note the theme that cuts across all of the kinds of support systems discussed: they are all oriented to enhance the practitioner's skill and knowledge in solving his or her problem, rather than to substitute machine power for human skill (Woods, 1986; Woods & Roth, in press). Each support system developed exemplifies the concept that support systems should be instruments that extend the practitioner's ability to 'see', understand, and control the target world. Finally, by exploring what is necessary to create effective support systems, this paper shows the role that cognitive engineering can play in utilizing the power afforded by new computational technologies.
David D Woods added a research item
Cognitive demands and activities in dynamic fault management are best understood as a general form of abductive reasoning as disturbances spread in complex systems.
David D Woods added a research item
Still a decent introduction to what was new in 1983-1988, as Cognitive Systems Engineering emerged.
David D Woods added 27 research items
This chapter discusses some of the common pitfalls that arise in building intelligent support systems and describes a pragmatic knowledge acquisition approach for defining and building effective intelligent support systems. The cognitive task analysis provides an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem-solving in the domain. Acquiring and using such a domain semantics is essential (1) to specify what kinds of cognitive support functions are needed, (2) to specify what kinds of computational mechanisms are capable of providing such functions, (3) to clearly delineate machine performance boundaries, and (4) to build less brittle machine problem-solvers, for example, through features that enable the human problem-solver to extend and adapt the capability of the system to handle unanticipated situations. This is in contrast to technology-driven approaches, where knowledge acquisition focuses on describing domain knowledge in terms of the syntax of particular computational mechanisms. In other words, the language of implementation is used as a substitute for a cognitive language of description. The cognitive task analysis approach redefines the knowledge acquisition problem: knowledge acquisition is, first, about deciding what kinds of intelligent systems would make a difference and, second, about what domain-specific knowledge is needed to fuel those systems.
The growth of computational power has fueled attempts to "automate" more of the human role in complex problem-solving domains, especially those where system faults have high consequences and where periods of high workload may saturate the performance capacity of human operators. Examples of these domains include flight decks, space stations, air traffic control, nuclear power operation, ground satellite control rooms, and surgical operating rooms. Automation efforts may have unanticipated effects on human performance, particularly if they increase the workload at peak workload times or change the practitioners' strategies for coping with workload. Smooth and effective changes in automation require a detailed understanding of the cognitive tasks confronting the user, what has been called user-centered automation. We have observed the introduction of a new computerized technology in a group of hospital operating rooms used for heart surgery. The study has revealed how automation, especially clumsy automation, affects practitioner work patterns and suggests that clumsy automation constrains users in specific and significant ways. Users tailor both the new system and their tasks in order to accommodate the needs of process and production. The study of this tailoring may prove a powerful tool for exposing previously hidden patterns of user data processing, integration, and decision making, which may, in turn, be useful in the design of more effective human-machine systems.
This report documents the results of Phase II of a three-phase research program to develop and validate improved methods to model the cognitive behavior of nuclear power plant (NPP) personnel. In Phase II, a dynamic simulation capability for modeling how people form intentions to act in NPP emergency situations was developed based on techniques from artificial intelligence. This modeling tool, the Cognitive Environment Simulation (CES), simulates the cognitive processes that determine situation assessment and intention formation. It can be used to investigate analytically what situations and factors lead to intention failures, what actions follow from intention failures, the ability to recover from errors or additional machine failures, and the effects of changes in the NPP person-machine system. The Cognitive Reliability Assessment Technique (CREATE) was also developed in Phase II to specify how CES can be used to enhance the measurement of the human contribution to risk in probabilistic risk assessment (PRA) studies. The results are reported in three self-contained volumes that describe the research from different perspectives. Volume 1 provides an overview of both CES and CREATE.
David D Woods added a project goal
Ease access to older papers on joint cognitive systems, human-automation, design, and safety that remain relevant to today's debates