Article · PDF available

Endsley, M. R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors, 37(1), 32–64.

Author:
  • Mica R. Endsley (SA Technologies)

Abstract

This paper presents a theoretical model of situation awareness based on its role in dynamic human decision making in a variety of domains. Situation awareness is presented as a predominant concern in system operation, based on a descriptive view of decision making. The relationship between situation awareness and numerous individual and environmental factors is explored. Among these factors, attention and working memory are presented as critical factors limiting operators from acquiring and interpreting information from the environment to form situation awareness, and mental models and goal-directed behavior are hypothesized as important mechanisms for overcoming these limits. The impact of design features, workload, stress, system complexity, and automation on operator situation awareness is addressed, and a taxonomy of errors in situation awareness is introduced, based on the model presented. The model is used to generate design implications for enhancing operator situation awareness and future directions for situation awareness research.
... The efficacy of enabling humans to control robots remotely hinges on the development of a robust human-robot interface. This interface is essential for providing operators with heightened situational awareness and intuitive control mechanisms that do not impose an additional cognitive load, forming the foundation for intelligent decision-making [19]. In collaborative environments between humans and robots, an intuitive control system translates to more efficient task execution and safer performance [20]. ...
Article
Full-text available
Over the past few years, industry has experienced significant growth, leading to what is now known as Industry 4.0. This advancement has been driven largely by robotic automation. Industries have embraced mobile robots to enhance efficiency in specific manufacturing tasks, aiming for optimal results and reduced human error. Moreover, robots can perform tasks in areas inaccessible to humans, such as hard-to-reach zones or hazardous environments. However, the challenge lies in operators' lack of knowledge about the operation and proper use of the robot. This work presents the development of a teleoperation system using the HTC Vive Pro 2 virtual reality headset, which allows individuals to immerse themselves in a fully virtual environment to become familiar with the operation and control of the KUKA youBot robot. The virtual reality experience is created in Unity, through which robot movements are executed and then relayed to ROS (Robot Operating System). To prevent potential damage to the real robot, a simulation is conducted in Gazebo, facilitating understanding of the robot's operation.
... The SAT model leverages Endsley's (1995) Situation Awareness model to create methods for perception (level I), awareness (level II), and projection (level III) of agent rationale and behavior (Chen et al., 2018). Research has shown that interfaces that invoke level III transparency can improve performance and increase trust compared to lower levels of transparency (Mercado et al., 2016). ...
Article
Full-text available
Although there is a rich history of philosophical definitions of ethics when applied to human behavior, applying the same concepts and principles to AI may be fraught with problems. Anthropomorphizing AI to have characteristics such as “ethics” may promote a dangerous, unrealistic expectation that AI can be trained to have inherent, guaranteed ethical behavior. The authors instead advocate for increased research into the ethical use of AI from initial ideation and design through operational use and sustainment. The authors advocate for five key research areas: (1) education in ethics and core AI concepts for AI developers, leaders, and users, (2) development and use of model cards or datasheets for datasets to provide transparency into the strengths, limits, and potential biases of a trained model, (3) employing human-centered design that seeks to understand human value structures within a task context and enable effective human-machine interaction through intuitive and transparent interfaces, (4) targeted use of run time assurance that monitors and modifies the inputs or outputs of a trained model when necessary to enforce ethical principles such as safety or limiting bias, and (5) developing best practices for the use of a joint human-AI co-creation and training experience to enable a shared mental model and higher performance through potential emergent behavior.
Chapter
The ability to organize is our most valuable social technology and the successful organizational design of an enterprise can increase its efficiency, effectiveness, and ability to adapt. Modern organizations operate in increasingly complex, dynamic, and global environments, which puts a premium on rapid adaptation. Compared to traditional organizations, modern organizations are flatter and more open to their environments. Their processes are more generative and interactive – actors themselves generate and coordinate solutions rather than follow hierarchically devised plans and directives. They also search outside their boundaries for resources wherever they may exist, and co-produce products and services with suppliers, customers, and partners, collaborating – both internally and externally – to learn and become more capable. In this volume, leading voices in the field of organization design demonstrate how a combination of agile processes, artificial intelligence, and digital platforms can power adaptive, sustainable, and healthy organizations.
Chapter
Serious games have long been used in domains like defense, management, finance, and environmental protection to improve plans and procedures. In the aftermath of the COVID-19 pandemic, public health and emergency management organizations are beginning to use such games to enhance their preparedness and readiness activities. In this paper, we present a Knowledge Acquisition Analytical Game (K2AG) focused on understanding and providing training for command, control, coordination, and communication (C3C) functions during an infectious disease outbreak. Unlike traditional game-based exercises, which target strategic, operational, and tactical decision making, K2AG games focus on the cognitive level at which decision making under uncertainty takes place. Specifically, the C3C Game collects data reflecting the cognitive processes by which players gain situational awareness, make decisions, and take actions. The C3C Game was created through a community-centered design process and leverages methods from human factors engineering, including hierarchical task analysis. This paper describes the game, presents results from a pilot exercise conducted with public health and emergency response decision makers from a large US metropolitan area, and discusses the potential for such games to improve pandemic preparedness and resilience.
Article
Full-text available
Situation awareness is knowing what is going on in a situation. Clinicians working in the emergency medical services (EMS) encounter numerous situations under varied conditions, and to provide efficient and safe patient care they need to understand what is going on and the possible projections of the current situation. The design of this study encompassed a goal-directed task analysis in which situation awareness information requirements were mapped to goals related to various aspects of the EMS mission. A group of 30 EMS subject matter experts was recruited and answered a web-based survey in three rounds about what they thought they themselves, or a colleague, might need in order to achieve situation awareness related to the specific goals of various situations. The answers were analysed using content analysis and descriptive statistics, with consensus set at a predetermined level of 75%. Answers that reached consensus were entered into the final goal-directed task analysis protocol. The findings showed that EMS clinicians must rely on their own, or their colleagues', prior experience or knowledge to achieve situation awareness, suggesting that individual expertise plays a crucial role in developing situation awareness. There also seems to be limited support for situation awareness from organizational guidelines. Furthermore, achieving situation awareness involves collaborative efforts from the individuals involved in the situation. These findings could add to the foundation for further investigation in this area, which could contribute to the development of strategies and tools to enhance situation awareness among EMS clinicians, ultimately improving patient care and safety.
Conference Paper
Achieving climate neutrality will require a major transformation of the transportation sector, likely leading to a surge in demand for electric vehicles (EVs). This poses a challenge to grid stability due to supply fluctuations of renewable energy resources. At the same time, EVs offer the potential to improve grid stability through managed charging. The complexity of this charging process can limit user flexibility and require more cognitive effort. Smart charging agents powered by artificial intelligence (AI) can address these challenges by optimizing charging profiles based on grid load predictions, but users must trust such systems to attain collective goals in a collaborative manner. In this study, we focus on traceability as a prerequisite for understanding and predicting system behavior and for trust calibration. Subjective information processing awareness (SIPA) differentiates traceability into transparency, understandability, and predictability. The study investigates the relationship between traceability, trust, and prediction performance in the context of smart charging agents through an online experiment. N = 57 participants repeatedly observed cost calculations made by a schematic algorithm, while the amount of disclosed information that formed the basis of the cost calculations was varied. Results showed that a higher amount of disclosed information was related to higher reported trust. Moreover, traceability was partially higher in the high-information group than in the medium- and low-information groups. Conversely, participants' performance in estimating the booking costs did not vary with the amount of disclosed information. This pattern of results might reflect an explainability pitfall: users of smart charging agents might trust these systems more as traceability increases, regardless of how well they understand the system.
Article
The unexpected spread of the pandemic raised concerns about pilots' skill decay resulting from the roughly 70% drop in flight frequency. This research retrieved 4,761 Flight Data Monitoring (FDM) occurrences from an FDM programme covering 123,140 flights operated by an international airline between June 2019 and May 2021. The FDM severity index was analysed by event category, aircraft type, and flight phase. The results demonstrate an increase in severity scores from the pre-pandemic level to the pandemic onset for events across different flight phases. This trend is not present in the third stage, indicating that pilots and the airline's safety management system demonstrated resilience in coping with the flight disruptions during the pandemic. Through the analysis of event severity, FDM enables safety managers to recommend measures that increase the safety resilience and self-monitoring capabilities of both operators and regulators.
Article
This paper describes an effort to understand the nature of decision tasks in the cockpit, their underlying cognitive requirements, the types of errors associated with each, and how crews can best be trained or aided. A scheme based on cue clarity and response availability was used to identify the cognitive requirements associated with classes of decision situations and to predict types of errors. Data from flight crews in full-mission simulators and from NTSB accident reports were analyzed to validate the analytical scheme.
Article
Human factors practitioners are often concerned with defining and evaluating expertise in complex domains where there may be no agreed-upon expertise levels, no single right answers to problems, and where the observation and measurement of real-world expert performance is difficult. This paper reports the results of an experiment in which expertise was assessed in an extremely complex and demanding domain: military command decision making in tactical warfare. The hypotheses of the experiment were: 1) command decision-making expertise can be recognized in practice by domain experts; 2) differences in the command decision-making expertise of individuals can be identified even under conditions that do not fully replicate the real world; and 3) observers who are not domain experts can recognize the expert behaviors predicted by a mental-model theory about the nature of expertise. In the experiment, the expertise of military officers in developing tactical plans was assessed independently by three “super-expert” judges, and these expertise-level ratings were correlated with independent theory-based measures used by observers who were not domain experts. The results suggest that experts in a domain have a shared underlying concept of expertise in that domain even if they cannot articulate it, that this expertise can be elicited and measured in situations that do not completely mimic the real world, and that expertise measures based on a mental-model theory can be used effectively by observers who are not experts in the domain.
Article
This report presents an information processing framework for predicting the effects of stress manipulations on pilot decision making. The framework predicts that stressors related to anxiety, time pressure, and high-risk situations will restrict the range of cue sampling and reduce the capacity of working memory, but will not affect decisions based upon direct retrieval of knowledge from long-term memory. These predictions were tested on MIDIS, a microcomputer-based pilot decision simulator. Performance on a series of 38 decision problems was compared between ten subjects in a control group and ten subjects who performed under conditions of noise, concurrent task loading, time pressure, and financial risk. The results indicated that the stress manipulation significantly reduced the optimality of decisions and confidence in them. The manipulations had their greatest effect on problems coded high on spatial demand and on problems requiring integration of information from the dynamic instrument panel. The effects of stress were relatively independent of problem demands associated with working memory and with the retrieval of knowledge from long-term memory.
Expert system applications must be carefully selected, designed, and integrated into the cockpit based on a full understanding of the pilot's tasks, requirements, and capabilities. In this paper, expert systems development issues in the following areas are identified and addressed using processes, methodologies, and knowledge from the human factors field: the selection of systems to automate, the elicitation of expert knowledge from pilots, role allocation between the pilot and the system, system design issues, and system evaluation. Pilot workload, situational awareness, performance, and pilot acceptance are considered key to the successful design and implementation of expert systems that will truly enhance the pilot's performance of his tasks.