Chapter

A one-day workshop for teaching cognitive systems engineering skills

Authors:
  • ShadowBox LLC & MacroCognition LLC
  • Applied Decision Science, LLC
  • Independent Researcher

Abstract

The core of the workshop would have to be an example application that called for cognitive systems engineering. We considered several possibilities. One option was to design a kitchen. Another was to redesign a global positioning system device to help rental car customers navigate in unfamiliar cities. We initially selected the kitchen design exercise as one that would allow participants to immerse themselves in the design problem without any need for prior familiarization with the problem domain. We anticipated that workshop participants would be able to link kitchen design issues to the workshop exercises introduced throughout the remainder of the day.


Article
Full-text available
There is a lack of formalism for some key foundational concepts in systems engineering. One of the most recently acknowledged deficits is the inadequacy of systems engineering practices for engineering intelligent systems. In our previous work, we proposed that closed systems precepts could be used to accomplish a required paradigm shift for the systems engineering of intelligent systems. However, to enable such a shift, formal foundations for closed systems precepts that expand the theory of systems engineering are needed. The concept of closure is critical to the formalism underlying closed systems precepts. In this paper, we provide formal, systems- and information-theoretic definitions of closure to identify and distinguish different types of closed systems. We then present a mathematical framework to evaluate the subjective formation of the boundaries and constraints of such systems. Finally, we give a high-level overview of the engineering implications stemming from this formalism, with a specific focus on its application in engineering intelligent systems. Overall, this framework lays the groundwork for understanding and applying closed systems precepts in the engineering of complex systems, especially those characterized by high levels of intelligence.
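As a purely illustrative sketch of what an information-theoretic closure condition can look like (a generic formulation, not necessarily the definitions given in the paper), consider a system with internal state process X_t coupled to an environment process E_t. The system can be called informationally closed when, given its own current state, its next state carries no additional dependence on the environment:

    I(X_{t+1} ; E_t \mid X_t) = 0,

where I(\cdot\,;\cdot \mid \cdot) denotes conditional mutual information. Varying which variables appear in such a condition, and over what time horizon it must hold, yields different candidate notions of closure, which is the kind of distinction a formal treatment aims to make precise.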
Article
Ambiguity is pervasive in the complex sensemaking domains of risk assessment and prediction, but there remains little research on how to design visual analytics tools to accommodate it. We report findings from a qualitative study, based on a conceptual framework of sensemaking processes, investigating how both new visual analytics designs and existing tools, primarily data tables, support the cognitive work demanded in avalanche forecasting. While both systems yielded similar analytic outcomes, we observed differences in ambiguous sensemaking and in the analytic actions each afforded. Our findings challenge conventional visualization design guidance in both perceptual and interaction design, highlighting the need for data interfaces that encourage reflection, provoke alternative interpretations, and support the inherently ambiguous nature of sensemaking in this critical application. We review how different visual and interactive forms support or impede analytic processes and introduce “gisting” as a significant yet unexplored analytic action for visual analytics research. We conclude with design implications for enabling ambiguity in visual analytics tools to scaffold sensemaking in risk assessment.
Thesis
Full-text available
Conference preparation (CP), a critical determinant of interpreting quality, has received increasing scholarly attention in recent years. Although existing research has extensively discussed its theoretical and prescriptive components, descriptive and empirical studies are still emerging to explore the practical aspects of CP. Furthermore, a review of the literature has revealed a need to explore the decisions and reasoning behind the actions taken throughout the CP process. As such, this study attempted to address this gap by employing the Critical Decision Audit (CDA) method to explore major decisions made by expert interpreters during CP from the perspective of Naturalistic Decision Making (NDM), a research paradigm that studies decision making in real-world, dynamic, and uncertain settings. The CDA method is a combination of the Critical Decision Method (CDM) and the Knowledge Audit (KA), both of which are derivatives of Cognitive Task Analysis (CTA), a family of knowledge elicitation methods used to elicit, analyze, and represent the cognitive expertise required to perform a task. A sample of 12 expert Mandarin-English interpreters was recruited to participate in the study. They first documented CP-oriented events during a chosen conference cycle in a diary. Eight main decision points were identified from these diaries, which served as the basis for the subsequent CDA interviews, where the researchers used pre-defined probe questions to delve deeper into the decision points. The interview results were analyzed and displayed in eight matrices to highlight decision-related cues, strategies, and potential expert-novice differences. In the end, the interviews revealed a total of 54 cues and strategies that the experts attended to in CP and 27 expert-novice differences, many of which were not documented in the interpreting literature. The study’s findings not only broadened existing declarative and procedural knowledge of CP but also revealed the subtle and cognitive aspects of decision making expertise in CP, yielding insights that may potentially benefit practitioners, trainers, and trainees. The study, as a preliminary endeavor to bridge NDM and interpreting research through CDA, has contributed to the literature and pedagogy by adding a cognitive perspective to the understanding of CP.
Article
Full-text available
Intelligent systems are increasingly entering the workplace, gradually moving away from technologies supporting work processes to artificially intelligent (AI) agents becoming team members. Therefore, a deep understanding of effective human-AI collaboration within the team context is required. Both the psychology and computer science literatures emphasize the importance of trust when humans interact either with human team members or with AI agents. However, empirical work and theoretical models that combine these research fields and define team trust in human-AI teams are scarce. Furthermore, they often fail to integrate central aspects, such as the multilevel nature of team trust and the role of AI agents as team members. Building on an integration of the current literature on trust in human-AI teaming across different research fields, we propose a multidisciplinary framework of team trust in human-AI teams. The framework highlights the different trust relationships that exist within human-AI teams and acknowledges the multilevel nature of team trust. We discuss the framework’s potential for human-AI teaming research and for the design and implementation of trustworthy AI team members.
Article
Full-text available
Hospitals work to provide quality, safety, and availability to patients with a wide variety of care needs, which makes efficient prioritization and resource utilization essential. Anticipating each patient's trajectory while monitoring available resources across the hospital is a major challenge for patient flow management. This study focuses on how hospital patient flow management is realized in situ, with the help of concepts from cognitive systems engineering. Five semi-structured interviews with high-level managers and shadowing observations of seven full work shifts with management teams were conducted to explore how patient flow is coordinated and communicated across the hospital. The data were analysed using qualitative content analysis. The results describe patient flow management using an adapted Extended Control Model (ECOM) and reveal how authority and information might be better placed closer to clinical work to increase the efficiency of patient flow.
Article
Different terms such as trust, certainty, and uncertainty are of great importance in the real world and play a critical role in artificial intelligence (AI) applications. The implied assumption is that the level of trust in AI can be measured in different ways. One way to achieve this is to distinguish the uncertainties in the predictions of the AI methods used in medical studies. Hence, effective uncertainty quantification (UQ) and measurement methods are needed to build trustworthy AI (TAI) clinical decision support systems (CDSSs). In this study, we present practical guidelines for developing and using UQ methods while applying various AI techniques for medical data analysis.
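To make the role of UQ in a CDSS concrete, the following is a minimal sketch of one common, generic approach: an ensemble of classifiers whose predictive entropy decides which cases are deferred to a clinician. The dataset, model choice, ensemble size, and review threshold are illustrative assumptions, not the specific guidelines presented in the study.

    # Minimal sketch: ensemble-based uncertainty quantification for a clinical
    # classifier. Dataset, model, and threshold are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for real medical data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Train a small ensemble on bootstrap resamples of the training data.
    ensemble = []
    for seed in range(10):
        Xb, yb = resample(X_tr, y_tr, random_state=seed)
        ensemble.append(GradientBoostingClassifier(random_state=seed).fit(Xb, yb))

    # Average the members' predicted probabilities and use predictive entropy
    # as a per-case uncertainty score.
    probs = np.mean([m.predict_proba(X_te) for m in ensemble], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Defer the most uncertain cases to a clinician instead of automating them.
    review = entropy > np.quantile(entropy, 0.9)
    print(f"{review.sum()} of {len(X_te)} cases flagged for manual review")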
Article
Full-text available
Teams of human operators and artificial intelligent agents (AIAs) in multi-agent systems present a unique set of challenges to team coordination. This research employs a machine learning framework to estimate a set of ranks among quality goals, where the quality goals are designed to help communicate important elements of operator intent and thereby aid the development of a Shared Mental Model among members of a multi-agent team. Using a representation referred to as the Operationalized Intent model to capture quality goals relevant to “how” the operator would like to execute the team’s mission, this paper details the development and evaluation of a random forest algorithm to estimate operator priorities. Estimation is structured as a label ranking problem in which quality goals, which constrain “how” work is to be conducted, are ranked according to their priority. Modifying an existing label ranking algorithm, we demonstrate that the Operationalized Intent Estimator-Random Forest (OIE-RF) can estimate quality goal rankings more accurately than a situational baseline derived by observing the variability among operators. OIE-RF demonstrates stability in dynamic testing and the ability to use explicit communication and operator identity to increase accuracy. This exploratory research opens a new avenue for improving coordination and performance of human-agent teams.
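As a rough, simplified stand-in for the underlying idea (not the OIE-RF algorithm itself), label ranking over quality goals can be approximated by regressing a priority score per goal with a random forest and sorting the predicted scores; agreement with the true ordering can then be measured with Kendall's tau. The goal names, features, and synthetic data below are assumptions for illustration only.

    # Simplified label-ranking stand-in: one random forest regresses a priority
    # score for each quality goal; rankings come from sorting predicted scores.
    # Goal names, features, and data are hypothetical.
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    goals = ["speed", "safety", "coverage", "fuel use"]   # hypothetical quality goals

    # Synthetic mission-context features and latent per-goal priority scores.
    X = rng.normal(size=(500, 6))
    scores = X @ rng.normal(size=(6, len(goals)))

    X_tr, X_te, s_tr, s_te = train_test_split(X, scores, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, s_tr)

    def to_rank(s):
        # Convert scores to rank positions per row (0 = highest-priority goal).
        return np.argsort(np.argsort(-s, axis=1), axis=1)

    pred_rank, true_rank = to_rank(model.predict(X_te)), to_rank(s_te)

    # Kendall's tau per test case measures agreement between the two orderings.
    taus = [kendalltau(p, t)[0] for p, t in zip(pred_rank, true_rank)]
    print(f"mean Kendall tau on held-out cases: {np.mean(taus):.2f}")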
Article
Full-text available
Many promising telemedicine innovations fail to be accepted and used over time, and there are longstanding questions about how to best evaluate telemedicine services and other health information technologies. In response to these challenges, there is a growing interest in how to take the sociotechnical complexity of health care into account during design, implementation, and evaluation. This paper discusses the methodological implications of this complexity and how the sociotechnical context holds the key to understanding the effects and outcomes of telemedicine. Examples from a work domain analysis of a surgical setting, where a telemedicine service for remote surgical consultation was to be introduced, are used to show how abstracted functional modeling can provide a structured and rigorous means to analyze and represent the implementation context in complex health care settings.
Article
Background: A telemedicine service enabling remote surgical consultation had shown promising results. When the service was to be scaled up, it was unclear how contextual variations among different clinical sites could affect the clinical outcomes and implementation of the service. It is generally recognized that contextual factors and work system complexities affect the implementation and outcomes of telemedicine. However, it is methodologically challenging to account for context in complex health care settings. We conducted a work domain analysis (WDA), an engineering method for modeling and analyzing complex work environments, to investigate and represent contextual influences when a telemedicine service was to be scaled up to multiple hospitals.

Objective: We wanted to systematically characterize the implementation contexts at the clinics participating in the scale-up process. Conducting a WDA would allow us to identify, in a systematic manner, the functional constraints that shape clinical work at the implementation sites and set the sites apart. The findings could then be valuable for informed implementation and assessment of the telemedicine service.

Methods: We conducted observations and semistructured interviews with a variety of stakeholders. Thematic analysis was guided by concepts derived from the WDA framework. We identified objects, functions, priorities, and values that shape clinical procedures. An iterative “discovery and modeling” approach allowed us to first focus on one clinic and then readjust the scope as our understanding of the work systems deepened.

Results: We characterized three sets of constraints (i.e., facets) in the domain: the treatment facet, the administrative facet (providing resources for procedures), and the development facet (training, quality improvement, and research). The constraints included medical equipment affecting treatment options; administrative processes affecting access to staff and facilities; values and priorities affecting assessments during endoscopic retrograde cholangiopancreatography; and resources for conducting the procedure.

Conclusions: The surgical work system is embedded in multiple sets of constraints that can be modeled as facets of the system. We found variations between the implementation sites that might interact negatively with the telemedicine service. However, there may be enough motivation and resources to overcome these initial disruptions, given that values and priorities are shared across the sites. Contrasting the development facets at different sites highlighted the differences in resources for training and research. In some cases, this could indicate a risk that organizational demands for efficiency and effectiveness might be prioritized over the long-term outcomes provided by the telemedicine service, or a reduced willingness or ability to accept a service that is not yet fully developed or adapted. WDA proved effective in representing and analyzing these complex clinical contexts in the face of technological change. The models serve as examples of how to analyze and represent a complex sociotechnical context during telemedicine design, implementation, and assessment.
Article
This is a response providing some thoughts triggered by the paper “Issues in Human–Automation Interaction Modeling: Presumptive Aspects of Frameworks of Types and Levels of Automation,” by David Kaber. The key theme is that in order to debate the relative merits of different conceptual frameworks to guide human–automation interaction design efforts, we need a richer understanding of the psychology of design. We need to better understand how contributions by the field of cognitive engineering really affect the efforts of system designers.