Recent publications
Trust is imperative for safe and effective human-automation relationships, especially in complex systems. Transparency, the communication of the automation's behavior and intent to the operator, can be used to build trust in human-automation teams. The present study investigated the impact of transparency across the Human-Automation Teaming (HAT) lifecycle (pre-, during, and post-task) on human-automation trust, communication, and reliance. Participants engaged in a counter small unmanned air systems simulation and were randomly assigned to one of six conditions with different configurations of transparency (or lack thereof) across the lifecycle phases. Overall, we found modest effects of automation transparency across the HAT lifecycle on trust, situation awareness, workload, and task performance. While trust levels did not significantly differ across lifecycle configurations, there were notable impacts on situation awareness, task performance, and workload, particularly when transparency was absent during critical phases such as training and after-action review.
Technological advances in virtual environments offer training opportunities that leverage solutions ranging from game-based systems to extended reality devices. However, careful consideration must be applied to advance the state of the possible and leverage innovation in other technical fields to increase the immersion and fidelity of virtual environments. This panel will provide perspectives on the current human factors limitations, challenges, and opportunities within domains seeking to leverage virtual environments for training. First, presenters will address the need for front-end analyses to underpin implementation decisions, as well as engineering a system-of-systems learning environment through a human performance data strategy. Next, discussions will focus on complementary technologies, such as artificial intelligence and machine learning, that offer opportunities to advance the utility, fidelity, and effectiveness of virtual environments for training. Finally, panel members will discuss the need to evaluate the resulting technologies to ensure successful adoption and implementation.
Transparency in automation and AI systems is the operator's ability to know or see the agent's working processes in order to trust them more accurately. The problem is that automation frequently exists to offload an otherwise overly busy human operator, and requiring that operator to process, understand, and evaluate "transparency information" about the automation's processing in the moment of execution is likely to be too big a task. This panel will explore the need for, opportunities in, methods for, and benefits of spreading the exchange of transparency information throughout the lifecycle of human–autonomy interaction. Doing so takes advantage of best practices in training, pre-mission planning, explanation, post-mission debriefing and after-action reviews, and even learning and initial design across organizational stakeholders, reducing the need and resources required to transfer transparency information "in the moment" during frequently high-tempo periods of execution.
Machine learning (ML) is being widely adopted by organizations to assist in selecting personnel, commonly by scoring narrative information or by eliminating the inefficiencies of human scoring. This combined article presents six such efforts from operational selection systems in actual organizations. The findings show that ML can score narrative information collected from candidates either in writing or orally in response to assessment questions (called constructed response) as accurately and reliably as human judges, but much more efficiently, making such responses more feasible to include in personnel selection and often improving validity with little or no adverse impact. Moreover, algorithms can generalize across assessment questions, and algorithms can be created to predict multiple outcomes simultaneously (e.g., productivity and turnover). ML has even been demonstrated to make job analysis more efficient by determining knowledge and skill requirements based on job descriptions. Collectively, the studies in this article illustrate the likely major impact that ML will have on the practice and science of personnel selection from this point forward.
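As an illustration of the kind of pipeline the article describes, here is a minimal sketch of scoring constructed (free-text) responses against human ratings. It assumes scikit-learn and uses a hypothetical handful of responses and made-up ratings; the operational systems' actual models and features are not reproduced here.

```python
# Minimal sketch (not the article's actual method): approximating human
# judges' scores for constructed responses with TF-IDF features and ridge
# regression. Responses and ratings below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

responses = [
    "I resolved the conflict by meeting with both team members separately.",
    "I would escalate the issue to my manager immediately.",
    "I gathered data, proposed three options, and let the team vote.",
]
human_ratings = [4.0, 2.5, 4.5]  # mean rating assigned by human judges

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(responses, human_ratings)

# Machine-generated score for a new, unseen candidate response.
print(model.predict(["I listened to each side and documented a shared plan."]))
```

A real system would train on thousands of rated responses and validate the machine scores against held-out human judgments before deployment.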
Self-regulated learning (SRL) is critical for learning across tasks, domains, and contexts. Despite its importance, research shows that not all learners are equally skilled at accurately and dynamically monitoring and regulating their self-regulatory processes. Therefore, learning technologies, such as intelligent tutoring systems (ITSs), have been designed to measure and foster SRL. This paper presents an overview of over 10 years of research on SRL with MetaTutor, a hypermedia-based ITS designed to scaffold college students' SRL while they learn about the human circulatory system. MetaTutor's architecture and instructional features are designed based on models of SRL, empirical evidence on human and computerized tutoring, principles of multimedia learning, Artificial Intelligence (AI) in educational systems for metacognition and SRL, and research on SRL from our team and that of other researchers. We present MetaTutor, followed by a synthesis of key research findings on the effectiveness of various versions of the system (e.g., adaptive scaffolding vs. no scaffolding of self-regulatory behavior) on learning outcomes. First, we focus on findings from self-reports, learning outcomes, and multimodal data (e.g., log files, eye tracking, facial expressions of emotion, screen recordings) and their contributions to our understanding of SRL with an ITS. Second, we elaborate on the role of embedded pedagogical agents (PAs) as external regulators designed to scaffold learners' cognitive and metacognitive SRL strategy use. Third, we highlight and elaborate on the contributions of multimodal data in measuring and understanding the role of cognitive, affective, metacognitive, and motivational (CAMM) processes. Additionally, we unpack some of the challenges these data pose for designing real-time instructional interventions that scaffold SRL. Fourth, we present existing theoretical, methodological, and analytical challenges and briefly discuss lessons learned and open challenges.
The Twiner project provides a practice environment to support a learner in analyzing and interpreting an ongoing situation through a social media data framework. Within this framework exists a host practice environment complemented by a set of modular components for facilitating training, known as the GTRI Learner Assessment Engine, or GLAsE. GLAsE maps evaluated learner activity to individual components of analysis, i.e., the learning objectives. It estimates learner proficiency for individual learning objectives based on observations of the learner's actions while performing the learning tasks. GLAsE represents the state of the world using a Learner Model that provides the current state of the learner's domain competency, and a Curriculum Model that represents the concepts, activities, and skills of an expert analyst in this domain. GLAsE may produce advice and hints for the learner at multiple points in the assessment process, which are then passed to the learner via the practice environment as direct feedback on the learner's performance. Through these capabilities, GLAsE reduces the time domain experts must dedicate to novice training and increases the agility and responsiveness of the training system.
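The abstract does not say how GLAsE computes its per-objective proficiency estimates. Purely as an illustration, the sketch below uses Bayesian Knowledge Tracing, a standard technique for updating a mastery probability from observed learner actions; the parameter values are made up.

```python
# Hypothetical sketch in the style of Bayesian Knowledge Tracing (BKT),
# not GLAsE's actual algorithm. Parameters are illustrative only.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update P(learner knows the objective) after one observed action."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Allow for learning that may occur after each practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior proficiency estimate for one learning objective
for outcome in [True, True, False, True]:  # observed task outcomes
    p = bkt_update(p, outcome)
print(f"estimated proficiency: {p:.2f}")
```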
This paper identifies challenges related to the optimal selection of adaptive instructional system (AIS) learner interventions. There are major challenges associated with accurately assessing learner performance, deciding when and how to intervene with the learner, evaluating the effectiveness of any intervention (e.g., feedback, support, direction, change in content level of difficulty) delivered during adaptive instruction, and improving intervention selection over time. AISs are computer-based systems that accommodate individual differences and tailor instruction to match learner capabilities to acquire knowledge through a guided learning process. While AISs are highly effective learning tools, they are costly, and developing them and creating complex, effective, and efficient adaptive courses requires very high-level skills. In some well-defined domains of instruction (e.g., mathematics), AIS developers have focused on the identification of errors, and the interventions selected are focused on correcting these errors. Common errors are often identified in a misconception library that associates them with the content and practice required to overcome them. In more complex domains, learner data may be sparse, and the simple identification of errors may not be sufficient to support optimal selection of learner interventions. Under these conditions, the challenge is to identify and address poor learner performance by testing plausible root causes and addressing them directly, instead of focusing on errors that may be only symptoms rather than root causes. This paper identifies several challenge areas, discusses potential solutions, and provides recommendations for future research to overcome difficult challenges associated with machine selection of adaptive instructional interventions.
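One candidate direction for the "improving intervention selection over time" challenge, not taken from the paper itself, is to frame the choice as a multi-armed bandit. The sketch below shows an epsilon-greedy version with hypothetical intervention names and a made-up learning-gain signal.

```python
# Illustrative sketch: intervention selection as an epsilon-greedy bandit.
# Intervention names and the reward signal are hypothetical.
import random

interventions = ["hint", "worked_example", "prompt_reflection", "lower_difficulty"]
value = {i: 0.0 for i in interventions}   # running estimate of effectiveness
count = {i: 0 for i in interventions}

def select(epsilon=0.1):
    """Mostly pick the best-known intervention; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(interventions)
    return max(interventions, key=lambda i: value[i])

def record(intervention, learning_gain):
    """Incrementally update the intervention's observed effectiveness."""
    count[intervention] += 1
    value[intervention] += (learning_gain - value[intervention]) / count[intervention]
```

In sparse-data domains like those the paper describes, the reward would need to reflect root-cause resolution rather than surface error counts, which is exactly where the hard design work lies.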
Prescription opioid misuse is an unintended consequence of acute pain management. Opioid-induced euphoria (OIE) with first therapeutic opioid exposure may influence opioid misuse. OIE is not assessed in clinical care, and self-report measures of OIE have not been validated in adolescents. We (1) determined adolescents' ability to understand existing self-reported OIE measures, (2) revised measures for better understanding by this population, and (3) established initial content validity of the revised measures with adolescents. Using runner's euphoria to simulate OIE in Study 1, 29 adolescents' (14 males) understanding of the Drug Effects Questionnaire (DEQ-5), the Addiction Research Center Inventory Morphine Benzedrine Group scale (ARCI-MBG), and the ARCI Lysergic Acid Diethylamide scale (ARCI-LSD) was tested. In Study 2, 29 additional adolescents (9 males) participated in a modified Delphi study with focus groups to revise survey items to improve understanding by peers. In Study 1, runners understood <40% of ARCI-MBG and ARCI-LSD statements. In Study 2, all but 7 survey items were revised. Revised measures of OIE for adolescents may help define at-risk OIE phenotypes and validate risk assessments using survey methodology. Additional studies are needed to validate the revised OIE self-report measures with opioid-naive adolescents receiving opioids to treat acute pain.
This paper reviews horizontal and vertical scaling methodologies for adaptive instructional system (AIS) software architectures. The term AIS refers to any instructional approach that accommodates individual differences to facilitate and optimize the acquisition of knowledge and/or skills. The authors propose a variety of scaling methods to enhance the interaction between AISs and low-adaptive training ecosystems, with the goal of increasing adaptivity and thereby increasing learning and performance. Typically, low-adaptive training systems only accommodate differences in the learner's in-situ performance during training and do not consider the impact of other factors (e.g., emotions, prior knowledge, goal-orientation, or motivation) that influence learning. AIS architectures such as the Generalized Intelligent Framework for Tutoring (GIFT) can accommodate individual differences and interact with low-adaptive training ecosystems to model a common operational picture of the training ecosystem. These capabilities enable AISs to track progress toward learning objectives and to intervene and adapt the training ecosystem to the needs and capabilities of each learner. Finding new methods to interface AISs with a greater number of low-adaptive training ecosystems will result in more efficient and effective instruction.
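As a rough illustration of the interfacing problem, the sketch below shows one way an AIS could present a uniform surface to heterogeneous, low-adaptive training systems via an adapter layer. All class and method names are hypothetical; this is not GIFT's actual API.

```python
# Hypothetical adapter layer between an AIS and low-adaptive trainers.
# Names and methods are illustrative assumptions, not a real interface.
from abc import ABC, abstractmethod

class TrainingSystemAdapter(ABC):
    """Uniform surface an AIS can use to observe and steer any trainer."""

    @abstractmethod
    def get_performance_events(self) -> list[dict]:
        """Return the learner's recent in-situ performance events."""

    @abstractmethod
    def apply_adaptation(self, adaptation: dict) -> None:
        """Push an AIS-selected change (e.g., scenario difficulty)."""

class FlightSimAdapter(TrainingSystemAdapter):
    """Wraps one specific simulator behind the common interface."""

    def __init__(self, sim):
        self.sim = sim

    def get_performance_events(self):
        return self.sim.poll_events()  # assumed simulator API

    def apply_adaptation(self, adaptation):
        self.sim.set_difficulty(adaptation["difficulty"])  # assumed simulator API
```

Each new trainer then requires only a thin adapter rather than a bespoke AIS integration, which is the kind of horizontal scaling the paper is concerned with.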
Team training in online, simulated environments can improve teamwork skills and task performance skills in a team setting. Teamwork assessment often relies on human observers. Instructors, team leaders, or other observers typically assess complex team competencies using checklists of observed behavior markers to infer performance. Automation can reduce training bottlenecks, provide evidence for objective assessment, and increase the impact of team training. A software capability is being developed to automate team assessments in dynamic online simulations. The simulations are dynamic to the extent that team actions and performance can change the progression of simulation events, assessment context, and the expected behavior of individuals contributing to team performance. A goal of automation design is to enhance usability for non-technical personnel to select, configure, reuse, and interpret team assessments in dynamic simulations. As a result of the reusable design, the assessments can generalize across different simulation software, settings, and scenarios. This paper describes work in progress on the research and development of an automated team assessment capability for the US Army’s Generalized Intelligent Framework for Tutoring (GIFT), an open source adaptive instructional architecture.
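As a toy illustration of automating a behavior-marker checklist (not GIFT's actual implementation), the sketch below scores one teamwork marker from a stream of hypothetical simulation events; in the system described, such conditions would be configured by instructors rather than hard-coded.

```python
# Illustrative sketch: evaluating a team behavior marker from simulation
# events. Event kinds and the marker rule are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Event:
    time: float
    actor: str
    kind: str      # e.g., "hazard_detected", "hazard_reported"
    payload: dict

def backup_behavior_observed(events, window=10.0):
    """Marker: every detected hazard is reported to teammates within `window` seconds."""
    detections = [e for e in events if e.kind == "hazard_detected"]
    reports = [e for e in events if e.kind == "hazard_reported"]
    for d in detections:
        if not any(0 <= r.time - d.time <= window for r in reports):
            return False  # at least one hazard went unreported in time
    return True
```

Because the rule operates on an abstract event stream rather than a specific simulator, the same assessment definition can, in principle, be reused across scenarios and simulation software, as the paper's reusable design intends.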
Today, various actors are exploiting and misusing online social media to spread disinformation and to create false narratives. This paper summarizes an education and training approach targeted to help people think more critically about potential disinformation. The approach we outline emphasizes the development and maturation of general critical-thinking skills, in contrast to technical skills (e.g., social network analysis). However, it also offers the opportunity to apply these skills in a scaffolded, adaptive environment that supports the learner in putting concepts into use. The approach draws on the situated-learning paradigm to support skill development and reflects empirically based best practices in pedagogy for critical thinking. This analysis and review provides context to inform the design of a learning environment that enables targeted practice of critical-thinking skills. The paper outlines the high-level design, describes several specific "experiential lessons," and overviews a few technical challenges that remain to be overcome to make the training feasible for wide-scale use.
Patient handoffs are a common yet frequently error-prone occurrence, particularly in complex or challenging battlefield situations. Specific protocols exist to help simplify and reinforce conveying the necessary information during a combat-casualty handoff, and training can reinforce correct behavior and protocol usage while providing relatively safe initial exposure to many of the complexities and variabilities of real handoff situations, before a patient's life is at stake. Here we discuss a variety of mixed reality capabilities and training contexts that can manipulate many of these handoff complexities in a controlled manner. Finally, we discuss some future human-subject user study design considerations, including aspects of handoff training, evaluation or improvement of a specific handoff protocol, and how the same technology could be leveraged for operational use.
This paper describes the characteristics, design, and architecture of a learner model, the GTRI Learner Assessment Engine (GLAsE), that is designed to operate within a practice environment for teaching the analysis of social media feeds. The purpose of GLAsE in the practice environment is to provide a representation of the learner's competency or proficiency for use by the learner and instructor in guiding the learning activities, e.g., the learner's experience in the practice environment. The Learner Model is represented as a curriculum overlay model with additional annotations and background information. The contents of the Learner Model consist of items represented in the Curriculum Model as learning objectives: Concepts, Skills, and Problem-solving approaches. This Learner Model is called an overlay model because it has the same representation as the expert domain knowledge, i.e., the Curriculum Model; it is overlaid on that representation. The Learner Model will provide mastery or proficiency scores for each learning objective, representing the model's best estimate of the state of the learner's proficiency for that objective. The learner assessment process will identify and prioritize any concepts in which the learner is deficient and will, whenever it is appropriate in a learning or practice system, provide information to other system components that could be used to direct the learner to particular activities to help increase proficiency in those concepts. Results of the learner assessment will be used to update the Learner Model.
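A minimal sketch of the overlay idea follows, assuming illustrative field names rather than GLAsE's actual schema: the learner model reuses the curriculum's learning objectives and annotates each with a proficiency estimate.

```python
# Sketch of an overlay learner model. Field names and the deficiency
# threshold are illustrative assumptions, not GLAsE's actual schema.
from dataclasses import dataclass, field

@dataclass
class Objective:
    name: str
    kind: str   # "concept" | "skill" | "problem_solving"

@dataclass
class OverlayLearnerModel:
    curriculum: list[Objective]                         # shared with the Curriculum Model
    proficiency: dict[str, float] = field(default_factory=dict)  # 0.0-1.0 per objective

    def deficiencies(self, threshold=0.6):
        """Objectives whose estimated mastery falls below threshold, worst first."""
        return sorted(
            (o for o in self.curriculum if self.proficiency.get(o.name, 0.0) < threshold),
            key=lambda o: self.proficiency.get(o.name, 0.0),
        )
```

The `deficiencies` list is the kind of prioritized output that, per the paper, other system components could use to direct the learner toward remedial activities.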
Localization refers to the adaptation of a document's content to meet the linguistic, cultural, and other requirements of a specific target market, or locale. Transcreation describes the process of adapting a message from one language to another while maintaining its intent, style, tone, and context. In recent years, pre-trained language models have pushed the limits of natural language understanding and generation and have dominated NLP progress. We foresee that AI-based pre-trained language models (e.g., masked language modeling) and other existing and upcoming language modeling techniques will be integrated as effective tools to support localization/transcreation efforts in the coming years. To support localization/transcreation tasks, we use AI-based Masked Language Modeling (MLM) to provide a powerful human-machine teaming tool that queries language models for the most appropriate words/phrases to reflect the linguistic and cultural characteristics of the target language. For linguistic applications, we list examples involving logical connectives, pronouns and antecedents, and unnecessarily redundant nouns and verbs. For intercultural conceptualization applications, we list examples of cultural event schema, role schema, emotional schema, and propositional schema. There are two possible approaches to determine where to put masks: a human-based approach or an algorithm-based approach. In the algorithm-based approach, constituency parsing can be used to break a text into sub-phrases, or constituents, after which typical linguistic patterns can be detected and masking can be applied to the related text spans.
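The querying pattern is straightforward to demonstrate with an off-the-shelf fill-mask model. The sketch below uses the Hugging Face transformers library; the model choice and example sentence are illustrative, not necessarily those used in this work.

```python
# Sketch of querying a masked language model for candidate words at a
# masked position. Model choice is an illustrative assumption.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# A translator unsure of the most natural connective can mask it and
# inspect the model's ranked suggestions.
for candidate in fill("The plan failed; [MASK], the team did not give up."):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```

In the algorithm-based approach described above, a constituency parser would propose which spans to mask; in the human-based approach, the translator chooses the mask positions directly.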
Today, information search and retrieval tools are ubiquitous and powerful. However, there is comparably little investment in helping users analyze the materials found via search. This paper introduces a software tool designed to help individuals and teams improve their ability to examine and use the results of search. The existing tool focuses on literature review and analysis and emphasizes support for users who are building an initial mental model of a domain that is new to them. The paper describes the rationale and goals for helping users perform literature searches, the current implementation and features, preliminary user feedback, and future work.
This paper describes the design and development of the Topic Map, a visualization and user interaction component of a cloud-based tool, Archimedes. Archimedes is designed to help individuals and teams examine and organize the results of a literature search and use them to understand the space that they are researching. The Topic Map is a document surrogate, designed to help the user visualize the topic space represented by the search corpus. It shows frequently occurring but generally uncommon topics in a user’s workspace corpus. The Topic Mapper component generates the Topic Map automatically by extracting a list of topic phrases from the papers in the workspace, filtering and prioritizing what is displayed to the user based on a set of rules. It then visually distributes them in a two-dimensional space. This paper describes the motivation and design of the topic extraction implementation and its user interaction capabilities within Archimedes.
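The Topic Mapper's actual filtering rules and layout algorithm are not reproduced here, but its two stages can be sketched simply: extract distinctive multi-word phrases from the workspace corpus, then spread them in two dimensions based on their document co-occurrence patterns. A toy corpus stands in for a real workspace below.

```python
# Illustrative two-stage sketch (not Archimedes' implementation):
# phrase extraction via TF-IDF, then a 2D layout via PCA.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

papers = [  # toy stand-in for a workspace of paper abstracts
    "adaptive instruction with intelligent tutoring systems",
    "intelligent tutoring systems for team training simulations",
    "topic modeling for literature review and search analysis",
]

# 1) Frequent-in-corpus but distinctive phrases: TF-IDF over bi/trigrams.
vec = TfidfVectorizer(ngram_range=(2, 3), stop_words="english")
X = vec.fit_transform(papers)                # docs x phrases
phrases = vec.get_feature_names_out()

# 2) 2D layout: project each phrase's document-occurrence vector to 2 dims.
coords = PCA(n_components=2).fit_transform(X.T.toarray())
for phrase, (x, y) in zip(phrases, coords):
    print(f"{phrase}: ({x:.2f}, {y:.2f})")
```

A production layout would also apply the rule-based filtering and prioritization the paper describes before anything is shown to the user.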
This paper presents analyses of data collected from four previous studies to compare the sensitivity of multiple physiological and subjective workload measures in detecting the workload changes induced by three types of common nuclear power plant (NPP) main control room tasks, using three simulators. Analyses of effect sizes were used to quantify the magnitude of response or rating changes in the workload metrics. The results suggest that the majority of the workload measures utilized in the Human Performance Test Facility (HPTF) studies show practically relevant sensitivity to the workload changes induced by the experimental manipulations in the simulated NPP operations.
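The paper's data are not reproduced here, but the standard effect-size metric such comparisons typically rely on is Cohen's d; a minimal computation with made-up workload ratings:

```python
# Standard Cohen's d (standardized mean difference); ratings are invented.
import numpy as np

def cohens_d(a, b):
    """Effect size between two conditions using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                        / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

low_workload = [42, 45, 39, 48, 44]    # hypothetical subjective ratings
high_workload = [61, 58, 66, 63, 59]
print(f"d = {cohens_d(high_workload, low_workload):.2f}")
```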
When creating a new labeled dataset, human analysts or data reductionists must review and annotate large numbers of images. This process is time-consuming and a barrier to the deployment of new computer vision solutions, particularly for rarely occurring objects. To reduce the number of images requiring human attention, we evaluate the utility of images created from 3D models and refined with a generative adversarial network for selecting confidence thresholds that significantly reduce false alarm rates. The resulting approach has been demonstrated to cut the number of images needing review by 50% while preserving a 95% recall rate, with only 6 labeled examples of the target.
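The threshold-selection step can be sketched simply: given detector confidence scores on the (synthetic) positive examples, choose the highest threshold that still retains the target recall. The scores below are invented for illustration.

```python
# Sketch of recall-preserving threshold selection; scores are made up.
import numpy as np

def recall_preserving_threshold(positive_scores, target_recall=0.95):
    """Highest confidence threshold that retains `target_recall` of positives."""
    return np.quantile(positive_scores, 1.0 - target_recall)

scores_on_synthetic_positives = np.array([0.91, 0.84, 0.77, 0.95, 0.88, 0.81])
t = recall_preserving_threshold(scores_on_synthetic_positives)
print(f"threshold = {t:.2f}")  # images scoring below t can skip human review
```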
In response to calls for research to improve human-machine teaming (HMT), we present a “perspective” paper that explores techniques from computer science that can enhance machine agents for human-machine teams. As part of this paper, we (1) summarize the state of the science on critical team competencies identified for effective HMT, (2) discuss technological gaps preventing machines from fully realizing these competencies, and (3) identify ways that emerging artificial intelligence (AI) capabilities may address these gaps and enhance performance in HMT. We extend beyond extant literature by incorporating recent technologies and techniques and describing their potential for contributing to the advancement of HMT.
With data comes uncertainty, a widespread phenomenon in data science and analysis. The amount of information available to us is growing exponentially, owing to never-ending technological advancements, and data visualization is one of the ways to convey that information effectively. Because error is intrinsic to data, users cannot ignore it in visualization: failing to represent uncertainty can lead analysts to flawed decisions and misleading conclusions about data accuracy. In most cases, visualization approaches assume that the information represented is free from error or unreliability; however, this is rarely true. The goal of uncertainty visualization is to minimize errors in judgment and represent the information as accurately as possible. This survey discusses state-of-the-art approaches to uncertainty visualization, along with the concept of uncertainty and its sources. From the uncertainty visualization literature, we identify popular techniques together with their merits and shortcomings. We also briefly discuss several strategies for evaluating uncertainty visualizations. Finally, we present possible future research directions in uncertainty visualization.
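As a concrete instance of one technique class such surveys cover, the sketch below draws an explicit uncertainty band around an estimate instead of omitting the error; the data and error magnitudes are made up.

```python
# Minimal uncertainty visualization: a shaded band around a trend line.
# Data and per-point uncertainties are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 50)
y = np.sin(x)
err = 0.15 + 0.1 * np.abs(np.cos(x))   # hypothetical per-point uncertainty

fig, ax = plt.subplots()
ax.plot(x, y, label="estimate")
ax.fill_between(x, y - err, y + err, alpha=0.3, label="uncertainty")
ax.legend()
plt.show()
```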