Erin Chiou

  • Ph.D.
  • Assistant Professor at Arizona State University

About

  • 54 Publications
  • 23,344 Reads
  • 974 Citations
Introduction
I am an Assistant Professor of Human Systems Engineering and the Director of the Automation Design Advancing People and Technology (ADAPT) Laboratory. In addition to my faculty role, I am an affiliate researcher of the Center for Human, AI, and Robot Teaming (CHART), part of the Global Security Initiative at Arizona State University. For more information, please visit my lab website: https://adapt.engineering.asu.edu
Current institution
Arizona State University
Current position
  • Assistant Professor
Additional affiliations
July 2013 - May 2016
University of Wisconsin–Madison
Position
  • Graduate Research Fellow

Publications

Publications (54)
Article
Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human-robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is...
Article
Full-text available
Teams composed of human and machine members operating in complex task environments must effectively interact in response to information flow while adapting to environmental changes. This study investigates how interpersonal coordination dynamics between team members are associated with team performance and shared situation awareness in a simulated...
Conference Paper
Virtual testbeds are fundamental to the success of research on cognitive work in safety-critical domains. A testbed that can meet researchers' objectives and create a sense of reality for participants positively impacts the research process; such testbeds have the potential to allow researchers to address questions not achievable in physical environments. T...
Article
Full-text available
The National Academies recently issued a consensus study report summarizing the state of the art and research needs relating to Human-AI teaming, especially in the context of dynamic, high-risk environments such as multi-domain military operations. This consensus study was conducted by the National Academies Board on Human-Systems Integration (BOHS...
Article
This panel session is for anyone in human factors and ergonomics (HFE) or related disciplines interested in recruiting, hiring, admitting, retaining, and promoting people within organizations from the perspective of pursuing authentic diversity. Applicants and hiring managers may also gain insight on what makes a meaningful diversity statement, a t...
Preprint
Full-text available
Trust has emerged as a prevalent construct to describe relationships between people and between people and technology in myriad domains. Across disciplines, researchers have relied on many different questionnaires to measure trust. The degree to which these questionnaires differ has not been systematically explored. In this paper, we use a word-emb...
Preprint
Full-text available
Team communication contents provide insights into teammates' coordination processes and perceptions of each other. We investigated how personifying and objectifying contents in human-machine team communication relate to trust, anthropomorphism, and team performance in a simulated remotely piloted aircraft reconnaissance task. A total of 46 particip...
Preprint
This research examines the relationship between anticipatory pushing of information and trust in human-machine teaming in a synthetic task environment for remotely piloted aircraft systems. Two human participants and one artificial intelligence-based agent, emulated by a confederate experimenter, executed a series of missions as a team of three, und...
Article
Full-text available
Autonomous robots have the potential to play a critical role in urban search and rescue (USAR) by allowing human counterparts of a response team to remain in remote, stable locations while the robots execute more dangerous work in the field. However, challenges remain in developing robot capabilities suitable for teaming with humans. Communicating...
Conference Paper
Urban Search and Rescue (USAR) missions often involve a need to complete tasks in hazardous environments. In such situations, human-robot teams (HRT) may be essential tools for future USAR missions. Transparency and explanation are two information exchange processes where transparency is real-time information exchange and explanation is not. For ef...
Conference Paper
Project Overview. Communication is a key ingredient of team cognition (Cooke, Gorman, Myers, & Duran, 2013), and is both a factor and a manifestation of trust in human-autonomy teams (HATs; Hou, Ho, & Dunwoody, 2021). Anthropomorphism, or the attribution of humanlike qualities to inanimate objects, is a distinct factor in human trust in automation...
Conference Paper
Full-text available
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft s...
Article
Full-text available
This study investigates how human performance and trust are affected by the decision deferral rates of an AI-enabled decision support system in a high criticality domain such as security screening, where ethical and legal considerations prevent full automation. In such domains, deferring cases to a human agent becomes an essential process component...
Article
This paper takes a practitioner’s perspective on advancing bi-directional transparency in human-AI-robot teams (HARTs). Bi-directional transparency is important for HARTs because the better that people and artificially intelligent agents can understand one another’s capabilities, limits, inputs, outputs and contexts in a given task environment; the...
Article
Urban Search and Rescue (USAR) missions often involve a need to complete tasks in hazardous environments. In such situations, human-robot teams (HRT) may be essential tools for future USAR missions. Transparency and explanation are two information exchange processes where transparency is real-time information exchange and explanation is not. For ef...
Article
Full-text available
This study focuses on methodological adaptations and considerations for remote research on Human–AI–Robot Teaming (HART) amidst the COVID‐19 pandemic. Themes and effective remote research methods were explored. Central issues in remote research were identified, such as challenges in attending to participants' experiences, coordinating experimenter...
Article
Full-text available
This article is Part 1 of a two-part series reflecting on diversity within the Human Factors and Ergonomics Society (HFES) and how the pursuit of “authentic” diversity is essential to HFES’s overarching goals for inclusion and equity. In Part 1, authentic diversity is discussed – what it means and what it might look like. Through this lens of authe...
Article
Full-text available
This article is Part 2 of a two-part series reflecting on diversity within the Human Factors and Ergonomics Society (HFES). Part 1 discussed what it means to pursue authentic diversity and reported recent demographic characteristics of HFES. Part 2 discusses a brief history of relevant efforts in HFES and recent scholarship that suggests sustained...
Article
Full-text available
Objective This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments. Background Two recent trends—the development of increasingly capable automation and the flattening of organizational hierarchies—suggest a reframing of trust in automation...
Article
This study aims to better understand trust in human-autonomy teams, finding that trust is important to team performance. A wizard of oz methodology was used in an experiment to simulate an autonomous agent as a team member in a remotely piloted aircraft system environment. Specific focuses of the study were team performance and team social behavior...
Article
Researchers have investigated the extent to which features of virtual humans (e.g., voice) influence learning. However, only recently have researchers begun to question how trust influences perceptions of virtual humans and learning with a virtual human. In this study, we use unsupervised machine learning (k-means clustering) to examine the extent...
Article
Full-text available
Global investments in artificial intelligence (AI) and robotics are on the rise, with results that will impact global economies, security, safety, and human well-being. The most heralded advances in this space are more often about the technologies that are capable of disrupting business-as-usual than they are about innovation that advances or supports...
Article
Full-text available
Service robots are becoming increasingly popular in the world where they interact with humans on a semi- or routine basis. It is essential to understand human perceptions of these robots, as they affect use, adoption, and interaction. The primary goal of this brief literature review was to learn about public perceptions of service robots, particula...
Article
Full-text available
Accountability is an ill-defined and underexplored concept in job design, particularly in highly proceduralized environments that must operate under both high throughput and high-security expectations. Using x-ray images from the Airport Scanner game, this paper investigates two mechanisms of accountability: an active condition, and a passive condi...
Chapter
Any functional human-AI-robot team consists of multiple stakeholders, as well as one or more artificial agents (e.g., AI agents and embodied robotic agents). Each stakeholder's trust in the artificial agent matters because it not only impacts their performance on tasks with human teammates and artificial agents but also influences their trust in ot...
Article
Full-text available
Accountability pressures have been found to increase worker engagement and reduce adverse biases in people interacting with automated technology, but it is unclear if these effects can be observed in a more laterally controlled human-AI task. To address this question, 40 participants were asked to coordinate with an AI agent on a resource-managemen...
Article
Full-text available
Reciprocal cooperation is prevalent in human society. Understanding human reciprocal cooperation in human-agent interaction can help design human-agent systems that promote cooperation and joint performance. Studies have found that people reciprocate cooperative behavior when interacting with computer agents in social dilemma games. However, few st...
Article
Full-text available
Research has shown that creating environments in which social cues are present (social agency) benefits learning. One way to create these environments is to incorporate a virtual human as a pedagogical agent in computer-based learning environments. However, essential questions remain about virtual human design, such as what voice should the virtual...
Article
Full-text available
Reading partners’ actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners’ actions and falsely assign blame after a surprise or unexpected event. These biases further influence peopl...
Poster
Full-text available
In future urban search and rescue teams, robots may be expected to conduct cognitive tasks. As the capabilities of robots change, so too will their interdependence with human teammates. Human factors and cognitive engineering are well-positioned to guide the design of autonomy for effective teaming. Previous work in the urban search and rescue synt...
Article
Full-text available
The current study investigates if a virtual human’s voice can impact the user’s trust in interacting with the virtual human in a learning setting. It was hypothesized that trust is a malleable factor impacted by the quality of the virtual human’s voice. A randomized alternative treatments design with a pretest placed participants in either a low-qu...
Article
Full-text available
Project overview. As a team explores interactions, they may find opportunities to expand and refine teamwork over time. This can have consequences for team effectiveness in normal and unexpected situations (Woods, 2018). Understanding the role of exploratory team interactions may be relevant for human-autonomy team (HAT) resilience in the face of sy...
Conference Paper
Full-text available
Measuring trust in technology is a mainstay in Human Factors research. While trust may not perfectly predict reliance on technology or compliance with alarm signals, it is routinely used as a design consideration and assessment goalpost. Several methods of measuring trust have been employed in the past decades, but one self-report measure stands ou...
Article
Full-text available
Smooth and efficient human-machine coordination in joint physical tasks may be realized through greater sensing and prediction of a human partner's intention to apply force to an object. In this paper, we define compliance and reliance in the context of physical human-machine coordination (pHMC) to characterize human responses in a joint object tra...
Preprint
Full-text available
The current study investigates if a virtual human's voice can impact the user's trust in interacting with the virtual human in a learning setting. It was hypothesized that trust is a malleable factor impacted by the quality of the virtual human's voice. A randomized alternative treatments design with a pretest placed participants in either a low-qu...
Preprint
In this work we show, using a large sample size, that there is the potential for biased responses when participants rate systems using the Jian, Bisantz, & Drury (2000) "Trust in Automated Systems" survey. This bias appears to positively skew some of the responses. Future work will expand this assessment.
Article
Full-text available
Engineering grand challenges and big ideas not only demand innovative engineering solutions, but also typically involve and affect human thought, behavior, and quality of life. To solve these types of complex problems, multidisciplinary teams must bring together experts in engineering and psychological science, yet fusing these distinct areas can b...
Conference Paper
Full-text available
This study aims to better understand trust in human-autonomy teams, finding that trust is related to team performance. A wizard of oz methodology was used in an experiment to simulate an autonomous agent as a team member in a remotely piloted aircraft system environment. Specific focuses of the study were team performance and team social behaviors...
Article
Full-text available
Future autonomous vehicle systems will be diverse in design and functionality because they will be produced by different brands. It is possible these brand differences yield different levels of trust in the automation, therefore different expectations for vehicle performance. Perceptions of system safety, trustworthiness, and performance are import...
Article
Full-text available
In human-automation systems, where high situation awareness is associated with better decision-making, understanding accountability may be crucial to preventing automation complacency. In supervisory control automation, there is some evidence that accountability increases human-automation performance; however, with increasingly intelligent automate...
Article
In light of increasing automation capability, Social Exchange Theory may help guide the design of automated digital interlocutors in human-agent teaming to enhance joint performance. The effect of two social exchange structures, negotiated exchange and reciprocal exchange, was assessed using a joint scheduling microworld environment. Negotiated exc...
Article
Full-text available
Detrimental effects of interruptions have been widely reported in the literature, particularly with laboratory-based studies. However, recent field-based studies suggest interruptions can be beneficial, even vital to maintaining or enhancing system performance. The literature seems to be at a critical juncture; how do practitioners reconcile these pe...
Article
The Human Factors and Ergonomics Society Diversity Committee met initially in January 2017, and on a regular basis thereafter, to assess and improve diversity and inclusion in the society, profession, and discipline. Charged by President Bill Marras in 2016, the Committee replaced the Diversity Task Force established in 1994, and formally became a p...
Article
Full-text available
Objective: This study uses a dyadic approach to understand human-agent cooperation and system resilience. Background: Increasingly capable technology fundamentally changes human-machine relationships. Rather than reliance on or compliance with more or less reliable automation, we investigate interaction strategies with more or less cooperative a...
Article
Increasingly autonomous machines may lead to issues in human-automation systems that go beyond the typical concerns of reliance and compliance. This study used an interaction-oriented approach that considers interdependence in coordinating and cooperating on a joint task. A shared-resource microworld environment was developed to assess how changes...
Article
The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users since this kind of user typic...
Chapter
Full-text available
Trust is a multifaceted construct in complex work systems. It affects relationships between individuals, the organizations they work in, and how they use information and communication technologies. Trust mediates the acceptance of information and communication technologies; studying people’s trust and how they trust can thus identify appropriate de...
Article
Full-text available
Many older adults find that they must manage one or more chronic illnesses entailing multiple medication regimens. These regimens can be daunting, with consequences for medication adherence and health outcomes. To promote adherence to medication regimens, we used contextual design to develop paper and digital prototypes of a medication management d...
Article
Full-text available
Computerized decision aids could facilitate shared decision-making at the point of outpatient clinical care. The objective of this study was to investigate whether a computerized shared decision aid would be feasible to implement in an inner-city clinic by evaluating the current practices in shared decision-making, clinicians' use of computers, pat...
Article
Full-text available
The aim of this study was to understand technology and system characteristics that contribute to nurses' ratings of trust in a smart intravenous pump. Nurses' trust in new technologies can influence how technologies are used. Trust in technology is defined as a person's belief that a technology will not fail them. Potential outcomes of trust in tec...
