Article

Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning


Abstract

Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This article introduces a computational model that integrates trust into robot decision making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
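As a concrete illustration of the abstract above, the sketch below shows how a robot could maintain a Bayesian belief over a discrete latent trust level and update it after each task outcome and observed human response. This is only a minimal Python sketch: the trust discretization, transition matrices, and intervention probabilities are invented for illustration and are not the parameters learned in the paper.

# Minimal sketch (not the authors' implementation): a discrete latent-trust
# belief filter in the spirit of the trust-POMDP. All probabilities below
# are illustrative assumptions.
import numpy as np

TRUST_LEVELS = [0, 1, 2]            # low, medium, high (hypothetical discretization)

# P(trust' | trust, task outcome): success tends to raise trust, failure lowers it.
T_SUCCESS = np.array([[0.6, 0.4, 0.0],
                      [0.1, 0.6, 0.3],
                      [0.0, 0.2, 0.8]])
T_FAILURE = np.array([[0.9, 0.1, 0.0],
                      [0.5, 0.4, 0.1],
                      [0.1, 0.5, 0.4]])

# P(human intervenes | trust): interventions are more likely when trust is low.
P_INTERVENE = np.array([0.7, 0.3, 0.05])

def belief_update(belief, outcome_success, human_intervened):
    """One Bayes-filter step over the latent trust level."""
    T = T_SUCCESS if outcome_success else T_FAILURE
    predicted = belief @ T                                   # predict step
    obs_lik = P_INTERVENE if human_intervened else 1.0 - P_INTERVENE
    posterior = predicted * obs_lik                          # correct step
    return posterior / posterior.sum()

b = np.array([1/3, 1/3, 1/3])                                # uniform prior over trust
b = belief_update(b, outcome_success=True, human_intervened=False)
print("belief over (low, med, high) trust:", b.round(3))

In the trust-POMDP itself, a planner would use such a belief inside long-horizon policy optimization rather than the single filtering step shown here.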


... The classical approach to studying the impact of robot reliability, transparency, and human workload on human trust treats trust as a static parameter [14]. However, recent studies have examined the dynamic evolution of human-automation trust, focusing on the effects of factors such as the quality of robot performance, the transparency of the robot's operation, and the human agent's attitude and characteristics (e.g., forgetfulness and self-confidence) on trust dynamics [1,3,8,9,18,20,24,39,42,44]. Hancock et. ...
... Besides stochastic linear models, many stochastic trust models treat trust as a discrete (usually binary) hidden state and estimate its distribution, conditioned on variables such as robot performance and human actions, within a Bayesian framework [21,43]. A subclass of these models is the set of POMDP-based models [2,3,8,38,45]. These models treat human mental states, such as trust and workload, as discrete hidden variables. ...
... The state transition can depend on robot actions, human actions, and contextual information such as environmental state and task outcome. These models have been applied to design task schedules in human-supervised robotic operations, minimizing human interventions [8] and monitoring rates [45]. Additionally, they have been employed to develop optimal recommendations and associated levels of explanation based on human trust and workload [3,38]. ...
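To make the scheduling use case cited above concrete, here is a hedged, myopic sketch of how a belief over discrete hidden trust could drive the choice between attempting a task and requesting help. The success probabilities and costs are made-up assumptions, and the cited works solve a full POMDP rather than this one-step rule.

# Illustrative sketch only: choosing between "attempt task" and "request help"
# under a belief over discrete hidden trust. All numbers are assumptions.
import numpy as np

P_SUCCESS_GIVEN_TRUST = {            # success probability indexed by trust level
    "attempt": np.array([0.5, 0.7, 0.9]),
    "request_help": np.array([0.95, 0.95, 0.95]),   # help makes success likely
}
COST = {"attempt": 0.0, "request_help": 1.0}          # asking for help costs time
FAILURE_COST = 5.0

def expected_cost(belief, action):
    p_success = belief @ P_SUCCESS_GIVEN_TRUST[action]
    return COST[action] + (1.0 - p_success) * FAILURE_COST

def choose_action(belief):
    return min(("attempt", "request_help"), key=lambda a: expected_cost(belief, a))

low_trust_belief = np.array([0.8, 0.15, 0.05])
high_trust_belief = np.array([0.05, 0.15, 0.8])
print(choose_action(low_trust_belief))    # "request_help" under this belief
print(choose_action(high_trust_belief))   # "attempt" under this belief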
Preprint
Our goal is to model and experimentally assess trust evolution to predict future beliefs and behaviors of human-robot teams in dynamic environments. Research suggests that maintaining trust among team members in a human-robot team is vital for successful team performance. Research suggests that trust is a multi-dimensional and latent entity that relates to past experiences and future actions in a complex manner. Employing a human-robot collaborative task, we design an optimal assistance-seeking strategy for the robot using a POMDP framework. In the task, the human supervises an autonomous mobile manipulator collecting objects in an environment. The supervisor's task is to ensure that the robot safely executes its task. The robot can either choose to attempt to collect the object or seek human assistance. The human supervisor actively monitors the robot's activities, offering assistance upon request, and intervening if they perceive the robot may fail. In this setting, human trust is the hidden state, and the primary objective is to optimize team performance. We execute two sets of human-robot interaction experiments. The data from the first experiment are used to estimate POMDP parameters, which are used to compute an optimal assistance-seeking policy evaluated in the second experiment. The estimated POMDP parameters reveal that, for most participants, human intervention is more probable when trust is low, particularly in high-complexity tasks. Our estimates suggest that the robot's action of asking for assistance in high-complexity tasks can positively impact human trust. Our experimental results show that the proposed trust-aware policy is better than an optimal trust-agnostic policy. By comparing model estimates of human trust, obtained using only behavioral data, with the collected self-reported trust values, we show that model estimates are isomorphic to self-reported responses.
... Meanwhile, high-quality recommendations, in turn, can foster growing trust throughout the decision-making process. One concrete example can be found in human-robot interaction (HRI), where autonomous systems such as robots are employed to assist humans in performing tasks (Chen et al., 2020). Although the robot is fully capable of completing tasks, a novice user may not trust the robot and refuse its suggestion, leading to inefficient collaboration. ...
... The formulation considered in the current paper belongs to the proactive approach, which integrates human trust into the design of decision-making algorithms. In addition to MABs, Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) are also widely used in modeling the trust-aware decision-making process (Chen et al., 2018, 2020; Bhat et al., 2022; Akash et al., 2019). This proactive approach allows the learner to adapt its recommendations in response to human trust. ...
... Decision implementation deviation. To characterize deviations in decision implementation, we focus on the disuse trust behavior model, which is widely supported by empirical studies in human-robot interaction scenarios (Chen et al., 2020; Bhat et al., 2024). Specifically, the implementer is assumed to have its own policy π_own, which is unknown to the policymaker. ...
Preprint
Full-text available
Multi-armed bandit (MAB) algorithms have achieved significant success in sequential decision-making applications, under the premise that humans perfectly implement the recommended policy. However, existing methods often overlook the crucial factor of human trust in learning algorithms. When trust is lacking, humans may deviate from the recommended policy, leading to undesired learning performance. Motivated by this gap, we study the trust-aware MAB problem by integrating a dynamic trust model into the standard MAB framework. Specifically, it assumes that the recommended and the actually implemented policies differ depending on human trust, which in turn evolves with the quality of the recommended policy. We establish the minimax regret in the presence of the trust issue and demonstrate the suboptimality of vanilla MAB algorithms such as the upper confidence bound (UCB) algorithm. To overcome this limitation, we introduce a novel two-stage trust-aware procedure that provably attains near-optimal statistical guarantees. A simulation study is conducted to illustrate the benefits of our proposed algorithm when dealing with the trust issue.
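The following toy simulation illustrates the disuse behavior described in this abstract: a standard UCB recommender whose advice is followed only with probability equal to the current trust level, with trust drifting as a function of outcomes. It is not the paper's two-stage procedure; the arm means, trust dynamics, and gains are assumptions chosen for illustration.

# Hedged illustration (not the paper's algorithm): a UCB recommender whose
# recommendations are followed with probability equal to current trust.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])     # hypothetical Bernoulli arms
n_arms, horizon = len(true_means), 2000
counts, sums = np.ones(n_arms), np.zeros(n_arms)   # 1 pseudo-pull avoids divide-by-zero
trust, human_favorite = 0.5, 0             # human defaults to a suboptimal arm

for t in range(1, horizon + 1):
    ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
    recommended = int(np.argmax(ucb))
    # Disuse behavior: with probability (1 - trust) the human ignores the advice.
    pulled = recommended if rng.random() < trust else human_favorite
    reward = float(rng.random() < true_means[pulled])
    counts[pulled] += 1
    sums[pulled] += reward
    # Simple trust dynamics: trust grows when following advice pays off.
    if pulled == recommended:
        trust = float(np.clip(trust + (0.02 if reward else -0.03), 0.0, 1.0))

print(f"final trust: {trust:.2f}, pulls per arm: {counts - 1}")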
... For example, one particular line of inquiry is concentrated on generating explanations for robotic actions [59,58,41,42,21], which often results in increased perception of reliability and, consequently, greater trust in these machines. Other research areas are focused on developing algorithms that can predict trust in real-time [61,24,26,57,64], understanding the dynamics of trust [64,20,63], devising strategies to mend broken trust [31,22,27], and creating planning methods that take trust into consideration [8,17,2,16,67,51]. ...
... Trust-aware decision making has also been made possible by modeling the interaction as a Partially Observable Markov Decision Process (POMDP) with trust as the partially observable state variable [1,2,17,65]. Akash et al. [1] present a trust-workload POMDP model that can be learned through interaction data and can be solved to generate optimal policies for the robot that control the level of transparency of the robot's interface [2]. ...
... Akash et al. [1] present a trust-workload POMDP model that can be learned through interaction data and can be solved to generate optimal policies for the robot that control the level of transparency of the robot's interface [2]. Chen et al. [17] provide a trust-POMDP model that can be solved to obtain optimal policies for a robot. In a collaborative pick-and-place task with a robotic arm, they use their model to show trust-gaining behaviors learnt by the robot when it senses that the human's trust is low. ...
Preprint
Full-text available
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior literature on value alignment makes an inherent assumption that aligning the values of the robot with that of the human benefits the team. This assumption, however, has not been empirically verified. Moreover, prior literature does not account for human's trust in the robot when analyzing human-robot value alignment. Thus, a research gap needs to be bridged by answering two questions: How does alignment of values affect trust? Is it always beneficial to align the robot's values with that of the human? We present a simulation study and a human-subject study to answer these questions. Results from the simulation study show that alignment of values is important for trust when the overall risk level of the task is high. We also present an adaptive strategy for the robot that uses Inverse Reinforcement Learning (IRL) to match the values of the robot with those of the human during interaction. Our simulations suggest that such an adaptive strategy is able to maintain trust across the full spectrum of human values. We also present results from an empirical study that validate these findings from simulation. Results indicate that real-time personalized value alignment is beneficial to trust and perceived performance by the human when the robot does not have a good prior on the human's values.
... Trust calibration is one strategy to mitigate negative events caused by over-reliance and under-utilisation of technology [15]. Trust calibration is timely for human-agent interaction given the adoption of agents within collaborative team environments in industries like education [37,51,67], healthcare [20,24,39], and defense [59,60]. ...
... Trust calibration is timely for human-agent interaction given the adoption of agents within collaborative team environments in industries like education [37,51,67], healthcare [20,24,39], and defense [59,60]. Notably, trust calibration models have been employed in Human-Robot Interaction (HRI) contexts to improve team performance [14,15,59,60], as well as agent-agent [53] and human-agent interaction [1,48]. ...
... The proposed trust calibration methodology utilises the relationship between user perception of a collaborative agent and their trust in it to improve computational calibration models. Given its success in estimating [53] and calibrating [15] trust, a computational partially observable Markov decision process (POMDP) framework is employed to adapt agent features in-task with the goal of optimising collaborative task outcome. The current work extends previously proposed models of trust calibration [15,41,45] by adopting signal detection theory (SDT) modelling to estimate user trust in-task in an unobtrusive way [22,23]. ...
Article
Full-text available
Appropriately calibrated human trust is essential for successful Human-Agent collaboration. Probabilistic frameworks using a partially observable Markov decision process (POMDP) have been previously employed to model the trust dynamics of human behavior, optimising the outcomes of a task completed with a collaborative recommender system. A POMDP model utilising signal detection theory to account for latent user trust is presented, with the model working to calibrate user trust via the implementation of three distinct agent features: disclaimer message, request for additional information, and no additional feature. A simulation experiment is run to investigate the efficacy of the proposed POMDP model compared against a random feature model and a control model. Evidence demonstrates that the proposed POMDP model can appropriately adapt agent features in-task based on human trust belief estimates in order to achieve trust calibration. Specifically, task accuracy is highest with the POMDP model, followed by the control and then the random model. This emphasises the importance of trust calibration, as agents that lack considered design to implement features in an appropriate way can be more detrimental to task outcome compared to an agent with no additional features.
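For readers unfamiliar with the SDT side of such a model, the short sketch below computes the standard sensitivity (d') and criterion (c) statistics from hit and false-alarm rates, which is the kind of quantity an SDT-based trust estimator can build on; the counts are invented for illustration and are not from this study.

# Sketch of standard signal detection theory (SDT) quantities; counts are made up.
from scipy.stats import norm

hits, misses = 40, 10                        # correct recommendations accepted / rejected
false_alarms, correct_rejections = 15, 35    # wrong recommendations accepted / rejected

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)              # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # response bias

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")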
... Human trust plays a crucial role in human-robot interaction (HRI) as it mediates the human's reliance on the robot, thus directly affecting the effectiveness of the human-robot team [1], [2], [3]. As a result, researchers have proposed trust-aware human-robot planning [4], which equips a robot with the ability to estimate and anticipate human trust and enables it to strategically plan its actions to foster better cooperation, improve teamwork, and ultimately enhance the overall performance of the human-robot team. ...
... Trust-aware HRI explicitly considers human trust in the robot's decision-making processes [4], [5], [6]. Chen et al. [4] modeled the sequential HRI process as a partially observable Markov decision process (POMDP) and incorporated human trust into the state transition function, allowing the robot to optimize its objectives in the interaction process. However, the authors discovered a dilemma: maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. ...
Preprint
Full-text available
Trust-aware human-robot interaction (HRI) has received increasing research attention, as trust has been shown to be a crucial factor for effective HRI. Research in trust-aware HRI discovered a dilemma -- maximizing task rewards often leads to decreased human trust, while maximizing human trust would compromise task performance. In this work, we address this dilemma by formulating the HRI process as a two-player Markov game and utilizing the reward-shaping technique to improve human trust while limiting performance loss. Specifically, we show that when the shaping reward is potential-based, the performance loss can be bounded by the potential functions evaluated at the final states of the Markov game. We apply the proposed framework to the experience-based trust model, resulting in a linear program that can be efficiently solved and deployed in real-world applications. We evaluate the proposed framework in a simulation scenario where a human-robot team performs a search-and-rescue mission. The results demonstrate that the proposed framework successfully modifies the robot's optimal policy, enabling it to increase human trust at a minimal task performance cost.
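A minimal sketch of the potential-based shaping idea referenced here: if the potential Phi is taken to be the estimated human trust (an assumption made for illustration, not necessarily the paper's choice), the shaped reward credits trust increases. In single-agent MDPs, potential-based shaping is known to leave optimal policies unchanged, and the cited work bounds the performance loss in its Markov-game setting by the potentials at final states.

# Minimal sketch of potential-based reward shaping:
#   F(s, s') = gamma * Phi(s') - Phi(s),  r_shaped = r_task + F(s, s').
# Here the potential is (hypothetically) the estimated human trust in [0, 1].
GAMMA = 0.95

def potential(state):
    # Assumed state layout: a dict carrying the current trust estimate.
    return state["trust"]

def shaped_reward(task_reward, state, next_state, gamma=GAMMA):
    shaping = gamma * potential(next_state) - potential(state)
    return task_reward + shaping

s, s_next = {"trust": 0.4}, {"trust": 0.6}
print(shaped_reward(task_reward=1.0, state=s, next_state=s_next))  # 1.0 + 0.95*0.6 - 0.4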
... • (ℐ): [Ba-hutair et al., 2021], [Bao and Chen, 2012], [Chen et al., 2011], [Chen et al., 2014], [Chen et al., 2019], [Falcone and Sapienza, 2018], [Fortino et al., 2018], [Jayasinghe et al., 2018]; • (ℳ): [Falcone et al., 2015], [Guo et al., 2016], [Liu et al., 2013], [Messina et al., 2016], [Ruan et al., 2017], [Zhong et al., 2014]; • (ℋ): [Chen et al., 2020], [Edmonds et al., 2019], [Hu et al., 2018], [Lee et al., 2013], [Nikolaidis et al., 2016], [Vinanzi et al., 2019]. ...
... This does not apply to HMI, which refers only to direct experience. [Chen et al., 2020], [Edmonds et al., 2019], [Hu et al., 2018], [Lee et al., 2013], [Nikolaidis et al., 2016], [Vinanzi et al., 2019], [Xu and Dudek, 2015]; D-R (ℐ): [Bao and Chen, 2012], [Chen et al., 2011], [Chen et al., 2019]; (ℳ): [Guo et al., 2016], [Messina et al., 2016], [Zhong et al., 2014]; D-R-C (ℐ): [Ba-hutair et al., 2021], [Chen et al., 2014], [Fortino et al., 2018], [Jayasinghe et al., 2018]; (ℳ): [Falcone et al., 2015], [Liu et al., 2013], [Ruan et al., 2017] ...
... Therefore, although the specific dimensions vary according to the application domain, it is important to understand that they always refer to these aforementioned components of trust.
[Chen et al., 2014] user satisfaction, friendship, social contact, community of interest indistinguishable from competence
[Chen et al., 2019] performance it identifies fault and malicious behaviors willingness
[Falcone and Sapienza, 2018] efficiency, autonomy, control (feedback/explainability, intervention) positive willingness taken for granted autonomy, control (feedback-/explainability, intervention)
[Fortino et al., 2018] reliability, distance, helpfulness indistinguishable from competence
[Jayasinghe et al., 2018] knowledge (co-work, co-location, cooperativeness, frequency, duration, reward, mutuality, centrality, community of interest), experience cooperativeness cooperativeness data collection behavior, communication behavior semantically inconsistent
[Falcone et al., 2015] competence, willingness, certainty about the belief, certainty about the source, plausibility willingness willingness, certainty about the belief, certainty about the source, plausibility
[Guo et al., 2016] competence
[Liu et al., 2013] performance
[Messina et al., 2016] reliability indistinguishable from competence
[Ruan et al., 2017] impression, confidence indistinguishable from competence
[Zhong et al., 2014] competence, integrity, context integrity integrity
[Chen et al., 2020] performance the robot may intentionally change its performance to modulate human trust the robot may intentionally change its performance to modulate human trust
[Edmonds et al., 2019] competence, Explainability Explainability Explainability
[Hu et al., 2018] performance, cumulative trust, expectation bias, nationality, gender expectation bias
[Lee et al., 2013] non verbal behaviors
[Nikolaidis et al., 2016] performance, adaptability adaptability adaptability
[Vinanzi et al., 2019] human action, human belief maliciousness human belief, maliciousness
[Xu and Dudek, 2015] competence, intervention, direct feedback positive willingness taken for granted adaptability ...
... Human-robot Collaboration (HRC) could enhance the joint performance of human operators and robots in completing the assigned work [31,32]. A key factor that facilitates the collaboration of human operators and robots is the trust that they develop in each other [2,7,11,44]. Trust development and analysis have been investigated in the literature for different HRC settings, such as decentralized control of multiple unmanned vehicles for complex missions [9,49], performing surgical tasks [39], supervised control of robotic swarms [29], and the approaching behavior of human operators and robots when they collaborate in close proximity [10]. ...
... The authors in [45] discussed trust as a dynamic measure, which is captured using a Markov Decision Process (MDP) to model how the robot's performance affects the human. A Partially Observable Markov Decision Process (POMDP) has been used in [7] to infer the trust of a human through interactions with a robot that maximizes the overall team's performance. A trust architecture, named HST3-Architecture, is presented in [15] that uses explainable Artificial Intelligence for human-swarm teaming. ...
Article
Full-text available
In this study, a novel time-driven mathematical model for trust is developed considering human-multi-robot performance for a Human-robot Collaboration (HRC) framework. For this purpose, a model is developed to quantify human performance considering the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload and workloads due to the robots’ mistakes, and task complexity. The performance of the multi-robot team in the HRC setting is modeled based upon the rate of task assignment and completion as well as the mistake probabilities of the individual robots. Human trust in the HRC setting with single and multiple robots is modeled over different operation regions, namely the unpredictable, predictable, dependable, and faithful regions. The relative performance difference between the human operator and the robot is used to analyze the effect on the human operator’s trust in robots’ operation. The developed model is simulated for a manufacturing workspace scenario considering different task complexities and involving multiple robots to complete shared tasks. The simulation results indicate that for a constant multi-robot performance in operation, the human operator’s trust in robots’ operation improves whenever the comparative performance of the robots improves with respect to the human operator performance. The impact of robot hypothetical learning capabilities on human trust in the same HRC setting is also analyzed. The results confirm that a hypothetical learning capability allows robots to reduce human workloads, which improves human performance. The simulation result analysis confirms that the human operator’s trust in the multi-robot operation increases faster with the improvement of the multi-robot performance when the robots have a hypothetical learning capability. An empirical study was conducted involving a human operator and two collaborating robots with two different performance levels in a software-based HRC setting. The experimental results closely followed the pattern of the developed mathematical models when capturing human trust and performance in terms of human-multi-robot collaboration.
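As a rough illustration of the idea that trust tracks the relative performance difference between the robots and the human operator, the first-order update below nudges trust up or down with that difference. It is not the authors' time-driven model; the gain, bounds, and performance values are assumptions.

# Illustrative first-order trust update (not the authors' exact model).
def update_trust(trust, robot_perf, human_perf, gain=0.1):
    """Discrete-time update; performances are assumed normalized to [0, 1]."""
    trust += gain * (robot_perf - human_perf)
    return min(max(trust, 0.0), 1.0)      # keep trust bounded

trust = 0.5
for robot_perf, human_perf in [(0.6, 0.5), (0.7, 0.5), (0.8, 0.5), (0.5, 0.7)]:
    trust = update_trust(trust, robot_perf, human_perf)
    print(f"robot {robot_perf:.1f} vs human {human_perf:.1f} -> trust {trust:.2f}")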
... The software triggered controversial affairs, inducing mixed reactions (Pavlik 2023), with many being skeptical of future progress and potential threats to employment, employability, or sensibility (Yang 2022). This leads to the critical question of how to absorb AI by stressing the benefits and opening up public discourse for positive prospects that would be pro-humanity and significantly improve life quality (Oswald et al. 2020;Chen et al. 2020). The expansion gradually shifted the culture, flared up the public debate of the multiple application possibilities, and paved the way to a whole new dimension transcending science fiction and exceeding all expectations of its everyday usability. ...
... The use of generative artificial intelligence (AI) in occupational psychology is an emerging area with potential for various applications (Eyssel 2017;Oswald et al. 2020;Chen et al. 2020;Mazzone and Elgammal 2019;Makarius et al. 2020). Generally speaking, scholars in AI can be differentiated by their intrinsically mechanistic or illusionistic perspective (Coeckelbergh 2018;Bryson 2010), thus taking either a solid or weak AI stance (Duffy 2003). ...
Article
Full-text available
The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human-robot interaction (HRI) opened up the debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from the anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provided a comprehensive review of existing literature on HRI, with a special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that due to their advanced linguistic capabilities and ability to mimic human-like behavior, generative AIs like ChatGPT will continue to grow in popularity in pair with human rational empathy, tendency for personification and their advanced linguistic capabilities and ability to mimic human-like behavior. As they blur the boundaries between humans and robots, these models introduce fresh moral and philosophical dilemmas. Our research aims to extrapolate key trends and unique factors in HRI and to elucidate the technical aspects of generative AI that enhance its effectiveness in this field compared to traditional rule-based AI systems. We further discuss the challenges and limitations of applying generative AI in HRI, providing a future research agenda for AI optimization in diverse applications, including education, entertainment, and healthcare.
... For instance, one specific area of research focuses on providing explanations of the robots' behaviors [14,28,29,37,38], typically leading to higher perceived trustworthiness and, subsequently, trust in the robot. Other research directions include developing real-time trust prediction algorithms [18,19,36,39,41], modeling trust dynamics [12,40,41], developing trust repair strategies [15,22], and developing trust-aware planning [2,5,8,9,33,44]. ...
... Chen et al. [9] propose a trust-POMDP that is solved to generate optimal trust-based policies for the robot. They demonstrate it in a human-subject experiment involving pick-and-place tasks for a human-robotic arm team. ...
... Similarly, the authors of [19] studied the effects of human trust towards a robot to evaluate its effectiveness in patient care assistance. Human trust towards the robot [20] was also integrated into the robot's decision-making to improve the performance of the Human-Robot Team [21]. Conlon et al. [22] proposed a self-assessment method that provided a metric of the robot's confidence in completing the task. ...
Preprint
Full-text available
This paper presents a novel method to quantify Trust in HRI. It proposes an HRI framework for estimating the Robot Trust towards the Human in the context of a narrow and specified task. The framework produces a real-time estimation of an AI agent's Artificial Trust towards a Human partner interacting with a mobile teleoperation robot. The approach for the framework is based on principles drawn from Theory of Mind, including information about the human state, action, and intent. The framework creates the ATTUNE model for Artificial Trust Towards Human Operators. The model uses metrics on the operator's state of attention, navigational intent, actions, and performance to quantify the Trust towards them. The model is tested on a pre-existing dataset that includes recordings (ROSbags) of a human trial in a simulated disaster response scenario. The performance of ATTUNE is evaluated through a qualitative and quantitative analysis. The results of the analyses provide insight into the next stages of the research and help refine the proposed approach.
... Without this trust, the integration and effectiveness of robots in critical settings remain limited. Human-robot interaction is an area under active research, where a person's trust in a robotic agent is explicitly incorporated in robot planning and decision-making (Bhat et al., 2022; Chen et al., 2018, 2020; Xu and Dudek, 2016; Guo et al., 2021). While significant efforts have been made in understanding trust within dyadic partnerships of one human and one robot (National Academies of Sciences, Engineering, and Medicine, 2022), the dynamics of trust in multi-human multi-robot settings are largely unexplored. ...
Article
Full-text available
Trust is a crucial factor for effective human–robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human and robotic agents. To fill this important research gap, we present the Trust Inference and Propagation (TIP) model to model and estimate human trust in multi-human multi-robot teams. In a multi-human multi-robot team, we postulate that there exist two types of experiences that a human agent has with a robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N = 30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
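The toy functions below caricature the distinction the TIP model draws between direct and indirect experience: direct interaction outcomes move trust toward the observed result, while another human's reported trust is discounted by how much one trusts that teammate. The TIP model's actual equations are not reproduced here; the learning rates and weighting are illustrative assumptions.

# Very simplified caricature of combining direct and indirect experience when
# estimating trust in a robot (not the TIP model's published equations).
def direct_update(trust, robot_succeeded, lr=0.15):
    target = 1.0 if robot_succeeded else 0.0
    return trust + lr * (target - trust)

def indirect_update(trust, teammate_reported_trust, trust_in_teammate, lr=0.15):
    # Another human's report is discounted by how much we trust that human.
    return trust + lr * trust_in_teammate * (teammate_reported_trust - trust)

my_trust_in_drone = 0.5
my_trust_in_drone = direct_update(my_trust_in_drone, robot_succeeded=True)
my_trust_in_drone = indirect_update(my_trust_in_drone,
                                    teammate_reported_trust=0.9,
                                    trust_in_teammate=0.8)
print(round(my_trust_in_drone, 3))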
... Their work further proposed a trust-seeking reward function that balances team performance and trust preferences. Chen et al. [28] proposed the trust-POMDP as a means of introducing trust variables into human-robot interaction in ways that improve performance. They tested the performance of their approach using simulation and real-life experiments and found that their approach of factoring in trust improves human-robot team performance. ...
Preprint
Full-text available
Trust is essential in human-robot collaboration. Even more so in multi-human multi-robot teams where trust is vital to ensure teaming cohesion in complex operational environments. Yet, at the moment, trust is rarely considered a factor during task allocation and reallocation in algorithms used in multi-human, multi-robot collaboration contexts. Prior work on trust in single-human-robot interaction has identified that including trust as a parameter in human-robot interaction significantly improves both performance outcomes and human experience with robotic systems. However, very little research has explored the impact of trust in multi-human multi-robot collaboration, specifically in the context of task allocation. In this paper, we introduce a new trust model, the Expectation Comparison Trust (ECT) model, and employ it with three trust models from prior work and a baseline no-trust model to investigate the impact of trust on task allocation outcomes in multi-human multi-robot collaboration. Our experiment involved different team configurations, including 2 humans, 2 robots, 5 humans, 5 robots, and 10 humans, 10 robots. Results showed that using trust-based models generally led to better task allocation outcomes in larger teams (10 humans and 10 robots) than in smaller teams. We discuss the implications of our findings and provide recommendations for future work on integrating trust as a variable for task allocation in multi-human, multi-robot collaboration.
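To show how a trust estimate can enter task allocation in the spirit of this abstract (though not the ECT model itself), the sketch below weights each robot's expected task performance by the current trust in that robot and solves the resulting assignment problem with the Hungarian algorithm; the performance matrix and trust values are invented.

# Generic illustration of trust-weighted task allocation (not the ECT model).
import numpy as np
from scipy.optimize import linear_sum_assignment

expected_perf = np.array([[0.9, 0.6, 0.4],    # rows: robots, cols: tasks (assumed)
                          [0.5, 0.8, 0.7],
                          [0.6, 0.5, 0.9]])
trust = np.array([0.9, 0.4, 0.7])             # current human trust in each robot

score = trust[:, None] * expected_perf        # trust-weighted utility
robots, tasks = linear_sum_assignment(-score) # maximize by minimizing the negative
for r, t in zip(robots, tasks):
    print(f"robot {r} -> task {t} (utility {score[r, t]:.2f})")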
... Understanding human attitudes towards collaboration with robots is needed for shaping effective collaborative strategies and optimizing task allocation [19]. By considering factors such as human trust and decision-making processes, researchers have aimed to create fluent and efficient human-robot collaboration environments [20]. ...
Article
Full-text available
This study explores the potential of training robots using reinforcement learning (RL) to adapt their behavior based on human comfort levels during tasks. An experimental environment has been developed and made available to the research community, facilitating the replication of these experiments. The results demonstrate that adjusting a single comfort-related input parameter during training leads to significant variations in the robot’s behavior. Detailed discussions of the reward functions and obtained results validate these behavioral adaptations, confirming that robots can dynamically respond to human needs, thereby enhancing human-robot interaction. While the study highlights the effectiveness of this approach, it also raises the question of real-time comfort measurement, suggesting various systems for future exploration. These findings contribute to the development of more intuitive and emotionally responsive robots, offering new possibilities for future research in advancing human-robot interaction.
... Another study by Mainprice and Berenson [21] developed a human motion library using a Gaussian Mixture Model (GMM) for early prediction of human motion, although temporal aspects were not fully considered, affecting early prediction accuracy. Hawkins et al. in 2020 [22] predicted subsequent behaviors using Bayesian network inference, and Nikolaidis and Shah [23] utilized a Markov decision process to infer roles and subsequent actions in HRC scenarios. Despite advancements, the performance of action sequence prediction models remains constrained by the need for extensive, diverse labeled data. ...
Article
Full-text available
Real-time visual image prediction, crucial for directing robotic arm movements, represents a significant technique in artificial intelligence and robotics. The primary technical challenges involve the robot’s inaccurate perception and understanding of the environment, coupled with imprecise control of movements. This study proposes ForGAN-MCTS, a generative adversarial network-based action sequence prediction algorithm, aimed at refining visually guided rearrangement planning for movable objects. Initially, the algorithm unveils a scalable and robust strategy for rearrangement planning, capitalizing on the capabilities of a Monte Carlo Tree Search strategy. Secondly, to enable the robot’s successful execution of grasping maneuvers, the algorithm proposes a generative adversarial network-based real-time prediction method, employing a network trained solely on synthetic data for robust estimation of multi-object workspace states via a single uncalibrated RGB camera. The efficacy of the newly proposed algorithm is corroborated through extensive experiments conducted by using a UR-5 robotic arm. The experimental results demonstrate that the algorithm surpasses existing methods in terms of planning efficacy and processing speed. Additionally, the algorithm is robust to camera motion and can effectively mitigate the effects of external perturbations.
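Since the abstract builds its rearrangement planner on Monte Carlo Tree Search, the snippet below shows the generic textbook UCT selection rule that such planners typically use to balance exploration and exploitation; it is not the paper's ForGAN-MCTS implementation, and the child statistics are made up.

# Compact sketch of the UCT selection rule at the heart of Monte Carlo Tree Search.
import math

def uct_score(child_value_sum, child_visits, parent_visits, c=1.4):
    if child_visits == 0:
        return float("inf")                   # always try unvisited children first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Example: pick the child of a node with 20 visits.
children = [(3.0, 5), (4.5, 9), (0.0, 0)]     # (value_sum, visits) per child
best = max(range(len(children)),
           key=lambda i: uct_score(*children[i], parent_visits=20))
print("child chosen by UCT:", best)   # the unvisited child wins here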
... In Akash et al. (2019), a trust-workload partially observable Markov decision process (POMDP) model was trained and solved to generate optimal policies for a robot to control its transparency to improve the performance of the human-robot team. Further, Chen et al. (2020) presented a trust-POMDP model that can be solved to generate optimal policies for the robot to calibrate the human's trust and improve team performance. presented a reverse psychology model of human trust behavior and compared it with the disuse model. ...
... For instance, biases in available training data can lead to inaccurate transcriptions for non-native English speakers or other underrepresented populations [34], and it is well known that LLMs are prone to hallucination, which in this context may produce content that adheres to protocol but results in incorrect or unsafe actions [31,35]. Intelligent systems that acknowledge mistakes and double-check safety-critical operations can maintain trust with their human collaborators [5,36,37]. In the future OR, an X-ray language interface may be one component of a broader human-robot collaborative system. ...
Article
Full-text available
The expanding capabilities of surgical systems bring with them increasing complexity in the interfaces that humans use to control them. Robotic C-arm X-ray imaging systems, for instance, often require manipulation of independent axes via joysticks, while higher-level control options hide inside device-specific menus. The complexity of these interfaces hinder “ready-to-hand” use of high-level functions. Natural language offers a flexible, familiar interface for surgeons to express their desired outcome rather than remembering the steps necessary to achieve it, enabling direct access to task-aware, patient-specific C-arm functionality. We present an English language voice interface for controlling a robotic X-ray imaging system with task-aware functions for pelvic trauma surgery. Our fully integrated system uses a large language model (LLM) to convert natural spoken commands into machine-readable instructions, enabling low-level commands like “Tilt back a bit,” to increase the angular tilt or patient-specific directions like, “Go to the obturator oblique view of the right ramus,” based on automated image analysis. We evaluate our system with 212 prompts provided by an attending physician, in which the system performed satisfactory actions 97% of the time. To test the fully integrated system, we conduct a real-time study in which an attending physician placed orthopedic hardware along desired trajectories through an anthropomorphic phantom, interacting solely with an X-ray system via voice. Voice interfaces offer a convenient, flexible way for surgeons to manipulate C-arms based on desired outcomes rather than device-specific processes. As LLMs grow increasingly capable, so too will their applications in supporting higher-level interactions with surgical assistance systems.
... While the utilization of deep learning-cobot systems in manufacturing tasks has been explored in previous works [20], [21], further work can be done for medical and health assistance applications to further enhance human-robot interactions. This study serves as a proof of concept for the integration of DL and cobots in collaborative assembly tasks. ...
Conference Paper
Full-text available
The rising demand for adaptable and user-friendly forms of interaction in the field of manufacturing and assembly tasks has led to increased attention on human-robot collaboration. Collaborative robots (cobots) have emerged as a promising solution to address this demand. In this study, we propose the integration and application of cobots along with a pre-trained deep learning model to assist users in assembly activities, specifically part handover and storage. The human-robot interaction is facilitated through a hand tracking system that enables a close approach to the user's hand. Testing on the system yielded 99% accuracy. The incorporation of deep learning models in cobot applications holds substantial potential for industry transformation, with implications spanning manufacturing, healthcare, and assistive technologies. This research serves as a compelling proof of concept, showcasing the effective implementation of deep learning models to facilitate close human-robot interactions.
... Many methods have been proposed for AI systems to affect human reliance (behavioral aspect) and trust (attitudinal and psychological aspect) [13]. Chen et al. proposed a decision-making model that allows an AI to influence the level of human trust through its actions [14]. Ribeiro et al. proposed LIME, a method for generating explanations of AI predictions, and demonstrated its performance for a variety of models in trust-related tasks [15]. ...
... Chen et al. focused on influencing human reliance by changing an agent's action. Their trust-POMDP is a computational model for deciding an action with awareness of human trust [12]. This paper considers the situation of calibrating reliance by explicitly communicating an AI's capability through RCCs. ...
Article
Full-text available
Understanding what an AI system can and cannot do is necessary for end-users to use the AI properly without being over- or under-reliant on it. Reliance calibration cues (RCCs) communicate an AI’s capability to users, resulting in optimizing their reliance on it. Previous studies have typically focused on continuously presenting RCCs, and although providing an excessive amount of RCCs is sometimes problematic, limited consideration has been given to the question of how an AI can selectively provide RCCs. This paper proposes vPred-RC, an algorithm in which an AI decides whether to provide an RCC and which RCC to provide. It evaluates the influence of an RCC on user reliance with a cognitive model that predicts whether a human will assign a task to an AI agent with or without an RCC. We tested vPred-RC in a human-AI collaborative task called the collaborative CAPTCHA (CC) task. First, our reliance prediction model was trained on a dataset of human task assignments for the CC task and found to achieve 83.5% accuracy. We further evaluated vPred-RC’s dynamic RCC selection in a user study. As a result, the RCCs selected by vPred-RC enabled participants to more accurately assign tasks to an AI when and only when the AI succeeded compared with randomly selected ones, suggesting that vPred-RC can successfully calibrate human reliance with a reduced number of RCCs. The selective presentation of RCCs has the potential to enhance the efficiency of collaboration between humans and AIs with fewer communication costs.
... from augmenting human capabilities (e.g., human-swarm interaction [1]), to providing aids in decision making (e.g., medical decision making [2]), to fully automating tasks under human supervision (e.g. autonomous driving [3]). [4]. ...
Conference Paper
Full-text available
For a wide variety of envisioned humanitarian and commercial applications that involve a human user commanding a swarm of robotic systems, developing human-swarm interaction (HSI) principles and interfaces calls for systematic virtual environments to study such HSI implementations. Specifically, such studies are fundamental to achieving HSI that is operationally efficient and can facilitate trust calibration through the collection-use-modeling of cognitive information. However, there is a lack of such virtual environments, especially in the context of studying HSI in different operationally relevant contexts. Building on our previous work in swarm simulation and computer game-based HSI, this paper develops a comprehensive virtual environment to study HSI under varying swarm size, swarm compliance, and swarm-to-human feedback. This paper demonstrates how this simulation environment informs the development of an indoor physical (experimentation) environment to evaluate the human cognitive model. New approaches are presented to simulate physical assets based on physical experiment-based calibration and the effects that this presents on the human users. Key features of the simulation environment include medium fidelity simulation of large teams of small aerial and ground vehicles (based on the Pybullet engine), a graphical user interface to receive human command and provide feedback (from swarm assets) to human in the case of non-compliance with commands, and a lab-streaming layer to synchronize physiological data collection (e.g., related to brain activity and eye gaze) with swarm state and human commands.
... Robot adaptation to human factors such as trust [12], physical fatigue [13], and mental workload [14] has also been studied. However, robot adaptation to cognitive fatigue is also important: cognitive fatigue strongly affects a human's ability to perform HRC tasks [2] and is a root cause of several other human factors such as trust, situation awareness, and dissatisfaction (which leads to high mental workload). ...
... As a result, operators must interact with them daily while promoting their wellbeing (European Commission, 2021). It is, therefore, important that they become partners in their everyday lives in a symbiotic manner so that they can have a natural, fluid, and satisfying experience (Kahn Jr et al., 2007;Boden et al., 2017;Chen et al., 2020;Lindblom and Alenljung, 2020). In recent years, the potential for humans and robots to work together has been recognized as a viable approach to support human workers by taking on hazardous and physically demanding tasks (Colim et al., 2021). ...
Article
Full-text available
Humans and robots will increasingly have to work together in the new industrial context. Therefore, it is necessary to improve the User Experience, Technology Acceptance, and overall wellbeing to achieve a smoother and more satisfying interaction while obtaining the maximum performance possible out of it. For this reason, it is essential to analyze these interactions to enhance User Experience. The heuristic evaluation is an easy-to-use, low-cost method that can be applied at different stages of a design process in an iterative manner. Despite these advantages, there is rarely a list of heuristics in the current literature that evaluates Human-Robot interactions from a combined User Experience, Technology Acceptance, and Human-Centered approach. Such an approach should integrate key aspects like safety, trust, and perceived safety, ergonomics and workload, inclusivity, and multimodality, as well as robot characteristics and functionalities. Therefore, a new set of heuristics, the HEUROBOX tool, is presented in this work to help practitioners and researchers in the assessment of human-robot systems in industrial environments. The HEUROBOX tool clusters design guidelines and methodologies as a logical list of heuristics for human-robot interaction and comprises four categories: Safety, Ergonomics, Functionality, and Interfaces. They include 84 heuristics in the basic evaluation, while the advanced evaluation lists a total of 228 heuristics in order to adapt the tool to the evaluation of different industrial requirements. Finally, the set of new heuristics has been validated by experts using the System Usability Scale (SUS) questionnaire, and the categories have been prioritized in order of their importance in the evaluation of Human-Robot Interaction through the Analytic Hierarchy Process (AHP).
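For context on the AHP prioritization mentioned at the end of this abstract, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector, which is the standard AHP computation; the comparison values are hypothetical and do not come from the HEUROBOX study.

# Brief sketch of Analytic Hierarchy Process (AHP) priority weights from a
# pairwise comparison matrix; the Saaty-scale values below are made up.
import numpy as np

# Hypothetical comparisons among four categories, e.g. Safety, Ergonomics,
# Functionality, Interfaces.
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])  # principal eigenvector
weights = principal / principal.sum()                          # normalize to sum to 1
print("category weights:", weights.round(3))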
... - This model is not dynamic
- True/false/collision alarms don't fully represent real world situations
- Only one demographic (young students)
- Only one non-driving task examined
Chen et al. (2020) [33]
- Higher performance than myopic robot
- Succeeded at calibrating human trust to match robot's capability
- In high trust, robot intentionally failed to match user's performance
- In low trust, prioritized low risk
- Computationally expensive
- Modeled trust as a single real-valued variable rather than a combination of multiple factors
- Individual differences of human users were not considered [241] ...
Thesis
Full-text available
Embodied Virtual Agents (EVAs) are human-like computer agents which can serve as assistants and companions in different tasks. They have numerous applications such as interfaces for social robots, educational tutors, game counterparts, medical assistants, and companions for the elderly and/or individuals with psychological or behavioral conditions. Forming a reliable and trustworthy interaction is critical to the success and acceptability of this new type of user interface. This dissertation explores the interaction between humans and EVAs in cooperative and uncooperative conditions to increase understanding of how trust operates in these interactions. It also investigates how interactions with one agent influences the perception of other agents. In addition to participants achieving significantly higher performance and having higher trust for the cooperative agent, we found that participants’ trust for the cooperative agent was significantly higher if they interacted with an uncooperative agent in one of the sets, compared to working with cooperative agents in both sets. The results suggest that the trust for an EVA is relative and it is dependent on agent behavior and user history of interaction with different agents. We found out that biases such as primacy bias, can contribute into humans trusting one agent over the other even if they look similar and serve the same purpose. Primacy bias can also be responsible for having higher trust for the first agent when working with multiple cooperative agents having the same behavior and performing the same task. We also observed that working with one agent will have a significant effect on users’ initial trust for other agents within the same system, even before collaborating with the agent in an actual task. Based on lessons learnt through conducting the experiments, specifically through users’ personal reflections on their interactions with EVAs, we discuss ethical issues that arise in interactions with virtual worlds. Based on the experimental results obtained in the user experiments, and the findings in previous literature in the field of trust between humans and virtual agents, we suggest guidelines for trust-adaptive virtual agents. We provide justifications for each guideline to increase transparency and provide additional resources to researchers and developers who are interested in these suggestions. The results of this dissertation provide insights into interaction between humans and virtual agents in scenarios which require the collaboration of humans and computers under uncertainty in a timely and efficient way. It also provides directions for future research to use EVAs as primary user interfaces due to the similarity of interaction with such agents to natural human-human interaction and possibility of building high-level, resilient trust toward them.
... In HRI, factors affecting trust are classified into three categories [123]: 1) Human-related factors (i.e., including ability-based factors, characteristics); 2) Robotrelated factors (i.e., including performance-based factors and attribute-based factors); 3) Environmental factors (i.e., including team collaboration, tasking, etc.). Although there is no unified definition of trust in the literature, researchers take a utilitarian approach to defining trust for HRI adopting a trust definition that gives robots practical benefits in developing appropriate behaviors through planning and control [124], [125], [126]. ...
Preprint
As a unifying concept in economics, game theory, and operations research, even in the Robotics and AI field, the utility is used to evaluate the level of individual needs, preferences, and interests. Especially for decision-making and learning in multi-agent/robot systems (MAS/MRS), a suitable utility model can guide agents in choosing reasonable strategies to achieve their current needs and learning to cooperate and organize their behaviors, optimizing the system's utility, building stable and reliable relationships, and guaranteeing each group member's sustainable development, similar to the human society. Although these systems' complex, large-scale, and long-term behaviors are strongly determined by the fundamental characteristics of the underlying relationships, there has been less discussion on the theoretical aspects of mechanisms and the fields of applications in Robotics and AI. This paper introduces a utility-orient needs paradigm to describe and evaluate inter and outer relationships among agents' interactions. Then, we survey existing literature in relevant fields to support it and propose several promising research directions along with some open problems deemed necessary for further investigations.
Chapter
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior literature on value alignment makes an inherent assumption that aligning the values of the robot with that of the human benefits the team. This assumption, however, has not been empirically verified. Moreover, prior literature does not account for human’s trust in the robot when analyzing human-robot value alignment. Thus, a research gap needs to be bridged by answering two questions: How does alignment of values affect trust? Is it always beneficial to align the robot’s values with that of the human? We present a simulation study and a human-subject study to answer these questions. Results from the simulation study show that alignment of values is important for trust when the overall risk level of the task is high. We also present an adaptive strategy for the robot that uses Inverse Reinforcement Learning (IRL) to match the values of the robot with those of the human during interaction. Our simulations suggest that such an adaptive strategy is able to maintain trust across the full spectrum of human values. We also present results from an empirical study that validate these findings from simulation. Results indicate that real-time personalized value alignment is beneficial to trust and perceived performance by the human when the robot does not have a good prior on the human’s values.
Article
For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior.
Preprint
The growing interest in human-robot collaboration (HRC), where humans and robots cooperate towards shared goals, has seen significant advancements over the past decade. While previous research has addressed various challenges, several key issues remain unresolved. Many domains within HRC involve activities that do not necessarily require human presence throughout the entire task. Existing literature typically models HRC as a closed system, where all agents are present for the entire duration of the task. In contrast, an open model offers flexibility by allowing an agent to enter and exit the collaboration as needed, enabling them to concurrently manage other tasks. In this paper, we introduce a novel multiagent framework called oDec-MDP, designed specifically to model open HRC scenarios where agents can join or leave tasks flexibly during execution. We generalize a recent multiagent inverse reinforcement learning method - Dec-AIRL to learn from open systems modeled using the oDec-MDP. Our method is validated through experiments conducted in both a simplified toy firefighting domain and a realistic dyadic human-robot collaborative assembly. Results show that our framework and learning method improves upon its closed system counterpart.
Article
Objective This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior. Background Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent threat of cyberattacks, which pose a formidable threat to the secure operation of AVs and, consequently, human trust. Method A driving simulator experiment was conducted with 40 participants who were randomly assigned to one of two groups: (1) Experience and Feedback and (2) Experience-Only. All participants experienced three drives: Baseline, Attack, and Post-Attack Drive. The Attack Drive prevented participants from properly operating the vehicle in multiple instances. Only the “Experience and Feedback” group received a security update in the Post-Attack drive, which was related to the mitigation of the vehicle’s vulnerability. Trust and foot positions were recorded for each drive. Results Findings suggest that attacks on AVs significantly degrade human trust, and trust remains degraded even after an error-free drive. Providing an update about the mitigation of the vulnerability did not significantly affect trust repair. Conclusion Trust toward AVs should be analyzed as an emergent and dynamic construct that requires autonomous systems capable of calibrating trust after malicious attacks through appropriate experience and interaction design. Application The results of this study can be applied when building driver and situation-adaptive AI systems within AVs.
Article
When humans and robots work together to accomplish tasks with dynamic uncertainty, the robots should perceive human motion intentions so as to cooperate with humans and increase efficacy. In this study, we propose a hierarchical motion intention prediction model for human-robot collaboration, in which the bottom level acquires human motion information, the middle level recognizes motion states, and the top level predicts motion intentions. Compared with existing methods, our model fuses task-level human behavioral pattern prediction with instantaneous continuous motion intent decoding. Therefore, the robot controller can generate a collaborative trajectory in advance and adjust the key parameters (forces, velocities, etc.) in real time according to human motions. We quantitatively verify the proposed model with 10 subjects in a human-robot sawing task. The results show that the hierarchical model can effectively reduce human energy consumption and improve the average speed of the task. Meanwhile, subjective metrics indicate that subjects perceive robots employing the hierarchical model as fostering better cooperation and delivering greater assistance. Our study systematically demonstrates that the proposed hierarchical model significantly enhances the efficiency of human-robot co-manipulation, marking a step forward compared with existing works. Future studies will focus on investigating more complex and general tasks.
Conference Paper
Full-text available
Artificial Intelligence (AI) has had a tremendous impact on civilian and military applications. However, for the foreseeable future, traditional AI will remain inadequate for operating independently in dynamic and complex environments because of issues such as explicit and implicit biases and limited explainability. For human-autonomy teaming (HAT), trustworthy AI is crucial since machines and humans collaborate for shared learning and joint reasoning on a given mission at combat speed with high accuracy, trust, and assurance. In this paper, we present a brief survey of recent advances, some key challenges, and future research directions for neuro-symbolic-reinforcement-learning-enabled trustworthy HAT.
Article
This article introduces the notion of a carelessness level into robot action planners so that safety and efficiency are optimized. The core idea is to make the robot’s plan less sensitive to the behavior of careless humans who may inattentively violate safety constraints and degrade efficiency. More precisely, our planner reduces the opportunities for careless humans to put themselves in danger or hamper the efficiency of the robot’s plan. The effectiveness of the proposed planner is demonstrated through simulation studies on a packaging line and on a collaborative assembly line. Results show that the proposed scheme can improve efficiency and safety in both examples.
Article
Full-text available
Human–robot collaboration has gained attention in the field of manufacturing and assembly tasks, necessitating the development of adaptable and user-friendly forms of interaction. To address this demand, collaborative robots (cobots) have emerged as a viable solution. Deep Learning has played a pivotal role in enhancing robot capabilities and facilitating their perception and understanding of the environment. This study proposes the integration of cobots and Deep Learning to assist users in assembly tasks such as part handover and storage. The proposed system includes an object classification system to categorize and store assembly elements, a voice recognition system to classify user commands, and a hand-tracking system for close interaction. Tests were conducted for each isolated system and for the complete application as used by different individuals, yielding an average accuracy of 91.25%. The integration of Deep Learning into cobot applications has significant potential for transforming industries, including manufacturing, healthcare, and assistive technologies. This work serves as a proof of concept for the use of several neural networks and a cobot in a collaborative task, demonstrating communication between the systems and proposing an evaluation approach for individual and integrated systems.
Article
Full-text available
Trust modeling is a topic that first gained interest in organizational studies and then in human factors and automation. Thanks to recent advances in human-robot interaction (HRI) and human-autonomy teaming, human trust in robots has gained growing interest among researchers and practitioners. This article focuses on a survey of computational models of human-robot trust and their applications in robotics and robot controls. The motivation is to provide an overview of the state-of-the-art computational methods to quantify trust so as to provide feedback and situational awareness in HRI. Different from other existing survey papers on human-robot trust models, we seek to provide in-depth coverage of trust model categorization, formulation, and analysis, with a focus on their utilization in robotics and robot controls. The paper starts with a discussion of the difference between human-robot trust and general agent-agent trust, interpersonal trust, and human trust in automation and machines. A list of factors impacting human-robot trust, different trust measurement approaches, and their corresponding scales is summarized. We then review existing computational human-robot trust models and discuss the pros and cons of each category of models. These include performance-centric algebraic, time-series, Markov decision process (MDP)/Partially Observable MDP (POMDP)-based, Gaussian-based, and dynamic Bayesian network (DBN)-based trust models. Following the summary of each computational human-robot trust model, we examine its utilization in robot control applications, if any. We also enumerate the main limitations and open questions in this field and discuss potential future research directions.
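As a flavor of the simplest category surveyed, the sketch below implements a performance-centric, time-series trust update in Python; the coefficients and clipping are illustrative rather than drawn from any of the surveyed models.

```python
import numpy as np

def linear_trust_update(trust, performance, alpha=0.8, beta=0.2):
    """One step of a simple performance-centric, time-series trust model:
    t_{k+1} = alpha * t_k + beta * p_k, clipped to [0, 1].
    Illustrative only; the survey also covers richer model classes
    (Gaussian, DBN-based, and POMDP-based trust models)."""
    return float(np.clip(alpha * trust + beta * performance, 0.0, 1.0))

trust = 0.5
for p in [1, 1, 0, 1, 0, 0, 1]:          # 1 = robot success, 0 = failure
    trust = linear_trust_update(trust, p)
    print(round(trust, 3))
```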
Chapter
Full-text available
A Collaborative Artificial Intelligence System (CAIS) is a cyber-physical system that learns actions in collaboration with humans in a shared environment to achieve a common goal. In particular, a CAIS is equipped with an AI model to support the decision-making process of this collaboration. When an event degrades the performance of the CAIS (i.e., a disruptive event), this decision-making process may be hampered or even stopped. Thus, it is of paramount importance to monitor the learning of the AI model, and eventually support its decision-making process in such circumstances. This paper introduces a new methodology to automatically support the decision-making process in CAIS when the system experiences performance degradation after a disruptive event. To this end, we develop a framework that consists of three components: one manages or simulates the CAIS’s environment and disruptive events, the second automates the decision-making process, and the third provides a visual analysis of CAIS behavior. Overall, our framework automatically monitors the decision-making process, intervenes whenever a performance degradation occurs, and recommends the next action. We demonstrate our framework by implementing an example with a real-world collaborative robot, where the framework recommends the next action that balances minimizing the recovery time (i.e., resilience) against minimizing adverse energy effects (i.e., greenness).
Conference Paper
Full-text available
For effective collaboration between humans and intelligent agents that employ machine learning for decision-making, humans must understand what agents can and cannot do to avoid over/under-reliance. A solution to this problem is adjusting human reliance through communication using reliance calibration cues (RCCs) to help humans assess agents' capabilities. Previous studies typically attempted to calibrate reliance by continuously presenting RCCs, and when an agent should provide RCCs remains an open question. To answer this, we propose Pred-RC, a method for selectively providing RCCs. Pred-RC uses a cognitive reliance model to predict whether a human will assign a task to an agent. By comparing the prediction results for both cases with and without an RCC, Pred-RC evaluates the influence of the RCC on human reliance. We tested Pred-RC in a human-AI collaboration task and found that it can successfully calibrate human reliance with a reduced number of RCCs.
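The selective-cue idea can be sketched as follows: predict the human's reliance with and without the cue, and present the cue only when it changes the predicted decision toward the appropriate one. The reliance model here is a hypothetical stand-in (a logistic function of perceived capability), not the cognitive model used by Pred-RC.

```python
import math

def predicted_reliance(perceived_capability, cue_shown, cue_effect=0.2):
    """Stand-in reliance model: probability the human assigns the task to the
    agent, as a logistic function of perceived capability. A cue is assumed
    to shift perceived capability (illustrative assumption)."""
    x = perceived_capability + (cue_effect if cue_shown else 0.0)
    return 1.0 / (1.0 + math.exp(-10 * (x - 0.5)))

def should_show_cue(perceived_capability, true_capability, threshold=0.5):
    """Show the cue only if it flips the predicted reliance decision toward
    the appropriate one (rely iff the agent is truly capable)."""
    rely_without = predicted_reliance(perceived_capability, cue_shown=False) > threshold
    rely_with = predicted_reliance(perceived_capability, cue_shown=True) > threshold
    appropriate = true_capability > threshold
    return (rely_with == appropriate) and (rely_without != appropriate)

print(should_show_cue(perceived_capability=0.35, true_capability=0.8))  # True: cue helps
print(should_show_cue(perceived_capability=0.70, true_capability=0.8))  # False: already relies
```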
Article
Full-text available
Purpose – The purpose of this article is to present factors affecting the quality of human–robot collaboration (HRC) in the workplace. The impact of trust is described in detail. According to the Polish Economic Institute, between 1993 and 2018, the number of industrial robots in the world increased from 557,000 to 2.4 million. Implemented robots cease to be just tools in human hands. Thanks to their autonomy, they are becoming known as teammates. Research method – The article is based on a literature review conducted to identify factors, particularly trust, that affect human–robot collaboration. Results – Research conducted in the HRI area shows that human trust in robots is one of the most important determinants of proper cooperation. Beyond that, the relevance of areas such as robot reliability and predictability, human personality, and robot appearance was pointed out. The mentioned areas have a direct impact on the formation and level of trust in human–robot teams. Originality / value / implications / recommendations – The publication highlights the relevance of the area of human–robot collaboration (HRC) as part of the implementation of robots in modern enterprises. The article details which elements in the formation of HRC play the most important role. The topic of HRC is still little known in Poland, and as the referenced literature indicates, the correct cooperation of humans and robots is an important element affecting the success of robotization. Learning how HRC works is one of the challenges facing today’s organizations.
Conference Paper
Full-text available
Existing research assessing human operators' trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human's entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of "area under the trust curve" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the "cry wolf" effect -- wherein human operators begin to reject an automated system due to repeated false alarms.
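A minimal way to compute the "area under the trust curve" measure is to integrate repeated real-time trust ratings over the interaction and normalize by its duration, as in the Python sketch below (illustrative computation only; the study's exact scaling may differ).

```python
def trust_of_entirety(times, trust_ratings):
    """Summarize an interaction by the normalized area under the trust curve.
    `trust_ratings` are repeated real-time trust reports (e.g., on a 0-1 scale)
    collected at `times`; the result is the time-averaged trust over the whole
    interaction."""
    assert len(times) == len(trust_ratings) >= 2
    area = 0.0
    for (t0, t1), (y0, y1) in zip(zip(times, times[1:]),
                                  zip(trust_ratings, trust_ratings[1:])):
        area += 0.5 * (y0 + y1) * (t1 - t0)     # trapezoidal rule
    return area / (times[-1] - times[0])        # normalize by total duration

# Example: trust dips after a false alarm at t = 30 s, then partially recovers.
times = [0, 10, 20, 30, 40, 50, 60]
ratings = [0.5, 0.6, 0.7, 0.4, 0.5, 0.6, 0.65]
print(round(trust_of_entirety(times, ratings), 3))
```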
Article
Full-text available
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
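A minimal end-to-end example, assuming the PyStan 2.x interface named in the abstract (newer CmdStanPy/`stan` interfaces differ), is sketched below: a Bernoulli model for a success probability, compiled and then sampled with NUTS.

```python
import pystan  # assumes the PyStan 2.x interface; newer interfaces differ

# Minimal Stan program: Bernoulli success probability with a uniform prior,
# e.g., the probability that a robot completes a task successfully.
model_code = """
data {
  int<lower=0> N;
  int<lower=0, upper=1> y[N];
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);
  y ~ bernoulli(theta);
}
"""

data = {"N": 8, "y": [1, 1, 0, 1, 1, 1, 0, 1]}
sm = pystan.StanModel(model_code=model_code)        # compiles the model
fit = sm.sampling(data=data, iter=2000, chains=4)   # NUTS sampling
print(fit)                                          # posterior summary for theta
```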
Article
Full-text available
POMDPs provide a principled framework for planning under uncertainty, but are computationally intractable, due to the "curse of dimensionality" and the "curse of history". This paper presents an online POMDP algorithm that alleviates these difficulties by focusing the search on a set of randomly sampled scenarios. A Determinized Sparse Partially Observable Tree (DESPOT) compactly captures the execution of all policies on these scenarios. Our Regularized DESPOT (R-DESPOT) algorithm searches the DESPOT for a policy, while optimally balancing the size of the policy and its estimated value obtained under the sampled scenarios. We give an output-sensitive performance bound for all policies derived from a DESPOT, and show that R-DESPOT works well if a small optimal policy exists. We also give an anytime algorithm that approximates R-DESPOT. Experiments show strong results, compared with two of the fastest online POMDP algorithms. Source code along with experimental settings are available at http://bigbird.comp.nus.edu.sg/pmwiki/farm/appl/.
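The core idea of evaluating policies on a small set of determinized scenarios can be sketched as follows; `policy`, `simulator`, and `belief_sampler` are hypothetical callables, and the sketch omits the tree construction and regularization that define R-DESPOT.

```python
import random

def evaluate_policy_on_scenarios(policy, simulator, belief_sampler,
                                 num_scenarios=20, depth=10, discount=0.95):
    """Estimate a policy's value by averaging discounted returns over a small
    set of sampled scenarios, each determinized by a fixed random seed (the
    scenario's random stream). Illustrative sketch of scenario-based online
    POMDP evaluation, not the full R-DESPOT algorithm."""
    total = 0.0
    for k in range(num_scenarios):
        rng = random.Random(k)            # fixed random stream = determinized scenario
        state = belief_sampler(rng)       # sample an initial state from the belief
        ret, gamma, obs = 0.0, 1.0, None
        for _ in range(depth):
            action = policy(obs)                          # policy maps last observation (None at first) to an action
            state, obs, reward = simulator(state, action, rng)
            ret += gamma * reward
            gamma *= discount
        total += ret
    return total / num_scenarios
```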
Article
Full-text available
An important aspect of a robot's social behavior is to convey the right amount of trustworthiness. Task performance has been shown to be an important source of trustworthiness judgments. Here, we argue that factors such as a robot's behavioral style can play an important role as well. Our approach to studying the effects of a robot's performance and behavioral style on human trust involves experiments with simulated robots in video human-robot interaction (VHRI) and immersive virtual environments (IVE). Although VHRI and IVE settings cannot substitute for genuine interaction with a real robot, they can provide useful complementary approaches to experimental research in social human-robot interaction. VHRI enables rapid prototyping of robot behaviors. Simulating human-robot interaction in IVEs can be a useful tool for measuring human responses to robots and help avoid the many constraints caused by real-world hardware. However, there are also difficulties with the generalization of results from one setting (e.g., VHRI) to another (e.g., IVE or the real world), which we discuss. In this paper, we use animated robot avatars in VHRI to rapidly identify robot behavioral styles that affect human trust assessment of the robot. In a subsequent study, we use an IVE to measure behavioral interaction between humans and an animated robot avatar equipped with behaviors from the VHRI experiment. Our findings reconfirm that a robot's task performance influences its trustworthiness, but the behavioral style identified in the VHRI study did not influence the robot's trustworthiness in the IVE study.
Article
Full-text available
As robots venture into new application domains as autonomous vehicles on the road or as domestic helpers at home, they must recognize human intentions and behaviors in order to operate effectively. This paper investigates a new class of motion planning problems with uncertainty in human intention. We propose a method for constructing a practical model by assuming a finite set of unknown intentions. We first construct a motion model for each intention in the set and then combine these models together into a single Mixed Observability Markov Decision Process (MOMDP), which is a structured variant of the more common Partially Observable Markov Decision Process (POMDP). By leveraging the latest advances in POMDP/MOMDP approximation algorithms, we can construct and solve moderately complex models for interesting robotic tasks. Experiments in simulation and with an autonomous vehicle show that the proposed method outperforms common alternatives because of its ability to recognize intentions and to use that information effectively for decision making.
Conference Paper
Full-text available
Much robotics research explores how robots can clearly communicate true information. Here, we focus on the counterpart: communicating false information, or hiding information altogether – in one word, deception. Robot deception is useful in conveying intentionality, and in making games against the robot more engaging. We study robot deception in goal-directed motion, in which the robot is concealing its actual goal. We present an analysis of deceptive motion, starting with how humans would deceive, moving to a mathematical model that enables the robot to autonomously generate deceptive motion, and ending with a study on the implications of deceptive motion for human robot interactions.
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a ‘trust transfer function’ is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults, we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
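The kind of "trust transfer function" described here can be illustrated with a simple autoregressive update in which trust tracks recent automation performance, drops after faults, and drives the choice between automatic and manual control. The coefficients below are illustrative, not the fitted values from the cited study.

```python
def trust_step(prev_trust, performance, fault, a=0.85, b=0.15, c=0.3):
    """One step of an ARX-style trust dynamics model in the spirit of a
    'trust transfer function': trust tracks recent automation performance
    and drops after faults. Coefficients are made up for illustration."""
    return max(0.0, min(1.0, a * prev_trust + b * performance - c * fault))

def choose_automatic(trust, self_confidence):
    """Allocation rule suggested by the abstract: prefer automatic control
    when trust in the automation exceeds self-confidence in manual control."""
    return trust > self_confidence

trust, self_confidence = 0.7, 0.6
for perf, fault in [(1, 0), (1, 0), (0, 1), (1, 0), (1, 0)]:
    trust = trust_step(trust, perf, fault)
    print(round(trust, 3), "automatic" if choose_automatic(trust, self_confidence) else "manual")
```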
Article
Full-text available
One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to effectively measure trust. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A 3-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed to better understand similarities and differences in the concepts of trust and distrust, and among the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
Article
Full-text available
Tested a theoretical model of interpersonal trust in close relationships with 47 dating, cohabiting, or married couples (mean ages were 31 yrs for males and 29 yrs for females). The validity of the model's 3 dimensions of trust—predictability, dependability, and faith—was examined. Ss completed scales designed to measure liking and loving, trust, and motivation for maintaining the relationship. An analysis of the instrument measuring trust was consistent with the notion that the predictability, dependability, and faith components represent distinct and coherent dimensions. The perception of intrinsic motives in a partner emerged as a dimension, as did instrumental and extrinsic motives. As expected, love and happiness were closely tied to feelings of faith and the attribution of intrinsic motivation to both self and partner. Women appeared to have more integrated, complex views of their relationships than men: All 3 forms of trust were strongly related, and attributions of instrumental motives in their partners seemed to be self-affirming. There was a tendency for Ss to view their own motives as less self-centered and more exclusively intrinsic than their partner's motives. (25 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Investigated the effects of varying distributions of success and failure on attributions of intellectual ability. In Exp. I-IV undergraduate Ss confronted a stimulus person who solved 15 out of 30 problems in a random, descending, or ascending success pattern. In Exp. V only the descending and ascending patterns were compared. Contrary to prediction, the performer who showed improvement (ascending success) was not consistently judged to be more able than the performer with randomly spaced successes. The performer with a descending success rate, however, was consistently judged to be more intelligent and was expected to outperform those with either ascending or random patterns. Memory for past performance was uniformly distorted in favor of recalling more success for the descending performer and less success for the ascending and random performers. Neither this measure nor ratings of intelligence required, for their discriminating effects, that S himself solve the problems in parallel with the person being judged. In the final experiment S himself performed in an improving, deteriorating, or random but stable fashion, and estimated his future performance. Under these circumstances, the ascending performer was more confident about his ability than the descending or random performer, reversing the picture of the 1st 5 experiments. Results are discussed in terms of the salience of early information in attributing ability and the role of social comparison processes. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The influence of teammates' shared mental models on team processes and performance was tested using 56 undergraduate dyads who "flew" a series of missions on a personal-computer-based flight-combat simulation. The authors both conceptually and empirically distinguished between teammates' task- and team-based mental models and indexed their convergence or "sharedness" using individually completed paired-comparisons matrices analyzed using a network-based algorithm. The results illustrated that both shared-team- and task-based mental models related positively to subsequent team process and performance. Furthermore, team processes fully mediated the relationship between mental model convergence and team effectiveness. Results are discussed in terms of the role of shared cognitions in team effectiveness and the applicability of different interventions designed to achieve such convergence. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Conference Paper
Full-text available
This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10^18 and 10^56 states, respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.
Conference Paper
Full-text available
Motion planning in uncertain and dynamic environments is an essential capability for autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for solving such problems, but they are often avoided in robotics due to high computational complexity. Our goal is to create practical POMDP algorithms and software for common robotic tasks. To this end, we have developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency. In simulation, we successfully applied the algorithm to a set of common robotic tasks, including instances of coastal navigation, grasping, mobile robot exploration, and target tracking, all modeled as POMDPs with a large number of states. In most of the instances studied, our algorithm substantially outperformed one of the fastest existing point-based algorithms. A software package implementing our algorithm will soon be released at
Article
Full-text available
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been applied to various robotic tasks. However, solving POMDPs exactly is computationally intractable. A major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot’s state is not fully observable, some components of the state may still be so. We use a factored model to represent separately the fully and partially observable components of a robot’s state and derive a compact lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach improves the performance of a leading point-based POMDP algorithm by many times.
Article
Full-text available
Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this 'exploration-exploitation' dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions.
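The exploration-exploitation trade-off studied here is commonly characterized with a softmax ("Boltzmann") choice rule over incrementally learned values; the Python sketch below shows that rule on a toy bandit with made-up parameters.

```python
import math, random

def softmax_choice(values, temperature=0.2):
    """Softmax ('Boltzmann') action selection: higher-valued arms are chosen
    more often, but lower-valued arms are still explored occasionally."""
    weights = [math.exp(v / temperature) for v in values]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for arm, w in enumerate(weights):
        acc += w
        if r <= acc:
            return arm
    return len(weights) - 1

# Minimal bandit: learn arm values incrementally while balancing
# exploration and exploitation (illustrative parameters).
true_means = [0.3, 0.5, 0.7, 0.4]
values = [0.0] * len(true_means)
alpha = 0.1                                          # learning rate
for _ in range(500):
    arm = softmax_choice(values)
    reward = random.gauss(true_means[arm], 0.1)
    values[arm] += alpha * (reward - values[arm])    # delta-rule update
print([round(v, 2) for v in values])
```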
Article
Collaborative fluency is the coordinated meshing of joint activities between members of a well-synchronized team. In recent years, researchers in human–robot collaboration have been developing robots to work alongside humans aiming not only at task efficiency, but also at human–robot fluency. As part of this effort, we have developed a number of metrics to evaluate the level of fluency in human–robot shared-location teamwork. While these metrics are being used in existing research, there has been no systematic discussion on how to measure fluency and how the commonly used metrics perform and compare. In this paper, we codify subjective and objective human–robot fluency metrics, provide an analytical model for four objective metrics, and assess their dynamics in a turn-taking framework. We also report on a user study linking objective and subjective fluency metrics and survey recent use of these metrics in the literature.
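Objective fluency metrics of this kind can be computed from time-stamped activity intervals; the sketch below derives simple idle-time and concurrent-activity proportions from hypothetical interval data, and is not the paper's exact formulation.

```python
def total_length(intervals):
    return sum(end - start for start, end in intervals)

def overlap(a_intervals, b_intervals):
    """Total time during which two sets of (non-overlapping) intervals coincide."""
    return sum(max(0.0, min(a_end, b_end) - max(a_start, b_start))
               for a_start, a_end in a_intervals
               for b_start, b_end in b_intervals)

def fluency_metrics(human_busy, robot_busy, task_duration):
    """Simple objective fluency proportions from time-stamped activity
    intervals (illustrative definitions; see the cited paper for the exact
    metric formulations and their analytical model)."""
    return {
        "human_idle": 1.0 - total_length(human_busy) / task_duration,
        "robot_idle": 1.0 - total_length(robot_busy) / task_duration,
        "concurrent_activity": overlap(human_busy, robot_busy) / task_duration,
    }

human_busy = [(0, 4), (6, 10)]     # seconds during which the human acts
robot_busy = [(2, 7), (9, 12)]     # seconds during which the robot acts
print(fluency_metrics(human_busy, robot_busy, task_duration=12))
```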
Conference Paper
Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
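The flavor of treating trust as a latent variable can be illustrated with a discrete Bayes filter: the robot predicts how its task outcome shifts trust and then corrects the belief with the human's observed response. All probabilities below are made up for illustration and are not the learned trust-POMDP parameters.

```python
import numpy as np

TRUST_LEVELS = ["low", "medium", "high"]       # discretized latent trust

# P(human stays hands-off | trust level) -- made-up observation model.
P_HANDS_OFF = np.array([0.2, 0.6, 0.9])

# P(trust' | trust, task outcome) -- made-up transition models.
T_SUCCESS = np.array([[0.6, 0.4, 0.0],
                      [0.0, 0.6, 0.4],
                      [0.0, 0.1, 0.9]])
T_FAILURE = np.array([[0.9, 0.1, 0.0],
                      [0.5, 0.5, 0.0],
                      [0.1, 0.6, 0.3]])

def update_belief(belief, robot_succeeded, human_hands_off):
    """Bayes filter over latent trust: predict with the transition model for
    the observed task outcome, then correct with the likelihood of the
    human's response. Illustrative numbers, not the learned trust-POMDP."""
    T = T_SUCCESS if robot_succeeded else T_FAILURE
    predicted = belief @ T
    likelihood = P_HANDS_OFF if human_hands_off else 1.0 - P_HANDS_OFF
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])       # uniform prior over trust levels
for succeeded, hands_off in [(True, False), (True, True), (False, True)]:
    belief = update_belief(belief, succeeded, hands_off)
    print(np.round(belief, 3))
```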
Article
Adaptation is critical for effective team collaboration. This paper introduces a computational formalism for mutual adaptation between a robot and a human in collaborative tasks. We propose the Bounded-Memory Adaptation Model, which is a probabilistic finite-state controller that captures human adaptive behaviors under a bounded-memory assumption. We integrate the Bounded-Memory Adaptation Model into a probabilistic decision process, enabling the robot to guide adaptable participants towards a better way of completing the task. Human subject experiments suggest that the proposed formalism improves the effectiveness of human-robot teams in collaborative tasks, when compared with one-way adaptations of the robot to the human, while maintaining the human’s trust in the robot.
Chapter
This paper proposes a new approach to both characterize inter-robot trust in multi-robot systems and adapt trust online in response to the relative performance of the robots. The approach is applied to a multi-robot coverage control scenario, in which a team of robots must spread out over an environment to provide sensing coverage. A decentralized algorithm is designed to control the positions of the robots, while simultaneously adapting their trust weightings. Robots with higher quality sensors take charge of a larger region in the environment, while robots with lower quality sensors have their regions reduced. Using a Lyapunov-type proof, it is proven that the robots converge to locally optimal positions for sensing that are as good as if the robots’ sensor qualities were known beforehand. The algorithm is demonstrated in Matlab simulations.
Chapter
We are interested in enhancing the efficiency of human–robot collaborations, especially in “supervisor-worker” settings where autonomous robots work under the supervision of a human operator. We believe that trust serves a critical role in modeling the interactions within these teams, and also in streamlining their efficiency. We propose an operational formulation of human–robot trust on a short interaction time scale, which is tailored to a practical tele-robotics setting. We also report on a controlled user study that collected interaction data from participants collaborating with an autonomous robot to perform visual navigation tasks. Our analyses quantify key correlations between real-time human–robot trust assessments and diverse factors, including properties of failure events reflecting causal trust attribution, as well as strong influences from each user’s personality. We further construct and optimize a predictive model of users’ trust responses to discrete events, which provides both insights on this fundamental aspect of real-time human–machine interaction, and also has pragmatic significance for designing trust-aware robot agents.
Conference Paper
We present a framework for automatically learning human user models from joint-action demonstrations that enables a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type through the employment of an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user that was not included in the training set, and can compute a policy for the robot that will be aligned to the preference of this user. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework (p
Conference Paper
We present OPTIMo: an Online Probabilistic Trust Inference Model for quantifying the degree of trust that a human supervisor has in an autonomous robot "worker". Represented as a Dynamic Bayesian Network, OPTIMo infers beliefs over the human's moment-to-moment latent trust states, based on the history of observed interaction experiences. A separate model instance is trained on each user's experiences, leading to an interpretable and personalized characterization of that operator's behaviors and attitudes. Using datasets collected from an interaction study with a large group of roboticists, we empirically assess OPTIMo's performance under a broad range of configurations. These evaluation results highlight OPTIMo's advances in both prediction accuracy and responsiveness over several existing trust models. This accurate and near real-time human-robot trust measure makes possible the development of autonomous robots that can adapt their behaviors dynamically, to actively seek greater trust and greater efficiency within future human-robot collaborations.
Article
Past research has investigated a number of methods for coordinating teams of agents, but with the growing number of sources of agents, it is likely that agents will encounter teammates that do not share their coordination methods. Therefore, it is desirable for agents to adapt to these teammates, forming an effective ad hoc team. Past ad hoc teamwork research has focused on cases where the agents do not directly communicate. However when teammates do communicate, it can provide a valuable channel for coordination. Therefore, this paper tackles the problem of communication in ad hoc teams, introducing a minimal version of the multiagent, multiarmed bandit problem with limited communication between the agents. The theoretical results in this paper prove that this problem setting can be solved in polynomial time when the agent knows the set of possible teammates. Furthermore, the empirical results show that an agent can cooperate with a variety of teammates following unknown behaviors even when its models of these teammates are imperfect.
Conference Paper
On typical multi-robot teams, there is an implicit assumption that robots can be trusted to effectively perform assigned tasks. The multi-robot patrolling task is an example of a domain that is particularly sensitive to reliability and performance of robots. Yet reliable performance of team members may not always be a valid assumption even within homogeneous teams. For instance, a robot's performance may deteriorate over time or a robot may not estimate tasks correctly. Robots that can identify poorly performing team members as performance deteriorates, can dynamically adjust the task assignment strategy. This paper investigates the use of an observation based trust model for detecting unreliable robot team members. Robots can reason over this model to perform dynamic task reassignment to trusted team members. Experiments were performed in simulation and using a team of indoor robots in a patrolling task to demonstrate both centralized and decentralized approaches to task reassignment. The results clearly demonstrate that the use of a trust model can improve performance in the multi-robot patrolling task.
Article
This paper presents an intention-aware online planning approach for autonomous driving amid many pedestrians. To drive near pedestrians safely, efficiently, and smoothly, autonomous vehicles must estimate unknown pedestrian intentions and hedge against the uncertainty in intention estimates in order to choose actions that are effective and robust. A key feature of our approach is to use the partially observable Markov decision process (POMDP) for systematic, robust decision making under uncertainty. Although there are concerns about the potentially high computational complexity of POMDP planning, experiments show that our POMDP-based planner runs in near real time, at 3 Hz, on a robot golf cart in a complex, dynamic environment. This indicates that POMDP planning is improving fast in computational efficiency and becoming increasingly practical as a tool for robot planning under uncertainty.
Article
We present POMCoP, a system for online planning in collaborative domains that reasons about how its actions will affect its understanding of human intentions, and demonstrate its use in building sidekicks for cooperative games. POMCoP plans in belief space. It explicitly represents its uncertainty about the intentions of its human ally, and plans actions which reveal those intentions or hedge against its uncertainty. This allows POMCoP to reason about the usefulness of incorporating information gathering actions into its plans, such as asking questions, or simply waiting to let humans reveal their intentions. We demonstrate POMCoP by constructing a sidekick for a cooperative pursuit game, and evaluate its effectiveness relative to MDP-based techniques that plan in state space, rather than belief space.
Conference Paper
A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot's motion must be legible, or intent-expressive. Legibility is often described in the literature as an effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing the analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on optimizing cost, drawing the analogy to the principle of rational action. Our experiments validate our formalism's prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion.
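The inference-based view of legibility can be sketched with a cost-based goal posterior in the spirit of the principle of rational action: an observer watching a partial trajectory assigns higher probability to goals for which the motion so far looks near-optimal. The geometry, costs, and rationality constant below are illustrative, not the paper's formulation.

```python
import math

def path_cost(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def goal_posterior(partial_traj, start, goals, rationality=5.0):
    """Observer's inference P(goal | trajectory so far), using a cost-based
    likelihood: trajectories that are close to optimal for a goal make that
    goal more probable. Illustrative model only."""
    current = partial_traj[-1]
    scores = []
    for g in goals:
        cost_so_far = path_cost([start] + list(partial_traj))
        cost_to_go = math.dist(current, g)
        optimal = math.dist(start, g)
        # Exponentiated "wasted effort" relative to the optimal path to g.
        scores.append(math.exp(-rationality * (cost_so_far + cost_to_go - optimal)))
    total = sum(scores)
    return [s / total for s in scores]

start, goals = (0.0, 0.0), [(2.0, 1.0), (2.0, -1.0)]
# An exaggerated arc toward the upper goal disambiguates the goal earlier
# (more legible) than the straight, predictable path toward it.
legible_prefix = [(0.5, 0.6), (1.0, 0.9)]
predictable_prefix = [(0.5, 0.25), (1.0, 0.5)]
print([round(p, 2) for p in goal_posterior(legible_prefix, start, goals)])
print([round(p, 2) for p in goal_posterior(predictable_prefix, start, goals)])
```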
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Conference Paper
The assistant interface metaphor has the potential to shield the human user from low-level, task-specific details, while allowing the automation of the many idiosyncratic, mundane tasks falling between the capabilities of commercial software packages. However, a user will not willingly put resources (money, privacy, information) at risk unless the assistant can be trusted to carry out the task in accord with the user's goals and priorities. This risk is significant, because assistant behaviors, being idiosyncratic and highly customized, will not be as well supported or documented as is commercial software. This paper describes a solution to this problem, allowing the assistant to safely execute partially trusted behaviors and to interactively increase the user's trust in the behavior so that more of the steps can be carried out autonomously. The approach is independent of how the behavior was acquired and is based on using incremental formal validation to populate a trust library for the behavior
Modeling Trust to Improve Human-Robot Interaction
  • Munjal Desai
Intention-aware online POMDP planning for autonomous driving in a crowd
  • H Y Bai
  • S J Cai
  • D Hsu
  • W S Lee
Trust-guided behavior adaptation using case-based reasoning
  • Michael W Floyd
  • Michael Drinkwater
  • David W Aha
Intention-aware motion planning
  • Tirthankar Bandyopadhyay
  • Kok Sung Won
  • Emilio Frazzoli
  • David Hsu