Conference Paper

Expressing Robot Incapability

Abstract

Our goal is to enable robots to express their incapability, and to do so in a way that communicates both what they are trying to accomplish and why they are unable to accomplish it. We frame this as a trajectory optimization problem: maximize the similarity between the motion expressing incapability and what would amount to successful task execution, while obeying the physical limits of the robot. We introduce and evaluate candidate similarity measures, and show that one in particular generalizes to a range of tasks, while producing expressive motions that are tailored to each task. Our user study supports that our approach automatically generates motions expressing incapability that communicate both what and why to end-users, and improve their overall perception of the robot and willingness to collaborate with it in the future.
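To make the framing concrete, below is a minimal sketch of how such a trajectory optimization could be posed: find the motion that stays as close as possible to the trajectory that would have completed the task, while respecting the robot's physical limits. The cost function, the toy joint limits, and names such as q_success are illustrative assumptions, not the similarity measures defined in the paper.

import numpy as np
from scipy.optimize import minimize

# Toy setup: 20 waypoints for a 2-DOF arm; the "successful" reference trajectory
# leaves the reachable range, so the task is infeasible by construction.
T, DOF = 20, 2
q_success = np.linspace(0.0, 2.0, T)[:, None] * np.ones((1, DOF))
joint_lower, joint_upper = -1.0, 1.5   # physical limits the expressive motion must obey

def dissimilarity(q_flat):
    # Sum of squared deviations from the successful-task trajectory.
    q = q_flat.reshape(T, DOF)
    return np.sum((q - q_success) ** 2)

bounds = [(joint_lower, joint_upper)] * (T * DOF)       # enforce the physical limits
q0 = np.clip(q_success, joint_lower, joint_upper).ravel()
result = minimize(dissimilarity, q0, bounds=bounds)
q_expressive = result.x.reshape(T, DOF)                 # motion that "tries" but falls short

Under this toy quadratic cost the optimum simply clips the reference trajectory at the joint limits; the paper's contribution lies in choosing similarity measures that make the resulting motion legible to observers rather than merely feasible.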
... One of the ways that we can improve human-robot interaction is by expanding the communication mechanisms available for exchanging information between humans and robots. Prior research has explored a variety of communicative channels, including gaze (Mutlu, 2006; Andrist et al., 2012; Andrist et al., 2014; Admoni, 2016; Oliveira et al., 2018), implicit motion (Dragan et al., 2013; Szafir et al., 2014; Sadigh et al., 2016; Zhou et al., 2017; Kwon et al., 2018), gesture (Waldherr et al., 2000), sound (Cha and Matarić, 2016; Cha et al., 2018a), visual displays, lights (Szafir et al., 2015; Baraka et al., 2016; Song and Yamada, 2018), haptics (Guerreiro et al., 2019; Guinness et al., 2019), projection (Pierce et al., 2012; Cauchard et al., 2019), and augmented reality (Walker et al., 2018; Cao et al., 2019; Szafir, 2019; Walker et al., 2019). Expandable structures represent a promising new signalling medium to add to this collection of methods for supporting human-robot information exchange, which may take the form of functional cues regarding the robot and task or affective signals that communicate emotional information. ...
Article
In this paper, we survey the emerging design space of expandable structures in robotics, with a focus on how such structures may improve human-robot interactions. We detail various implementation considerations for researchers seeking to integrate such structures in their own work and describe how expandable structures may lead to novel forms of interaction for a variety of different robots and applications, including structures that enable robots to alter their form to augment or gain entirely new capabilities, such as enhancing manipulation or navigation, structures that improve robot safety, structures that enable new forms of communication, and structures for robot swarms that enable the swarm to change shape both individually and collectively. To illustrate how these considerations may be operationalized, we also present three case studies from our own research in expandable structure robots, sharing our design process and our findings regarding how such structures enable robots to produce novel behaviors that may capture human attention, convey information, mimic emotion, and provide new types of dynamic affordances.
... In contrast, interaction failures arise from uncertainties in the robot's interaction with the environment, other agents, and humans [2]. In the human-robot interaction (HRI) domain, studies have examined types of failures and how they affect human users [6, 21-28]. Most investigated failures are technical failures, and only a few studies looked at interaction-related failures [2]. ...
Article
In three laboratory experiments, we examine the impact of personally relevant failures (PeRFs) on users' perceptions of a collaborative robot. PeR is determined by how much a specific issue applies to a particular person, i.e., how much it affects one's own goals and values. We hypothesized that PeRFs would reduce trust in the robot and the robot's Likeability and Willingness to Use (LWtU) more than failures that are not personal to participants. To achieve PeR in human–robot interaction, we utilized three different manipulation mechanisms: (A) damage to property, (B) financial loss, and (C) first-person versus third-person failure scenarios. In total, 132 participants engaged with a robot in person during a collaborative task of laundry sorting. All three experiments took place in the same experimental environment, carefully designed to simulate a realistic laundry sorting scenario. Results indicate that the impact of PeRFs on perceptions of the robot varied across the studies. In experiments A and B, encounters with PeRFs reduced trust significantly relative to a no-failure session, but only partially for LWtU. In experiment C, the PeR manipulation had no impact. The work highlights challenges and adjustments needed for studying robotic failures in laboratory settings. We show that PeR manipulations affect how users perceive a failing robot. The results raise new questions regarding failure types and how their perceived severity shapes users' perception of the robot. Putting PeR aside, we observed differences in the way users perceive interaction failures (experiment C) compared to how they perceive technical ones (A and B).
Article
Robot Learning from Demonstration (RLfD) allows non-expert users to teach a robot new skills or tasks directly through demonstrations. Although modeled after human-human learning and teaching, existing RLfD methods make robots act as passive observers that give no feedback about their learning status during the demonstration-gathering stage. To facilitate a more transparent teaching process, we propose two mechanisms of Learning Engagement, Z2O-Mode and D2O-Mode, to dynamically adapt robots' attentional and behavioral engagement expressions to their actual learning status. Through an online user experiment with 48 participants, we find that, compared with two baselines, the two kinds of Learning Engagement lead to more accurate user mental models of the robot's learning progress, more positive perceptions of the robot, and a better teaching experience. Finally, we provide implications for leveraging engagement expression to facilitate transparent human-AI (robot) communication based on our key findings.
Article
Robots need to explain their behavior to gain trust. Existing research has focused on explaining a robot's current behavior, but it remains an open and challenging problem to explain past actions in an environment that might change after a robot acts, leaving critical causal information missing because objects have been moved. We conducted an experiment (N=665) investigating how a robot could help participants infer the missing causal information by replaying the past behavior physically, using verbal explanations, and projecting visual information onto the environment. Participants watched videos of the robot replaying its completion of an integrated mobile kitting task. During the replay, the objects were already gone, so participants needed to infer where an object was picked, where a ground obstacle had been, and where the object was placed. Based on the results, we recommend combining physical replay with speech and projection indicators (Replay-Project-Say) to help infer all the missing causal information (picking, navigation, and placement) from the robot's past actions. This condition had the best outcome in both task-based metrics (effectiveness, efficiency, and confidence) and team-based metrics (workload and trust). If one's focus is efficiency, we recommend projection markers for navigation inferences and verbal markers for placement inferences.
Chapter
With the continuous development of intelligent technology, active interaction is emerging in the working modes of smart products. With this change, the concept of human-computer interaction is gradually shifting toward human-computer collaboration. A distinguishing feature of this collaboration is the active role the intelligent system takes during interaction, which makes the design of the intelligent terminal's emotional expression a key factor in the user experience. In this regard, this paper focuses on the use and experiential effects of active interactive feedback in outdoor, indoor, and self-driving service scenarios, as well as the differences in user experience, in terms of action attraction, action empathy, and trust, conveyed by different behavior patterns, ability expressions, and image forms in these scenarios. The analysis summarizes the key factors affecting user experience during the active interaction of intelligent devices and provides a reference for the design of interaction forms for intelligent service robots. Keywords: Human-computer interaction, Human-machine collaboration, Artificial emotional expression, User experience
Article
As development of robots with the ability to self-assess their proficiency for accomplishing tasks continues to grow, metrics are needed to evaluate the characteristics and performance of these robot systems and their interactions with humans. This proficiency-based human-robot interaction (HRI) use case can occur before, during, or after the performance of a task. This paper presents a set of metrics for this use case, driven by a four-stage cyclical interaction flow: 1) robot self-assessment of proficiency (RSA), 2) robot communication of proficiency to the human (RCP), 3) human understanding of proficiency (HUP), and 4) robot perception of the human's intentions, values, and assessments (RPH). This effort leverages work from related fields including explainability, transparency, and introspection, by repurposing metrics under the context of proficiency self-assessment. Considerations of temporal level (a priori, in situ, and post hoc) for the metrics are reviewed, as are the connections between metrics within or across stages in the proficiency-based interaction flow. This paper provides a common framework and language for metrics to enhance the development and measurement of HRI in the field of proficiency self-assessment.
Article
Robots can learn from humans by asking questions. In these questions the robot demonstrates a few different behaviors and asks the human for their favorite. But how should robots choose which questions to ask? Today's robots optimize for informative questions that actively probe the human's preferences as efficiently as possible. But while informative questions make sense from the robot's perspective, human onlookers may find them arbitrary and misleading. For example, consider an assistive robot learning to put away the dishes. Based on your answers to previous questions, this robot knows where it should stack each dish; however, the robot is unsure about the right height to carry these dishes. A robot optimizing only for informative questions focuses purely on this height: it shows trajectories that carry the plates near or far from the table, regardless of whether or not they stack the dishes correctly. As a result, when we see this question, we mistakenly think that the robot is still confused about where to stack the dishes! In this paper we formalize active preference-based learning from the human's perspective. We hypothesize that, from the human's point of view, the robot's questions reveal what the robot has and has not learned. Our insight enables robots to use questions to make their learning process transparent to the human operator. We develop and test a model that robots can leverage to relate the questions they ask to the information these questions reveal. We then introduce a trade-off between informative and revealing questions that considers both human and robot perspectives: a robot that optimizes for this trade-off actively gathers information from the human while simultaneously keeping the human up to date with what it has learned. We evaluate our approach across simulations, online surveys, and in-person user studies. We find that robots which consider the human's point of view learn just as quickly as state-of-the-art baselines while also communicating what they have learned to the human operator. Videos of our user studies and results are available here: https://youtu.be/tC6y_jHN7Vw.
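As a rough illustration of the trade-off described above (not the authors' actual model), one could score each candidate question by a weighted sum of an information-gain term (the robot's perspective) and a revealingness term (the human's perspective). The scoring functions and the weight below are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)
candidate_questions = rng.normal(size=(10, 2))   # toy feature vectors for ten candidate questions

def information_gain(q):
    # Placeholder for the expected reduction in uncertainty about the human's preferences.
    return float(abs(q[0]))

def revealingness(q):
    # Placeholder for how clearly the question shows the human what the robot has learned.
    return float(abs(q[1]))

lam = 0.5   # trade-off weight between informative (robot's view) and revealing (human's view)
scores = [lam * information_gain(q) + (1 - lam) * revealingness(q) for q in candidate_questions]
best_question = candidate_questions[int(np.argmax(scores))]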
Article
When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn't a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple yet effective, without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS, and simulations to account for multiple potential behaviors. The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the simulation results showed a 142 percent difference in the pedestrian's trust in the vehicle's actions between the case where the ICS is enabled and the pedestrian has prior knowledge of the vehicle and the case where the ICS is not enabled and the pedestrian has no prior knowledge of the vehicle.
Article
Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines.
Article
We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. Our aim was to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors, where we detected typical error situations and the resulting social signals of our participants during social human-robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the herein reported user study, we purposefully elicited robot errors to further explore the human interaction partners' social signals following a robot error. Our participants interacted with a human-like NAO, and the robot either performed faultily or error-free. First, the robot asked the participants a set of predefined questions and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence. We also interviewed the participants on their opinion about the interaction. Additionally, we video-coded the social signals participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's rating of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior. In contrast to existing research, we assess dimensions of user experience that have not been considered so far and we analyze the reactions users express when a robot makes a mistake. Our results show that decoding a human's social signals can help the robot understand that there is an error and subsequently react accordingly.
Conference Paper
Our goal is to enable robots to time their motion in a way that is purposefully expressive of their internal states, making them more transparent to people. We start by investigating what types of states motion timing is capable of expressing, focusing on robot manipulation and keeping the path constant while systematically varying the timing. We find that users naturally pick up on certain properties of the robot (like confidence), of the motion (like naturalness), or of the task (like the weight of the object that the robot is carrying). We then conduct a hypothesis-driven experiment to tease out the directions and magnitudes of these effects, and use our findings to develop candidate mathematical models for how users make these inferences from the timing. We find a strong correlation between the models and real user data, suggesting that robots can leverage these models to autonomously optimize the timing of their motion to be expressive.
Conference Paper
In human-robot teams, humans often start with an inaccurate model of the robot capabilities. As they interact with the robot, they infer the robot's capabilities and partially adapt to the robot, i.e., they might change their actions based on the observed outcomes and the robot's actions, without replicating the robot's policy. We present a game-theoretic model of human partial adaptation to the robot, where the human responds to the robot's actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot's capabilities. The robot can then use this model to decide optimally between taking actions that reveal its capabilities to the human and taking the best action given the information that the human currently has. We prove that under certain observability assumptions, the optimal policy can be computed efficiently. We demonstrate through a human subject experiment that the proposed model significantly improves human-robot team performance, compared to policies that assume complete adaptation of the human to the robot.
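A highly simplified sketch of the decision this kind of model poses for the robot (illustrative only, with made-up rewards, not the paper's game-theoretic formulation): at each step the robot weighs the immediate payoff of the best action under the human's current expectations against a revealing action that sacrifices immediate reward but improves the human's model for later rounds.

# Illustrative two-round comparison; all reward values are hypothetical.
def plan_two_rounds(reward_exploit, reward_reveal, reward_after_reveal):
    # Option 1: exploit the human's current (possibly inaccurate) expectations in both rounds.
    value_exploit = 2 * reward_exploit
    # Option 2: reveal capabilities now (lower immediate reward), then benefit from
    # the human's updated expectations in the next round.
    value_reveal = reward_reveal + reward_after_reveal
    return "reveal" if value_reveal > value_exploit else "exploit"

print(plan_two_rounds(reward_exploit=1.0, reward_reveal=0.4, reward_after_reveal=2.0))  # -> "reveal"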
Conference Paper
Perfect memory, strong reasoning abilities, and flawless performance are typical cognitive traits associated with robots. In contrast, forgetting and erroneous reasoning are typical cognitive patterns of humans. This discrepancy may fundamentally affect the way robots and humans interact and collaborate, and it remains little explored today. In this paper, we investigate the effect of differences between erroneous and perfect robots in a competitive scenario in which humans and robots solve reasoning tasks and memorize numbers. Participants are randomly assigned to one of two groups: in the first group they interact with a perfect, flawless robot, while in the second, they interact with a human-like robot with occasional errors and imperfect memory abilities. Participants rated attitude, sympathy, and attributes in a questionnaire, and we measured their task performance. The results show that the erroneous robot triggered more positive emotions but led to lower human performance than the perfect one. Effects of both conditions on groups of students with and without a technical background are reported.