Conference Paper

Influence of Different Explanation Types on Robot-Related Human Factors in Robot Navigation Tasks

Author affiliations:
  • Karl-Franzens-Universität Graz
  • Joanneum Research

References
Article
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model’s decision-making. Thus, the need for eXplainable AI (XAI) methods for improving trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods but there have not been any reviews that have looked at the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area with a case study example. The study starts by explaining the background of XAI, common definitions, and summarizing recently proposed techniques in XAI for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed as XAI concerns. This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022, in reputed journals and using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as towards researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
Article
Recent applications of autonomous agents and robots have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. AI systems based on the connectionist deep learning neural network approach lack capabilities of explaining their decisions and actions to others, despite their great successes. Without symbolic interpretation capabilities, they are ’black boxes’, which renders their choices or actions opaque, making it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are sparse at this point in time. This paper reviews approaches on explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents’ perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intention, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a road map for the possible realization of effective goal-driven explainable agents and robots.
Chapter
This paper presents a user-centered approach to translating techniques and insights from AI explainability research to developing effective explanations of complex issues in other fields, on the example of COVID-19. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: as two specific instances of a more general explainability problem, occurring when people face in-transparent, complex systems and processes whose functioning is not readily observable and understandable to them (“black boxes”). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue regarding people’s perception of COVID-19 vaccine development. The developed prototype demonstrates how AI explainability techniques can be adapted and integrated with methods from communication science, visualization and HCI to be applied to this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that it is possible to effectively apply methods and insights from explainable AI to explainability problems in other fields and support the suitability of our conceptual framework to inform that. In addition, we show how the lessons learned in the process provide new insights for informing further work on user-centered approaches to explainable AI itself.
Conference Paper
Trust is vital for effective human-robot teams. Trust is unstable, however, and it changes over time, with decreases in trust occurring when robots make mistakes. In such cases, certain strategies identified in the human-human literature can be deployed to repair trust, including apologies, denials, explanations, and promises. Whether these strategies work in the human-robot domain, however, remains largely unknown. This is primarily because of the fragmented and dispersed state of the current literature on trust repair in HRI. As a result, this paper brings together studies on trust repair in HRI and presents a more cohesive view of when apologies, denials, explanations, and promises have been seen to repair trust. In doing so, this paper also highlights possible gaps and proposes future work. This contributes to the literature in several ways but primarily provides a starting point for future research and recommendations for studies seeking to determine how trust can be repaired in HRI.
Preprint
People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability approach provides plausible and fruitful ethical standards for recourse. We illustrate by considering the case of diversity constraints on algorithmic recourse. Finally, we discuss the significance and implications of adopting the capability approach for algorithmic recourse research.
Article
Objective: In this review, we investigate the relationship between agent transparency, Situation Awareness, mental workload, and operator performance for safety critical domains. Background: The advancement of highly sophisticated automation across safety critical domains poses a challenge for effective human oversight. Automation transparency is a design principle that could support humans by making the automation's inner workings observable (i.e., "seeing-into"). However, experimental support for this has not been systematically documented to date. Method: Based on the PRISMA method, a broad and systematic search of the literature was performed focusing on identifying empirical research investigating the effect of transparency on central Human Factors variables. Results: Our final sample consisted of 17 experimental studies that investigated transparency in a controlled setting. The studies typically employed three human-automation interaction types: responding to agent-generated proposals, supervisory control of agents, and monitoring only. There is an overall trend in the data pointing towards a beneficial effect of transparency. However, the data reveals variations in Situation Awareness, mental workload, and operator performance for specific tasks, agent-types, and level of integration of transparency information in primary task displays. Conclusion: Our data suggests a promising effect of automation transparency on Situation Awareness and operator performance, without the cost of added mental workload, for instances where humans respond to agent-generated proposals and where humans have a supervisory role. Application: Strategies to improve human performance when interacting with intelligent agents should focus on allowing humans to see into its information processing stages, considering the integration of information in existing Human Machine Interface solutions.
Article
Advanced communication protocols are critical for the coexistence of autonomous robots and humans. Thus, the development of explanatory capabilities in robots is an urgent first step toward realizing autonomous robots. This survey provides an overview of the various types of ‘explainability’ discussed in the machine learning literature. The definition of ‘explainability’ in the context of autonomous robots is then discussed by exploring the question: ‘What is an explanation?’ We further conduct a survey based on this definition and present relevant topics for future research in this paper.
Article
Standstill behavior by a robot is deemed to be ineffective and inefficient to convey a robot's intention to yield priority to another party in spatial interaction. Instead, robots could convey their intention and thus their next action via motion. We developed a back-off (BO) movement to communicate the intention of yielding priority to pedestrians at bottlenecks. To evaluate human sensory perception and subjective legibility, the BO is compared to three other motion strategies in a video study with 167 interviewees at the university and public spaces, where it excels regarding legibility. Implemented in a real encounter, objective motion behavior of 78 participants as a reaction to a stop-and-wait strategy, and two versions of BO (short and long), shows an improvement of the pedestrians' efficiency in the second encounter with the robot's short BO version compared to the stop strategy. Eventually, in the third encounter with all motion strategies, interaction causes only a small time consumption still required by the cognitive process of perceiving an object in the visual field. Hence, the design of kinematic parameters, BO path and time, exhibits the potential to increase the fluency of an interaction with robots at bottlenecks.
Article
The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust and since repairing damaged trust proves to be more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies from an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data for this study were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. After this, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both or neither). Our results indicated that expressing regret was crucial for effective trust repair. After trust declined due to the violation by the agent, trust only significantly recovered when an expression of regret was included in the apology. This effect was stronger when an explanation was added. In this context, the intelligent agent was the most effective in its attempt at rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.
Article
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already done in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss about a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
Conference Paper
A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.
Conference Paper
This abstract presents a preliminary evaluation of the usability of a novel system for cognitive testing, which is based on the multimodal interfaces of the social robot "Pepper" and the IBM cloud AI "Watson". Thirty-six participants experienced the system without assistance and filled in the System Usability Scale questionnaire. Results show that the usability of the system is highly reliable.
Conference Paper
The increasing number of interactions with automated systems has sparked the interest of researchers in trust in automation because it predicts not only whether but also how an operator interacts with an automation. In this work, a theoretical model of trust in automation is established and the development and evaluation of a corresponding questionnaire (Trust in Automation, TiA) are described. Building on the model of organizational trust by Mayer, Davis, and Schoorman (1995) and the theoretical account by Lee and See (2004), a model for trust in automation containing six underlying dimensions was established. Following a deductive approach, an initial set of 57 items was generated. In a first online study, these items were analyzed and based on the criteria item difficulty, standard deviation, item-total correlation, internal consistency, overlap with other items in content, and response quote, 40 items were eliminated and two scales were merged, leaving six scales (Reliability/Competence, Understandability/Predictability, Propensity to Trust, Intention of Developers, Familiarity, and Trust in Automation) containing a total of 19 items. The internal structure of the resulting questionnaire was analyzed in a subsequent second online study by means of an exploratory factor analysis. The results show sufficient preliminary evidence for the proposed factor structure and demonstrate that further pursuit of the model is reasonable but certain revisions may be necessary. The calculated omega coefficients indicated good to excellent reliability for all scales. The results also provide evidence for the questionnaire's criterion validity: Consistent with the expectations, an unreliable automated driving system received lower trust ratings than a reliably functioning system. In a subsequent empirical driving simulator study, trust ratings could predict reliance on an automated driving system and monitoring in the form of gaze behavior. Possible steps for revisions are discussed and recommendations for the application of the questionnaire are given. It has become impossible to evade automation: Thanks to the technological progress made, many functions that were previously carried out by humans can now be fully or partially replaced by machines (Parasuraman, Sheridan, & Wickens, 2000). As a consequence, they are taking over more and more functions in work and leisure environments of all kinds in our day-to-day lives. The resulting increase in the number of interactions with automated systems has sparked the interest of human factors researchers to investigate trust in automation.
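Because the abstract describes a 19-item questionnaire scored over six subscales, a minimal scoring sketch in Python may clarify how such data are typically aggregated; the item-to-subscale assignments and reverse-coded items below are invented placeholders, not the published TiA scoring key.

    # Sketch only: aggregate 1-5 Likert ratings into TiA-style subscale means.
    # Item assignments and reverse-coded items are hypothetical placeholders,
    # not the published Trust in Automation (TiA) scoring key.
    SUBSCALES = {
        "Reliability/Competence": [1, 2, 3, 4],
        "Understandability/Predictability": [5, 6, 7, 8],
        "Propensity to Trust": [9, 10, 11],
        "Intention of Developers": [12, 13, 14],
        "Familiarity": [15, 16],
        "Trust in Automation": [17, 18, 19],
    }
    REVERSED = {2, 7}  # hypothetical reverse-coded items on the 1-5 scale

    def tia_subscale_means(responses):
        """responses: dict mapping item number (1-19) -> rating (1-5)."""
        means = {}
        for scale, items in SUBSCALES.items():
            values = [6 - responses[i] if i in REVERSED else responses[i] for i in items]
            means[scale] = sum(values) / len(values)
        return means

    print(tia_subscale_means({i: 4 for i in range(1, 20)}))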
Article
In the last years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, delineating explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
Conference Paper
In cooperation, the workers must know how co-workers behave. However, an agent's policy, which is embedded in a statistical machine learning model, is hard to understand, and requires much time and knowledge to comprehend. Therefore, it is difficult for people to predict the behavior of machine learning robots, which makes Human Robot Cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method to explain an autonomous agent's future behavior. In IBE, an agent can autonomously acquire the expressions to explain its own behavior by reusing the instructions given by a human expert to accelerate the learning of the agent's policy. IBE also enables a developmental agent, whose policy may change during the cooperation, to explain its own behavior with sufficient time granularity.
Article
As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but also it is a necessary step in the process of building trust as humans migrate greater responsibility to such systems. The challenge is to find effective ways to communicate the foundations of AI-driven behaviour, when the algorithms that drive it are far from transparent to humans. In this paper we consider the opportunities that arise in AI planning, exploiting the model-based representations that form a familiar and common basis for communication with users, while acknowledging the gap between planning algorithms and human problem-solving.
Article
Trust has been identified as a key element for the successful cooperation between humans and robots. However, little research has been directed at understanding trust development in industrial human-robot collaboration (HRC). With industrial robots becoming increasingly integrated into production lines as a means for enhancing productivity and quality, it will not be long before close proximity industrial HRC becomes a viable concept. Since trust is a multidimensional construct and heavily dependent on the context, it is vital to understand how trust develops when shop floor workers interact with industrial robots. To this end, in this study a trust measurement scale suitable for industrial HRC was developed in two phases. In phase one, an exploratory study was conducted to collect participants’ opinions qualitatively. This led to the identification of trust related themes relevant to the industrial context and a related pool of questionnaire items was generated. In the second phase, three human-robot trials were carried out in which the questionnaire items were applied to participants using three different types of industrial robots. The results were statistically analysed to identify the key factors impacting trust and from these generate a trust measurement scale for industrial HRC.
Article
Trust plays a critical role when operating a robotic system in terms of both acceptance and usage. Considering trust is a multidimensional context dependent construct, the differences and common themes were examined to identify critical considerations within human–robot interaction (HRI). In order to examine the role of trust within HRI, a measurement tool was generated based on five attributes: team configuration, team processes, context, task, and system (Yagoda in Human Factors and Ergonomics Society Annual Meeting, San Francisco, CA, pp. 304–308, 2010). The HRI trust scale was developed based on two studies. The first study conducts a content validity assessment of preliminary items generated, based on a review of previous research within HRI and automation, using subject matter experts (SMEs). The second study assesses the quality of each trust scale item derived from the first study. The results were then compiled to generate the HRI trust measurement tool.
Article
The purpose of this article is to offer a conceptualization of rapport that has utility for identifying the nonverbal correlates associated with rapport. We describe the nature of rapport in terms of a dynamic structure of three interrelating components: mutual attentiveness, positivity, and coordination. We propose that the relative weighting of these components in the experience of rapport changes over the course of a developing relationship between individuals. In early interactions, positivity and attentiveness are more heavily weighted than coordination, whereas in later interactions, coordination and attentiveness are the more heavily weighted components. Because of the gestalt nature of the experience of rapport, it is not easy to identify nonverbal behavioral correlates of the components. We discuss two approaches to nonverbal measurement, molecular and molar, along with recommendations for their appropriate application in the study of rapport at different stages of an interpersonal relationship. We present a meta-analytic study that demonstrates the effect of nonverbal behavior, measured at the molecular level, on the positivity component of rapport, and we conclude with an outline of hypotheses relevant to the investigation of the nonverbal correlates of rapport.
Article
Usability does not exist in any absolute sense; it can only be defined with reference to particular contexts. This, in turn, means that there are no absolute measures of usability, since, if the usability of an artefact is defined by the context in which that artefact is used, measures of usability must of necessity be defined by that context too. Despite this, there is a need for broad general measures which can be used to compare usability across a range of contexts. In addition, there is a need for "quick and dirty" methods to allow low-cost assessments of usability in industrial systems evaluation. This chapter describes the System Usability Scale (SUS), a reliable, low-cost usability scale that can be used for global assessments of systems usability.
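As an illustration of the commonly published SUS scoring rule (ten items rated 1-5; odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score), here is a minimal sketch in Python, not taken from the chapter itself:

    def sus_score(ratings):
        """Compute a System Usability Scale score from ten ratings on a 1-5 scale.

        Follows the standard scoring rule: odd-numbered items contribute
        (rating - 1), even-numbered items contribute (5 - rating), and the
        summed contributions are multiplied by 2.5 to yield a 0-100 score.
        """
        if len(ratings) != 10:
            raise ValueError("SUS requires exactly 10 item ratings")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)  # index 0 corresponds to item 1
            for i, r in enumerate(ratings)
        ]
        return 2.5 * sum(contributions)

    # Example with invented ratings: alternating agree/disagree pattern.
    print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 4, 1]))  # 95.0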
Article
This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary.
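Since the abstract mentions reporting reliability indicators for the five questionnaires, a short sketch of one common internal-consistency index, Cronbach's alpha, may be helpful; the ratings below are invented and the code is not from the cited work.

    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for a (respondents x items) array of ratings.

        alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
        """
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Invented 5-point semantic-differential ratings for one Godspeed-style scale
    # (rows: respondents, columns: items).
    ratings = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
    ]
    print(round(cronbach_alpha(ratings), 2))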
Article
This paper describes recent research in subjective usability measurement at IBM. The focus of the research was the application of psychometric methods to the development and evaluation of questionnaires that measure user satisfaction with system usability. The primary goals of this paper are to (1) discuss the psychometric characteristics of four IBM questionnaires that measure user satisfaction with computer system usability, and (2) provide the questionnaires, with administration and scoring instructions. Usability practitioners can use these questionnaires with confidence to help them measure users' satisfaction with the usability of computer systems.
Article
We present a cross-cultural user evaluation of an organization-based product recommender interface, by comparing it with the traditional list view. The results show that it performed significantly better, for all study participants, in improving on their competence perceptions, including perceived recommendation quality, perceived ease of use and perceived usefulness, and positively impacting users' behavioral intentions such as intention to save effort in the next visit. Additionally, oriental users were observed reacting significantly more strongly to the organization interface regarding some subjective aspects, compared to western subjects. Through this user study, we also identified the dominating role of the recommender system's decision-aiding competence in stimulating both oriental and western users' return intention to an e-commerce website where the system is applied.
Article
Human users who execute an automatically generated plan want to understand the rationale behind it. Knowledge-rich plans are particularly suitable for this purpose, because they provide the means to give reason for causal, temporal, and hierarchical relationships between actions. Based on this information, focused arguments can be generated that constitute explanations on an appropriate level of abstraction. In this paper, we present a formal approach to plan explanation. Information about plans is represented as first-order logic formulae and explanations are constructed as proofs in the resulting axiomatic system. With that, plan explanations are provably correct w.r.t. the planning system that produced the plan. A prototype plan explanation system implements our approach and first experiments give evidence that finding plan explanations is feasible in real-time.
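To give a flavor of how plan knowledge might be encoded as first-order formulae from which explanations are read off as proofs, here is a simplified causal-link style sketch; the predicate names and axiom are illustrative assumptions, not the axiomatization used in the cited paper.

    % Illustrative only; not the formalism of the cited paper.
    \forall a_1\, a_2\, \varphi.\;
      \mathit{causalLink}(a_1, a_2, \varphi) \rightarrow
        \mathit{adds}(a_1, \varphi) \land \mathit{precond}(a_2, \varphi) \land \mathit{before}(a_1, a_2)

From a recorded fact such as causalLink(pickUp(key), unlock(door), has(key)), a proof of why pickUp(key) belongs in the plan follows directly from the axiom: it establishes has(key), a precondition of unlock(door), and is ordered before it.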
Article
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for “explicable”, “legible”, “predictable” and “transparent” planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on “security” and “privacy” of plans which are also trying to answer the same question, but from the opposite point of view – i.e. when the agent is trying to hide instead of reveal its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.
Article
Collective robotic systems are biologically inspired and advantageous due to their apparent global intelligence and emergent behaviors. Many applications can benefit from the incorporation of collectives, including environmental monitoring, disaster response missions, and infrastructure support. Transparency research has primarily focused on how the design of the models, visualizations, and control mechanisms influence human-collective interactions. Traditionally most transparency research has evaluated one system design element. This article analyzed two models and visualizations to understand how the system design elements impacted human-collective interactions, to quantify which model and visualization combination provided the best transparency, and provide design guidance, based on remote supervision of collectives. The consensus decision-making and baseline models, as well as an individual collective entity and abstract visualizations, were analyzed for sequential best-of-n decision-making tasks involving four collectives, composed of 200 entities each. Both models and visualizations provided transparency and influenced human-collective interactions differently. No single combination provided the best transparency.
Article
As development of robots with the ability to self-assess their proficiency for accomplishing tasks continues to grow, metrics are needed to evaluate the characteristics and performance of these robot systems and their interactions with humans. This proficiency-based human-robot interaction (HRI) use case can occur before, during, or after the performance of a task. This paper presents a set of metrics for this use case, driven by a four stage cyclical interaction flow: 1) robot self-assessment of proficiency (RSA), 2) robot communication of proficiency to the human (RCP), 3) human understanding of proficiency (HUP), and 4) robot perception of the human's intentions, values, and assessments (RPH). This effort leverages work from related fields including explainability, transparency, and introspection, by repurposing metrics under the context of proficiency self-assessment. Considerations for temporal level (a priori, in situ, and post hoc) on the metrics are reviewed, as are the connections between metrics within or across stages in the proficiency-based interaction flow. This paper provides a common framework and language for metrics to enhance the development and measurement of HRI in the field of proficiency self-assessment.
Article
Inspired by the role of therapist-patient relationship in fostering behaviour change, agent-human relationship has been an active research area. This trusted relationship could be a result of the agent's behavioural cues or the content it delivers that shows its knowledge. However, the impact of the resulting relationship using the various strategies on behaviour change is understudied. In this paper, we investigate the role of two strategies (empathic and social dialogue and explanation) in building agent-user rapport and whether the level of behaviour change intention is due to the use of empathy or to trusting the agent's understanding and recommendations through explanation. Hence, we designed two versions of a virtual advisor, empathic and neutral, to reduce study stress among university students and measured students' rapport levels and intentions to change their behaviour. Some recommended behaviours had explanations based on the user's beliefs. Our results showed that the agent could build a trusting relationship with the user with the help of the explanation regardless of the level of rapport. The results further showed that nearly all of the recommendations provided by the agent highly significantly increased the intention of the user to change their behavior related to these recommendations. However, we also found that it is important for the agent to obtain and reason about the user's intentions concerning the specific behaviour before recommending a certain behavior change.
Chapter
This work documents a pilot user study evaluating the effectiveness of contrastive, causal and example explanations in supporting human understanding of AI in a hypothetical commonplace human-robot interaction (HRI) scenario. In doing so, this work situates “explainable AI” (XAI) in the context of the social sciences and suggests that HRI explanations are improved when informed by the social sciences.
Chapter
The increasing number of interactions with automated systems has sparked the interest of researchers in trust in automation because it predicts not only whether but also how an operator interacts with an automation. In this work, a theoretical model of trust in automation is established and the development and evaluation of a corresponding questionnaire (Trust in Automation, TiA) are described.
Conference Paper
Human users who execute an automatically generated plan want to understand the rationale behind it. Knowledge-rich plans are particularly suitable for this purpose, because they provide the means to give reason for causal, temporal, and hierarchical relationships between actions. Based on this information, focused arguments can be generated that constitute explanations on an appropriate level of abstraction. In this paper, we present a formal approach to plan explanation. Information about plans is represented as first-order logic formulae and explanations are constructed as proofs in the resulting axiomatic system. With that, plan explanations are provably correct w.r.t. the planning system that produced the plan. A prototype plan explanation system implements our approach and first experiments give evidence that finding plan explanations is feasible in real-time.
Conference Paper
This paper examines 29 papers that have proposed or applied metrics for human-robot interaction. The 42 metrics are categorized as to the object being directly measured: the human (7), the robot (6), or the system (29). Systems metrics are further subdivided into productivity, efficiency, reliability, safety, and coactivity. While 42 seems to be a large set, many metrics do not have a functional, or generalizable, mechanism for measuring that feature. In practice, metrics for system interactions are often inferred through observations of the robot or the human, introducing noise and error in analysis. The metrics do not completely capture the impact of autonomy on HRI as they typically focus on the agents, not the capabilities. As a result the current metrics are not helpful for determining what autonomous capabilities and interactions are appropriate for what tasks.
Article
A study was designed to examine the relationship between rapport and trust in a service context, and to gain a better understanding of the factors that contribute to these relational constructs. Consistent with the literature, both expertise and dependability were directly related to trust of the service provider. However, familiarity, which had previously been found to be directly related to trust, was found to be indirectly related through rapport, which was related to trust. All four hypotheses examining the antecedents of rapport were supported. The research suggests that building rapport may be an important intermediate step in building customer trust.
Article
Amartya Sen’s writings have articulated the importance of human agency, and identified the need for information on agency freedom to inform our evaluation of social arrangements. Many approaches to poverty reduction stress the need for empowerment. This paper reviews subjective quantitative measures of human agency at the individual level. It introduces large-scale cross-cultural psychological studies of self-direction, of autonomy, of self-efficacy, and of self-determination. Such studies and approaches have largely developed along an independent academic path from economic development and poverty reduction literature, yet may be quite significant in crafting appropriate indicators of individual empowerment or human agency. The purpose of this paper is to note avenues of collaborative enquiry that might be fruitful to develop.
H. Schuff, H. Adel, P. Qi, and N. T. Vu, "Challenges in explanation quality evaluation," 2022. [Online]. Available: https://arxiv.org/pdf/2210.07126
A. Abdulrahman, D. Richards, and A. A. Bilgin, "Reason explanation for encouraging behaviour change intention," in Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, ser. AAMAS '21. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2021, pp. 68-77.
Das, "Explainable AI for robot failures: Generating explanations that improve user assistance in fault recovery."