
Abstract

One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to effectively measure trust. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A 3-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed to better understand similarities and differences in the concepts of trust and distrust, and among the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
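To make the proposed instrument concrete, the sketch below (not part of the original paper) shows how a 12-item, seven-point checklist of this kind is typically scored: distrust-worded items are reverse-keyed and all items are averaged into a single trust score. The item count comes from the abstract; the specific reverse-scored indices and the example ratings are illustrative assumptions.

```python
# Minimal sketch: scoring a 12-item, 7-point trust-in-automation checklist.
# The reverse-scored item indices are an assumption for illustration only;
# consult the published scale for the actual item wording and keying.

def score_trust(responses, reverse_items=(0, 1, 2, 3, 4), scale_max=7):
    """Return a mean trust score from 12 Likert ratings (1..scale_max).

    responses     -- list of 12 integers, one per item
    reverse_items -- indices of distrust-worded items to reverse-score
    """
    if len(responses) != 12:
        raise ValueError("expected 12 item responses")
    keyed = [
        (scale_max + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    ]
    return sum(keyed) / len(keyed)


if __name__ == "__main__":
    ratings = [2, 1, 3, 2, 2, 6, 5, 6, 7, 5, 6, 6]  # hypothetical participant
    print(f"overall trust: {score_trust(ratings):.2f}")
```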
... Ultimately, the most direct way to measure trust in ADS is by asking participants via a standardized or adapted scale. An often-used scale is the "Trust in Automation Scale" proposed by Jian et al. (2000), which classified trust into competency, reliability, predictability, and faith. Cramer et al. (2008) proposed a scale containing 12 items, six of which were adapted from Jian et al. (2000). On the basis of these, Holthausen et al. (2020) developed the "Situational Trust Scale for Automated Driving", which incorporates important elements of Hoff and Bashir (2015)'s model, such as the driving context's potential risks and benefits, or the driver's self-efficacy for operating an automated vehicle. ...
Article
Full-text available
System reliability promotes trust, but it may also impair human monitoring performance and in turn affect trust, and this effect varies across error types. This study examined the effects of automation reliability (100%, 75%, and 50%), its framing (negative versus positive description of reliability), and error bias (false alarm versus miss) on user trust and related factors in an automated driving system (ADS). Each participant completed 16 trials of a human-vehicle collaboration task in a static driving simulator. The results showed that an ADS with higher reliability positively affected user trust but negatively affected situation awareness. Users' trust was higher in false alarm (FA) events than in miss events, whereas task success and situation awareness were higher in miss events. This study revealed an unusual negative correlation between trust and situation awareness in human-vehicle collaboration and offers insight into the factors underlying error bias in automation. Our findings have implications for reliability disclosure strategies and trust calibration.
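To make the error-bias manipulation concrete, the sketch below (ours, not the authors') crosses the ADS alert against the true hazard state in the standard signal-detection way — hits, misses, false alarms, and correct rejections — and tallies a simple empirical reliability. The trial data are invented for illustration.

```python
# Sketch: classifying automation outcomes by error type (signal-detection style).
# Trial data below are invented; the study's actual trial structure and
# reliability levels (100%, 75%, 50%) are described in the abstract.

def classify(alert: bool, hazard: bool) -> str:
    if alert and hazard:
        return "hit"
    if alert and not hazard:
        return "false_alarm"
    if not alert and hazard:
        return "miss"
    return "correct_rejection"


trials = [(True, True), (True, False), (False, True), (False, False),
          (True, True), (False, True), (True, True), (False, False)]

counts = {}
for alert, hazard in trials:
    label = classify(alert, hazard)
    counts[label] = counts.get(label, 0) + 1

correct = counts.get("hit", 0) + counts.get("correct_rejection", 0)
print(counts, f"reliability = {correct / len(trials):.0%}")
```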
... Human-machine trust (HMT): Human-machine trust was assessed using a six-item scale informed by the works of McAllister [47] and Jian [48]. This variable is divided into two dimensions: cognitive trust and affective trust. ...
Article
Full-text available
Intelligent and virtualized transformation of the enterprise requires employees to undertake both technological breakthroughs and in-depth development, two innovative activities that appear to compete for resources. How employees, guided by algorithmically integrated decision-making, can draw on vast amounts of information and technology to catalyze creativity along this path is not yet known. Based on survey data from 198 corporate innovators, this study examines the mechanisms through which human-machine collaborative decision-making enables dualistic innovation. The study finds that human-machine collaborative decision-making positively promotes both exploratory and exploitative innovation; that it stimulates both types of dualistic innovation by enhancing employees' human-machine trust; and that corporate innovation culture positively moderates the direct effect of human-machine collaborative decision-making on dualistic innovation. The findings broaden the application of human-machine cooperation theory in the field of management and provide new ideas for enterprises to accurately identify employees' attitudes and tendencies toward human-machine cooperation and to formulate targeted strategies that stimulate dualistic innovation.
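The mediation claim above (human-machine trust carrying part of the effect of collaborative decision-making on innovation) can be sketched with three ordinary least-squares regressions in the classic Baron-and-Kenny style. The variable names and simulated data below are assumptions for illustration, not the study's materials or analysis code.

```python
# Sketch: simple mediation check (collaboration -> trust -> innovation)
# using three OLS regressions. Data and variable names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 198  # same sample size as the study; the data themselves are simulated
collab = rng.normal(size=n)
trust = 0.5 * collab + rng.normal(size=n)           # hypothetical mediator
innovation = 0.3 * collab + 0.4 * trust + rng.normal(size=n)
df = pd.DataFrame({"collab": collab, "trust": trust, "innovation": innovation})

total = smf.ols("innovation ~ collab", df).fit()             # total effect
a_path = smf.ols("trust ~ collab", df).fit()                 # a path
b_direct = smf.ols("innovation ~ collab + trust", df).fit()  # b path + direct effect

print("total effect:", round(total.params["collab"], 3))
print("indirect (a*b):",
      round(a_path.params["collab"] * b_direct.params["trust"], 3))
print("direct effect:", round(b_direct.params["collab"], 3))
```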
Article
The increasing popularity of Artificial Intelligence (AI) emphasizes the need for a deeper understanding of the factors that can influence its use. Providing accurate explanations may be a critical factor in establishing trust and increasing the likelihood that an operator follows an AI agent’s recommendations. However, the impact of explanations on human performance, trust, mental workload (MW), situation awareness (SA), and confidence is not fully understood. In this paper, we investigate the effects of providing explanations about the agent’s recommendations (at varying levels of accuracy) on the human operator’s performance, trust, MW, SA, and confidence, compared with an agent that provides no explanations. Thirty individuals were divided into three groups, each randomly assigned to an agent accuracy level (high, medium, low). Each participant completed two sessions with the AI agent: one with explanations and one without. In each session, performance was measured objectively (number of anomalies correctly diagnosed and time to diagnosis), while trust, MW, SA, and confidence in the response were measured using surveys. Participants’ performance, trust, SA, and confidence were higher in the sessions in which the agent provided explanations for its recommendations. Accuracy had a significant effect on performance, MW, and user confidence, but not on trust or SA.
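One plausible way to analyze such a 3 (agent accuracy, between-subjects) x 2 (explanation, within-subjects) design is a mixed ANOVA; the sketch below uses the pingouin package on synthetic long-format data. The column names, effect sizes, and tooling are assumptions, not the authors' analysis.

```python
# Sketch: mixed ANOVA for a 3 (accuracy: between) x 2 (explanation: within) design.
# Synthetic long-format data; pingouin is one convenient option for this analysis.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for pid in range(30):
    accuracy = ["high", "medium", "low"][pid % 3]
    for session in ["explained", "unexplained"]:
        trust = rng.normal(loc=5.0 + (0.6 if session == "explained" else 0.0))
        rows.append({"id": pid, "accuracy": accuracy,
                     "session": session, "trust": trust})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="trust", within="session",
                     between="accuracy", subject="id")
print(aov[["Source", "F", "p-unc"]])
```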
Article
Full-text available
Existing research on human–automated vehicle (AV) interaction has largely focused on auditory explanations, with less attention to how voice characteristics shape user trust. This paper explores the influence of gender similarity between users and AV voices, and the role of gender-role congruity rooted in societal stereotypes, on cognitive and affective trust in AVs. Findings reveal that gender-role congruity moderates the relationship between gender similarity and trust. When an AV’s voice gender aligns with its expected role, gender similarity enhances cognitive and affective trust; when gender roles are not congruent, the trust-enhancing effect of gender similarity diminishes. These findings highlight the importance of considering gender in AV voice design for conveying critical driving information and reveal how societal stereotypes shape AV design. The study offers insights for enhancing user trust in and acceptance of AV technology, and suggests future research directions for developing AV systems that avoid reinforcing social biases.
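The moderation result described above is, statistically, an interaction effect; a minimal sketch with a regression containing a similarity-by-congruity interaction term is shown below. Variable names, coding, and data are invented for illustration.

```python
# Sketch: testing moderation via an interaction term
# (gender similarity x gender-role congruity on trust). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
similarity = rng.integers(0, 2, n)    # 1 = user and AV voice share gender
congruity = rng.integers(0, 2, n)     # 1 = voice gender matches expected role
trust = (4.0 + 0.1 * similarity + 0.2 * congruity
         + 0.5 * similarity * congruity + rng.normal(scale=0.8, size=n))
df = pd.DataFrame({"similarity": similarity, "congruity": congruity,
                   "trust": trust})

model = smf.ols("trust ~ similarity * congruity", df).fit()
print(model.summary().tables[1])  # the interaction term carries the moderation
```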
Preprint
Full-text available
This volume includes a selection of papers presented at the Workshop on Advancing Artificial Intelligence through Theory of Mind, held at AAAI 2025 in Philadelphia, USA, on 3 March 2025. The purpose of this volume is to provide an open-access, curated anthology for the ToM and AI research community.
Article
Background: In-depth studies highlight that trust is essential for effective interaction with autonomous vehicles, which have not yet gained public trust. Objective: The aim of this study was to develop and validate a comprehensive questionnaire to evaluate people's trust in autonomous vehicles. Methods: After identifying factors that influence trust through interviews and brainstorming sessions, 75 questions across 5 dimensions were developed; these were then narrowed to 69 questions in 3 dimensions using a "thinking aloud" procedure. This version was assessed by 24 experts, yielding 19 responses and the exclusion of 22 questions based on the content validity index (CVI) and content validity ratio (CVR). Finally, reliability was determined using Cronbach's alpha after an experiment with 24 participants. Results: A 47-item questionnaire with 3 dimensions (personal, social, and technical factors) and 21 sub-dimensions was developed. The lowest CVI (0.63) was for "mental complexity" and the highest (0.90) for "personality". The lowest CVR (0.56) was obtained for "meaningfulness attitude", while the highest CVR (1.00) was recorded for 9 questions. The total Cronbach's alpha coefficient (0.93) indicated satisfactory reliability. Conclusions: This validated instrument offers a comprehensive tool for evaluating trust in AVs in future studies.
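For readers unfamiliar with the indices cited, the sketch below computes Lawshe's content validity ratio (CVR), an item-level content validity index (CVI), and Cronbach's alpha from toy data; the panel sizes and ratings are invented and are not the study's data.

```python
# Sketch: the validity/reliability indices used in the questionnaire study.
# Panel ratings and item responses below are invented for illustration.
import numpy as np

def cvr(n_essential, n_panelists):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

def item_cvi(relevance_ratings, threshold=3):
    """Share of panelists rating the item relevant (e.g. 3 or 4 on a 4-point scale)."""
    ratings = np.asarray(relevance_ratings)
    return np.mean(ratings >= threshold)

def cronbach_alpha(items):
    """items: participants x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(cvr(n_essential=15, n_panelists=19))        # about 0.58
print(item_cvi([4, 3, 3, 4, 2, 4, 3, 4, 4, 3]))   # 0.9
scores = np.random.default_rng(3).integers(1, 6, size=(24, 10))
print(round(cronbach_alpha(scores), 2))           # alpha for the toy matrix
```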
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
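The 'trust transfer function' idea can be caricatured as a first-order difference equation in which current trust depends on prior trust, recent system performance, and the presence of a fault. The coefficients and functional form below are illustrative assumptions, not the fitted time-series model from the paper.

```python
# Sketch: a first-order dynamic model of trust, in the spirit of a time-series
# "trust transfer function". Coefficients are illustrative, not fitted values.

def update_trust(prev_trust, performance, fault,
                 decay=0.85, perf_gain=0.15, fault_penalty=1.5):
    """One step of trust dynamics on an arbitrary 0-10 scale."""
    trust = decay * prev_trust + perf_gain * performance - fault_penalty * fault
    return max(0.0, min(10.0, trust))


trust = 5.0
performance = [8, 8, 8, 8, 2, 2, 8, 8, 8, 8]   # dip = transient system fault
faults      = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
for t, (p, f) in enumerate(zip(performance, faults)):
    trust = update_trust(trust, p, f)
    print(f"t={t:2d}  trust={trust:4.1f}")
```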
Article
Full-text available
In this chapter we will examine the development and impact of trust in the context of close relationships. We will begin with a definition of trust and a discussion of its roots in individuals' interpersonal histories. We will go on to explore the development of trust in intimate relationships, emphasizing how its foundations are colored by the seminal experiences that mark different stages of interdependence. We will then consider the various states of trust that can evolve and their consequences for people's emotions and perceptions in established relationships. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
We conducted a classification analysis to identify factors associated with sitting comfort and discomfort. The objective was to investigate the possible multidimensional nature of comfort and discomfort. Descriptors of feelings of comfort and discomfort were solicited from office workers and validated in a questionnaire study. From this study, 43 descriptors emerged. The 42 participants rated the similarity of all 903 pairs of descriptors, and we subjected the resulting similarity matrix to multidimensional scaling, factor analysis, and cluster analysis. Two main factors emerged, which were interpreted as comfort and discomfort. Based on these findings, we postulate a hypothetical model for perception of comfort and discomfort. Comfort and discomfort need to be treated as different and complementary entities in ergonomic investigations.
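The analysis pipeline described (similarity ratings over all 43 x 42 / 2 = 903 descriptor pairs, followed by multidimensional scaling and cluster analysis) can be sketched as follows; the dissimilarity values are random stand-ins rather than the study's ratings.

```python
# Sketch: MDS + hierarchical clustering on a descriptor dissimilarity matrix.
# 43 descriptors -> 43*42/2 = 903 unique pairs, as in the study; data are random.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(4)
n_desc = 43
pair_dissimilarity = rng.uniform(0, 1, size=n_desc * (n_desc - 1) // 2)  # 903 values
dist = squareform(pair_dissimilarity)  # full symmetric 43 x 43 matrix

# Two-dimensional scaling solution (e.g. a comfort axis and a discomfort axis)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Hierarchical clustering of the same matrix
clusters = fcluster(linkage(pair_dissimilarity, method="average"), t=2,
                    criterion="maxclust")
print(coords.shape, np.bincount(clusters))
```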
Article
Full-text available
Two experiments are reported which examined operators' trust in and use of the automation in a simulated supervisory process control task. Tests of the integrated model of human trust in machines proposed by Muir (1994) showed that models of interpersonal trust capture some important aspects of the nature and dynamics of human-machine trust. Results showed that operators' subjective ratings of trust in the automation were based mainly upon their perception of its competence. Trust was significantly reduced by any sign of incompetence in the automation, even one which had no effect on overall system performance. Operators' trust changed very little with experience, with a few notable exceptions. Distrust in one function of an automatic component spread to reduce trust in another function of the same component, but did not generalize to another independent automatic component in the same system, or to other systems. There was a high positive correlation between operators' trust in and use of the automation: operators used automation they trusted and rejected automation they distrusted, preferring to do the control task manually. There was an inverse relationship between trust and monitoring of the automation. These results suggest that operators' subjective ratings of trust, and the properties of the automation which determine their trust, can be used to predict and optimize the dynamic allocation of functions in automated systems.
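A toy illustration of the reported pattern, trust ratings correlating positively with automation use and manual control being chosen when trust falls below self-confidence, might look like the following; all numbers are invented.

```python
# Sketch: relating trust ratings to automation use, and a simple allocation rule
# (use automation when trust exceeds self-confidence). All values are invented.
import numpy as np

trust           = np.array([8.1, 7.5, 3.0, 6.8, 2.2, 9.0, 4.5, 5.9])
self_confidence = np.array([6.0, 6.5, 6.0, 5.5, 7.0, 6.2, 6.8, 5.0])
prop_auto_use   = np.array([0.9, 0.8, 0.2, 0.7, 0.1, 0.95, 0.3, 0.6])

# Positive trust-use association, as described in the abstract
r = np.corrcoef(trust, prop_auto_use)[0, 1]
print(f"trust-use correlation: {r:.2f}")

# Allocation heuristic in the spirit of the trust vs. self-confidence comparison
choice = np.where(trust > self_confidence, "automatic", "manual")
print(list(choice))
```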
Article
The effectiveness and safety of large-scale technological systems of all kinds depend more and more on their command and control. The effectiveness of command and control, in turn, is closely related to perceived trust in them by operators, managers, and society as a whole. This paper examines concepts of trust, both rational and irrational, and both as cause and effect, and suggests some possibilities for quantitative modeling.
Article
Interpersonal trust is an aspect of close relationships which has been virtually ignored in social scientific research despite its importance as perceived by intimate partners and several family theorists. This article describes the development, validation, and correlates of the Dyadic Trust Scale, a tool designed for such research. It is unidimensional, reliable, relatively free from response biases, and purposely designed to be consistent with conceptualizations of trust from various perspectives. Dyadic trust proved to be associated with love and with intimacy of self-disclosure, especially for longer married partners. It varied by level of commitment, being lowest for ex-partners and highest for those engaged and living together, for newlyweds, and for those married over 20 years. Partners reciprocated trust more than either love or depth of self-disclosure. Future research could fruitfully relate dyadic trust to such issues as personal growth in relationships, resolving interpersonal conflict, and developing close relationships subsequent to separation or divorce.
Article
Automation-induced "complacency" has been implicated in accidents involving commercial aircraft and other high-technology systems. A 20-item scale was developed for measuring attitudes toward commonly encountered automated devices that reflect a potential for complacency. Factor analysis of responses (N = 139) to scale items revealed four factors: Confidence-Related, Reliance-Related, Trust-Related, and Safety-Related Complacency. It is proposed that these form components of automation-induced complacency rather than general attitudes toward automation. The internal consistency (r = 37) and test-retest reliability (r = .90) of the factors and the scale as a whole were high. Complacency potential is discussed with respect to interrelations between automation and operator trust in and reliance on automation.
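To show mechanically what the reported factor analysis of a 20-item scale involves, here is a sketch extracting four factors from simulated responses with scikit-learn; the data, loadings, and package choice are assumptions for illustration.

```python
# Sketch: exploratory factor analysis of a 20-item attitude scale (four factors).
# Responses are simulated; the original study factor-analyzed N = 139 responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_respondents, n_items = 139, 20
latent = rng.normal(size=(n_respondents, 4))          # four underlying factors
loadings_true = rng.normal(scale=0.7, size=(4, n_items))
responses = latent @ loadings_true + rng.normal(scale=0.5,
                                                size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(responses)
loadings = fa.components_.T          # items x factors loading matrix
print(loadings.shape)                # (20, 4)
print(np.round(loadings[:5], 2))     # loadings of the first five items
```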
Article
A problem in the design of decision aids is how to design them so that decision makers will trust them and therefore use them appropriately. This problem is approached in this paper by taking models of trust between humans as a starting point, and extending these to the human-machine relationship. A definition and model of human-machine trust are proposed, and the dynamics of trust between humans and machines are examined. Based upon this analysis, recommendations are made for calibrating users' trust in decision aids.
Lerch, F. J., & Prietula, M. J. (1989). How do we trust machine advice? In G. Salvendy & M. J. Smith (Eds.), Designing and using human-computer interface and knowledge based systems (pp. 410–419). Elsevier Science Publishers, North-Holland.