Article

# Identifying factors that influence trust in automated cars and medical diagnosis systems


## Abstract

Our research goals are to understand and model the factors that affect trust in automation across a variety of application domains. For the initial surveys described in this paper, we selected two domains: automotive and medical. Specifically, we focused on driverless cars (e.g., Google Cars) and automated medical diagnoses (e.g., IBM's Watson). There were two dimensions for each survey: the safety criticality of the situation in which the system was being used and name-brand recognizability. We designed the surveys and administered them electronically, using Survey Monkey and Amazon's Mechanical Turk. We then performed statistical analyses of the survey results to discover common factors across the domains, domain-specific factors, and implications of safety criticality and brand recognizability on trust factors. We found commonalities as well as dissimilarities in factors between the two domains, suggesting the possibility of creating a core model of trust that could be modified for individual domains. The results of our research will allow for the creation of design guidelines for autonomous systems that will be better accepted and used by target populations. Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved.

## No full-text available

... Our findings point to a continuum between control and trust that has been highlighted in the literature on trust in other automated systems (e.g. self-driving cars) (Carlson et al., 2014; Heitmeyer and Leonard, 2015; Helldin et al., 2013; Parasuraman and Riley, 1997). As we have depicted in Figure 1, there are self-management and quality of life implications at both ends of the continuum. ...
... Advances in automated technology have created the need to better understand what factors influence trust in automated processes. Human-automation interaction (HAI) research has shown that user trust has implications for how users engage with a system (Carlson et al., 2014; Parasuraman and Riley, 1997). Technology may be imperfect and not able to adapt immediately to any given real-world situation; therefore, technology should instill an appropriate level of trust (Carlson et al., 2014). Too little trust in a system could dissuade those with type 1 diabetes from using CL systems or lead users to employ workarounds and other strategies to "trick" the system or regain some control. ...
Article
Full-text available
Automated closed loop systems will greatly change type 1 diabetes management; user trust will be essential for acceptance of this new technology. This qualitative study explored trust in 32 individuals following a hybrid closed loop trial. Participants described how context-, system-, and person-level factors influenced their trust in the system. Participants attempted to override the system when they lacked trust, while trusting the system decreased self-management burdens and stress. Findings highlight considerations for fostering trust in closed loop systems. Systems may be able to engage users by offering varying levels of controls to match trust preferences.
... You were essentially also trusting that the particular taxi company hired good drivers and maintained their vehicles. You trusted the brand name manufacturer of the taxi, the builders of the road, the state that licensed the driver, the authority that gave him a hack license, the authority which gave an operator license to the cab company, etc. (Carlson et al., 2014). You were also trusting that the person driving the cab did not want to die, did not want the cost of an accident, and did not want to lose his license (and thus his livelihood) by breaking laws (e.g., speeding, drinking while driving, running lights/stop signs, etc.). You were fairly certain that there were police monitoring driver behavior, and that the taxi company would fire bad drivers. ...
... You will need to trust that the manufacturer of the driverless car built an effective system because its investors never would have wanted to sell a system that could make them liable for significant damages. Perhaps you would have confidence in only certain brand name driverless car manufacturers (Carlson et al., 2014). You have to trust that the taxi company never would have purchased a driverless cab unless it functioned as it should. ...
Article
Full-text available
Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the "machine" is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? Is it appropriate to apply the techniques used to establish trust in today's transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. Copyright © 2014, Association for the Advancement of Artificial Intelligence. All rights reserved.
... The US comments in particular show negative responses relating to the restriction of freedom, pointing to the high relevance of consumer autonomy. A second study that focuses on the contexts of automated medical diagnosis systems and automated driving provides descriptive evidence for the important role of prominent brands as a risk-reducing mechanism [13]. Four distinct scenarios of future mobility are developed in a study using a set of experts from car manufacturers, public authorities, scientists, and environmental groups [19]. ...
... Besides the sparse empirical evidence for the risk-reducing effects of strong brands in the context of automated driving [13], the aforementioned brand functions should be positively related to consumer acceptance of automated driving systems. Knowledge and experience of consumers with automated driving technology are marginal. ...
Chapter
Full-text available
The automated, self-driving vehicle is one of the automobile industry’s major ventures in the 21st century, driven by rapid advances in information technology (Brynjolfsson and McAfee in MIT Sloan Management Review 53:53–60, 2012 [10], Burns in Nature 497:181–182, 2013 [11]). Technological innovations in the field of automated driving promise to contribute positively to the financial bottom line of automobile manufacturers (Meseko in J Econ Sustain Dev 5:24–27, 2014 [46]). Their integration as supplementary equipment increases the contribution margin of each car sold. In addition, automated mobility functions lay the foundations for new business models such as elaborated navigation services.
... One needs to understand trust factors to provide guidance to CAV system developers. Carlson et al. identified twenty-nine factors that can compromise a CAV user's trust, and performed statistical analyses for automated vehicle related factors and trust factors related to the safety features of a vehicle [115]. The critical factors identified for users' perceptions of the desirability and reliability of automated cars are: i) level of accuracy of the vehicle's routes; ii) availability of current roadway information (e.g., weather, traffic congestion, and construction) to a vehicle; iii) level of training and prior learning of a vehicle; iv) system failure detection (e.g., making a wrong turn, running a stop light); v) accuracy of the route selection; vi) user's familiarity with the vehicle features; vii) agreement of routes between vehicle and user's knowledge; viii) the vehicle's methods of information collection [115]. ...
Article
Full-text available
Information-aware connected and automated vehicles (CAVs) have drawn great attention in recent years due to their potentially significant positive impacts on roadway safety and operational efficiency. In this paper, we conduct an in-depth review of three basic and key interrelated aspects of a CAV: sensing and communication technologies; human factors; and information-aware controller design. First, the different vehicular sensing and communication technologies and their protocol stacks, to provide reliable information to the information-aware CAV controller, are thoroughly discussed. Diverse human factors, such as user comfort, preferences, and reliability, to design the CAV systems for mass adaptation are also discussed. Then, the different layers of a CAV controller (route planning, driving mode execution, and driving model selection) considering human factors and information through connectivity are reviewed. In addition, the critical challenges for the sensing and communication technologies, human factors, and information-aware controller are identified to support the design of a safe and efficient CAV system while considering user acceptance and comfort. Finally, the promising future research directions of these three aspects are discussed to overcome existing challenges to realize a safe and operationally efficient CAV.
... Previous studies have examined potential benefits associated with the use of AVs, which include traffic and crash reduction, increased fuel efficiency, and personal benefits such as the transformation of commute time into personal time (Bansal & Kockelman, 2017; Fagnant & Kockelman, 2015; Kyriakidis, Happee, & de Winter, 2015). While these studies have shown that the public finds these benefits desirable, they have also revealed significant concerns about the potential safety issues surrounding AVs (Carlson, Desai, Drury, Kwak, & Yanco, 2013; König & Neumayr, 2017). ...
... Unsurprisingly, the literature supports the conclusion that there is a positive relationship between the two. Carlson et al. cite the user's past experience with the car as the tenth most influential factor on a user's trust of AVs (Carlson et al., 2013). Additionally, König and Neumayr found a positive relationship between users' knowledge of AVs and their attitudes toward using AVs. ...
Conference Paper
While previous studies have examined public opinions of autonomous vehicles, there is a distinct gap in the literature concerning opinions of the use of AVs to transport children. This study examined participants’ concerns about and willingness to use AVs to transport children unaccompanied. Specifically, it focused on differences in attitudes between men and women and between parent versus non-parent groups. Significant results were found between these groups, with men and non-parents being more willing to use AVs to transport children unaccompanied. All groups demonstrated a significant hesitance toward such use, with only very small percentages indicating that they definitely would use AVs to transport children unaccompanied.
... Carlson et al. [42] conducted a statistical analysis in the domain of autonomous vehicles and autonomous diagnostic systems. They created an online survey and asked human subjects about various scenarios related to self-driving cars and usage of IBM Watson in critical medical situations (e.g., to determine types of cancer). ...
... The authors expressed that the human subjects would have a high degree of discomfort if they were on a fully autonomous commercial airplane with both pilots just overseeing the movements and controlling the airplane remotely. They also mentioned that the subjects would have a high degree of distrust when only one pilot was in the cockpit. This study also discovered that human trust in autonomous airplanes is related to the culture of humans. ...

| Ref | Key finding | Method | Focus |
|---|---|---|---|
| [41] | Consumers have positive feelings toward the ease of use that comes with self-driving cars | Survey-based | Factors affecting trust in self-driving cars such as gender and income |
| [42] | Past performance, reliability, errors, software and hardware failures will affect trust | Online survey | Impacts of safety, efficiency, and failure rates on trust |
| [43] | Self-driving cars will be popular; however, users are concerned about safety, hacking, and legal issues | Survey-based | The future of self-driving cars and major concerns |
| [44] | Unpredictable hazards are still an issue that needs to be resolved | N/A | Self-driving cars and safety |
| [45] | Level of trust increases if the design and dynamics of SDCs are closer to what they are in regular vehicles | Data collection from an experimental self-driving car | Autopilot personalization |
| [46] | Over-trust is an issue and can potentially cause hazardous situations | Visual channels and 10 screens used to analyze users' reaction time | Educating consumers on the proper time to regain manual control over the vehicle |
| [47] | A model to capture the dynamic variations of human trust | Experiment involving 581 subjects | Dynamic trust and impacts of demographic information |
| [48] | Consumers are willing to pay significantly more for autonomous features | Experiment involving 1260 subjects | Consumer behavior |
Chapter
Full-text available
As a result of the exponential growth in technology and computing in recent years, autonomous systems are becoming more relevant in our daily lives. As these systems evolve and become more complex, the notion of trust in human-autonomy interaction becomes a prominent issue that affects the performance of human-autonomy teaming. Prior studies indicate that humans have low levels of trust in semi- and fully autonomous systems. In this survey, we review a wide range of technical papers and articles and go over the related experimental techniques in the literature of trust. We also explain limitations that are present in existing research works, and discuss open problems in this domain. It's apparent that trust management is critical for the development of future artificial intelligence technologies.
... A second study examines the use of automated technologies in two different contexts, namely automated diagnosis in healthcare and automated driving. The paper provides indications that the provider's brand could play a significant role in reducing perceived risk [13]. In a further study, four different scenarios of future mobility are developed with the participation of experts from vehicle manufacturers, public authorities, academia, and environmental groups [19]. ...
... Since the purchase of a new car has the character of an extensive consumption decision, involves relatively high costs and an extensive information-gathering phase, and the car is still an important status symbol, a strong influence of the brand on purchasing behavior can be assumed. Starting from the first empirical evidence for the risk-reduction function of strong brands in the context of automated driving [13], a positive influence of strong brands on the acceptance of automated driving technologies can be inferred on the basis of the brand functions mentioned. Consumers' knowledge of and experience with automated driving are limited. ...
Chapter
Full-text available
The development of automated, self-driving vehicles, driven by rapid progress in information technology [10], [11], is one of the automobile industry's largest entrepreneurial ventures of the 21st century. Automobile manufacturers and their suppliers expect the technological innovations in the field of automated driving to contribute positively to vehicle profitability [46], since integration as an optional equipment feature increases the profit contribution of each vehicle sold.
... It might also compromise readiness to take over manual control (Moray & Inagaki, 1999). Carlson, Desai, Drury, Kwak, and Yanco (2014) argued that if "people trust systems too much, such as when challenging environmental conditions cause the systems to operate at the edge of their capabilities, users are unlikely to monitor them to the degree necessary" (p. 21). ...
... However, some minimum level of automation trust is necessary so that drivers are willing to activate highly automated driving systems and eventually attend to NDRTs until a takeover is required. When users have low trust in an automated system, they tend to disuse it and are less likely to take full advantage of its capabilities (Carlson et al., 2014;Lee & See, 2004;Sheridan & Parasuraman, 2005). As Brown and Noy (2004) put it, the "extent that an individual driver will allow a device to take over functions will depend on the amount of trust that s/he feels toward it" (p. ...
Article
Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
... Subsequently, automation trust usually recovers to some extent even when failures occur, and ultimately settles at a steady state that more or less reflects the actual reliability of the automation [25]. To develop design guidelines for highly automated driving systems that support appropriate automation trust (for example, by providing additional information when needed) and facilitate correct use of the system, it is necessary to understand drivers' automation trust and the factors that influence it [32]. At present, however, the understanding of relationships between automation trust, automation reliance, and system performance is still lacking [25]. ...
... People find that it is enjoyable to drive using manual systems, as well as fascinating to employ automated vehicles [25]. Looking at aspects of safety, automated cars could decrease the rates of accidents, as self-driving cars have the potential to virtually eliminate accidents due to inattentive drivers [26, 27, 28]. To distinguish authority transitions in automated driving and gain a better understanding of trust levels of automation, different levels of autonomy have been proposed [29]. ...
Research
Full-text available
Examination of the levels and perceptions of trust in automotive automation, and the influences of cultural differences concerning trust and automation, with respect to automated automobiles.
... People find that it is enjoyable to drive using manual systems, as well as fascinating to employ automated vehicles [25]. Looking at aspects of safety, automated cars could decrease the rates of accidents, as self-driving cars have the potential to virtually eliminate accidents due to inattentive drivers [26, 27, 28]. To distinguish authority transitions in ...
Conference Paper
Full-text available
Our work examines the levels and perceptions of trust in automotive automation, and the influences of cultural differences concerning trust and automation, with respect to automated automobiles. We found that the expected style of communication of the drivers in the autonomous automobile showed a great effect on trust levels, both at initial contact and with sustained use. This communication style was dependent upon the client culture's level of context, individualism, and collectivism. Across cultures, the balance of trust levels was found to need to be at moderate levels (neither too high nor too low) to reduce automation misuse, disuse, and abuse. These findings align with the goal to create a positive flow state wherein there are reduced accidents and improved safety and satisfaction with use, across cultures. Future research is needed to assess physiological measures which may be useful to monitor and adapt to the drivers and passengers of automated automobiles.
... They further suggested that neatness of information provision could be a factor for a system to act as an expert, which appears to be related to UI aesthetics. Apart from neatness, brand reputation was found to be associated with a system being perceived as an expert and hence could influence trust further [4]. A study by Stockert et al. [15] demonstrated that there is not only an effect of presenting system uncertainty on trust, but also the style of information presentation. ...
Conference Paper
Acceptance of highly and fully automated vehicles will depend on system trust and the ability to comfortably engage in non-driving related tasks (NDRT). We here hypothesize a potential trade-off between the two. The paper describes the development of two UI concepts based on trust factors derived from the literature and benchmarking of current (concept) UIs to explore the balance between trust and engagement in NDRT. The concepts were inspired by the Valeo Mobius User Interface concept. The level of intrusiveness of trust related information was the key parameter differentiating the two concepts and was manipulated by adopting a single head up display versus a distributed display configuration also including a central console display. A comparative simulator study is underway to explore the balance between trust and comfortable engagement in NDRT and areas of future research are discussed.
... Other related research has focused on the factors that affect trust in a robot [6]. Carlson et al. find that reliability and reputation impact trust in surveys of how people view robots. ...
Article
Robots have the potential to save lives in high-risk situations, such as emergency evacuations. To realize this potential, we must understand how factors such as the robot's performance, the riskiness of the situation, and the evacuee's motivation influence his or her decision to follow a robot. In this paper, we developed a set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation. Participants chose whether or not to use the robot for guidance in each of two separate navigation rounds. The robot performed poorly in two of the three conditions. The participant's decision to use the robot and self-reported trust in the robot served as dependent measures. A 53% drop in self-reported trust was found when the robot performs poorly. Self-reports of trust were strongly correlated with the decision to use the robot for guidance ($\phi(90) = +0.745$). We conclude that a mistake made by a robot will cause a person to have a significantly lower level of trust in it in later interactions.
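The correlation reported above is a phi coefficient, i.e., the Pearson correlation applied to two binary variables (here, self-reported trust and the decision to follow the robot). As a minimal illustration of how such a value is computed, the sketch below uses synthetic responses, not the study's actual data:

```python
import math

def phi_coefficient(x, y):
    """Phi (mean square contingency) coefficient for two binary variables.

    Computed from the 2x2 contingency table of paired observations:
    phi = (n11*n00 - n10*n01) / sqrt(row and column marginal products).
    """
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Synthetic data: 1 = reported trusting / chose to follow the robot, 0 = did not
trust = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
used  = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(phi_coefficient(trust, used), 3))
```

A value near +1 means trust reports and the follow decision almost always agree, matching the strong positive association the study reports.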
... To provide a safe environment for all traffic participants, the car is required to constantly monitor outdoor surroundings to detect anomalies. Other studies have likewise found a reliable and up-to-date system to be a top factor influencing feelings of safety [3]. However, our participants found that simply having a reliable system is not enough: users need to be aware of its capabilities. ...
Conference Paper
Full-text available
In this paper, we present preliminary results of a user research study investigating factors with high potential impact on user experience (UX) of autonomous vehicles (AVs). The study was conducted in Singapore with 29 participants in 10 sessions, each one lasting around 2 hours. Participants conveyed their requirements verbally, as well as visually through sketching. The extensive rounds of discussions revealed an underlying trend of general mistrust towards AVs expressed in three requirement categories concerning: safety, empowerment and interaction style. In response to these requirements, designers need to ensure the vehicle's reliability is "expressed" in users' "language", passengers are allowed to have some decision power during navigation (despite the car being autonomous) and are able to interact with the AV in a flexible, easy and straightforward manner. We believe such design decisions would be beneficial to generate more trust towards AVs and improve the overall UX.
... The range of individual concerns already expressed includes: lack of trust in the capabilities of AVs and their networking (Fraedrich & Lenz 2014); specific risks for crashes (Daziano, Sarrias & Leard 2016) that may be generated by non-AV traffic participants (Bazilinskyy et al. 2015), hacking of the systems (Fagnant & Kockelman 2015), data transfer to third parties, and deprivation from the joy of driving (Fagnant & Kockelman 2015; Kyriakidis et al. 2015). Carlson et al. (2014) report that trust increases with the past performance of the system, with research on reliability or validation of the system, and through the reputation of the designer and manufacturer of the system; still, numerous trust issues remain unresolved and under review (Howard & Dai 2014; Kyriakidis et al. 2015). The classical dilemma of 'who is the AV saving?' in a crash produced by a fallen object or an inattentive cyclist on the road suggests that the public is equally unlikely to leave this decision in the hands of a computer scientist incorporating rules of operation for AVs or of a machine, learning from itself how to drive safely. ...
Article
Full-text available
Autonomous vehicle technology and its potential effects on traffic and daily activities is a popular topic in the media and in the research community. It is anticipated that AVs will reduce accidents, improve congestion, increase the utility of time spent travelling and reduce social exclusion. However, knowledge about the way in which AVs will function in a transport system is still modest and a recent international study showed a lower familiarity with AVs in Australia compared to the USA and UK. Attitudes towards fully automated driving (or higher levels of autonomy) range from ‘excitement’ to ‘suspicion’. The breadth of feelings may be due to the low level of awareness or reflect polarising attitudinal positions. Whilst experts appear to be more confident about the adoption of AV technology in the near future, public acceptance is key to AVs’ market success. Hence, research that examines local contexts and opinions is needed. This paper reviews existing scholarly work and identifies gaps and directions for future developments, with a focus on the Australian context. The review will address the following broad categories: investigation of AV features and mobility models, implications for road traffic and connectivity to infrastructure (especially in low to medium density urban areas), public attitudes and concerns, travel behaviour and demand, potential business models, and policy implications. The aims of the paper are to identify critical issues for the development of a focus group inquiry to understand attitudes of potential users of AVs and to highlight AV development issues for policy makers in Australia.
... A great deal of related research has focused on the factors that affect trust in a robot (Carlson, Desai, Drury, & Yanco, 2014). Carlson et al. find that reliability and reputation impact trust in surveys of how people view robots. ...
Chapter
The word “trust” has many definitions that vary based on context and culture, so asking participants if they trust a robot is not as straightforward as one might think. The perceived risk involved in a scenario and the precise wording of a question can bias the outcome of a study in ways that the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations as well as interactive human-robot simulations. These experimental paradigms have guided our research exploring the meaning of trust, trust loss, and trust repair. By using crowdsourcing to locate and manage experiment participants, considerable diversity of opinion is found; there are, however, several considerations that must be included. Conclusions drawn from these experiments demonstrate the types of biases that participants are prone to as well as techniques for mitigating these biases.
... However, Weinstock et al. [25] also discuss the possibility of automated systems' aesthetics affecting trust and satisfaction more moderately compared to mobile commerce applications and websites. Apart from neatness, Carlson et al. [26] showed that brand reputation is associated with a system being perceived as an expert and could influence trust further. ...
Thesis
Acceptance of highly and fully automated vehicles will depend on trust in the automated system and the ability to comfortably engage in non-driving related tasks (NDRT). A potential trade-off between the two is hypothesized in the research study conducted. Three user interface (UI) concepts were designed and evaluated in a driving simulator. The UI concepts were designed based on factors associated with trust and comfort in NDRT, which were derived from the literature and technical review of current (concept) UIs. The concepts designed were inspired by the Valeo Mobius User Interface concept. The level of intrusiveness (in NDRT) of trust related information was the key parameter differentiating the three concepts and was manipulated by adopting a baseline versus a single head up display versus a distributed display configuration also including a central console display. The configurations differed in terms of information delivering frequency and location of trust related information. The UI concepts were developed and displayed in a driving simulator using a systems engineering approach. A comparative driving simulator study with 30 participants was conducted to explore the balance between trust and comfortable engagement in NDRT. Quantitative, qualitative, and gaze-tracking data were collected. Results suggest there exists a positive correlation between system trust and comfort in engaging in NDRT. Also, the relationship between system trust and comfort in engaging in NDRT was found to be dynamic, suggesting a need for providing different customization options to the user and an adaptive user interface. Based on the results obtained, UI design guidelines and areas of future research are discussed.
... Operators then tend to be vulnerable to monitoring failures (Bagheri & Jamieson, 2004; Bailey & Scerbo, 2007) and tend to exhibit longer reaction times (Beller, Heesen, & Vollrath, 2013; Helldin, Falkman, Riveiro, & Davidsson, 2013) or poorer reaction quality in critical events (McGuirl & Sarter, 2006; de Waard, van der Hulst, Hoedemaeker, & Brookhuis, 1999). Hence, not merely a minimum but an appropriate level of trust is crucial: the operator has to know the capabilities of an automated system and should monitor it adequately when it is close to the limits of its capability (Carlson, Drury, Desai, Kwak, & Yanco, 2014). Otherwise, the consequence is unexpected situations in which the driver may not be able to react in time. ...
Article
Full-text available
Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. Therefore, it constitutes an essential research area for the introduction of automated vehicles to road traffic. In this study, we investigated the influence of trust promoting (Trust promoted group) and trust lowering (Trust lowered group) introductory information on reported trust, reliance behavior and take-over performance. Forty participants encountered three situations in a 17-min highway drive in a conditionally automated vehicle (SAE Level 3). Situation 1 and Situation 3 were non-critical situations where a take-over was optional. Situation 2 represented a critical situation where a take-over was necessary to avoid a collision. A non-driving-related task (NDRT) was presented between the situations to record the allocation of visual attention. Participants reporting a higher trust level spent less time looking at the road or instrument cluster and more time looking at the NDRT. The manipulation of introductory information resulted in medium differences in reported trust and influenced participants’ reliance behavior. Participants of the Trust promoted group looked less at the road or instrument cluster and more at the NDRT. The odds of participants of the Trust promoted group to overrule the automated driving system in the non-critical situations were 3.65 times (Situation 1) to 5 times (Situation 3) higher. In Situation 2, the Trust promoted group's mean take-over time was extended by 1154 ms and the mean minimum time-to-collision was 933 ms shorter. Six participants from the Trust promoted group compared to no participant of the Trust lowered group collided with the obstacle. The results demonstrate that the individual trust level influences how much drivers monitor the environment while performing an NDRT. 
Introductory information influences this trust level, drivers' reliance on an automated driving system, and whether a critical take-over situation can be successfully resolved.
... A survey by Carlson et al. comparing trust in automated cars and medical systems showed that knowledge of the developing corporation had a significant impact on the trust value, taking the mean from below the trust threshold to above it for both vehicle and medical technology. A large name like IBM or Google carries more inherent trust than a small startup [10]. This motivates our investigation of Authority Approval: unifying users' ability to place trust in an authority's seal, such as that of the FAA or NTSB. ...
... Another study presented a survey to understand the factors that affect how people trust automation in driverless cars and medical devices [11]. Key factors included performance statistics, research on the reliability of the autonomous product or service, existence or indication of errors, and possibility of failure, among others. ...
Conference Paper
Full-text available
Autonomous vehicles have been in development for nearly thirty years and recently have begun to operate in real-world, uncontrolled settings. With such advances, more widespread research and evaluation of human interaction with autonomous vehicles (AV) is necessary. Here, we present an interview study of 32 pedestrians who have interacted with Uber AVs. Our findings are focused on understanding and trust of AVs, perceptions of AVs and artificial intelligence, and how the perception of a brand affects these constructs. We found an inherent relationship between favorable perceptions of technology and feelings of trust toward AVs. Trust in AVs was also influenced by a favorable interpretation of the company's brand and facilitated by knowledge about what AV technology is and how it might fit into everyday life. To our knowledge, this paper is the first to surface AV-related interview data from pedestrians in a natural, real-world setting.
... The range of individual concerns already expressed includes: lack of trust in the capabilities of AVs and their networking (Fraedrich & Lenz, 2014); specific crash risks (Daziano et al., 2016) that may be generated by non-AV traffic participants (Bazilinskyy et al., 2015); hacking of the systems (Fagnant & Kockelman, 2015); data transfer to third parties; and deprivation of the joy of driving (Fagnant & Kockelman, 2015; Kyriakidis et al., 2015). Carlson et al. (2014) report that trust increases with the past performance of the system, with research on the reliability or validation of the system, and with the reputation of the system's designer and manufacturer; still, numerous trust issues remain unresolved and under review (Howard & Dai, 2015; Kyriakidis et al., 2015). The classical dilemma of 'who is the AV saving?' in a crash produced by a fallen object or an inattentive cyclist on the road suggests that the public is equally unlikely to leave this decision in the hands of a computer scientist incorporating rules of operation for AVs or of a machine learning by itself how to drive safely. ...
Conference Paper
Full-text available
Autonomous vehicle technology and its potential effects on traffic and daily activities is a popular topic in the media and in the research community. It is anticipated that AVs will reduce accidents, improve congestion, increase the utility of time spent travelling and reduce social exclusion. However, knowledge about the way in which AVs will function in a transport system is still modest and a recent international study showed a lower familiarity with AVs in Australia compared to USA and UK. Attitudes towards fully automated driving (or higher levels of autonomy) range from excitement to suspicion. The breadth of feelings may be due to the low level of awareness or reflect polarising attitudinal positions. Whilst experts appear to be more confident about the adoption of AV technology in the near future, public acceptance is key to AVs’ market success. Hence, research that examines local contexts and opinions is needed. This paper reviews existing scholarly work and identifies gaps and directions for future developments, with a focus on the Australian context. The review will address the following broad categories: investigation of AV features and mobility models, implications for road traffic and connectivity to infrastructure (especially in low to medium density urban areas), public attitudes and concerns, potential business models, and policy implications. The aims of the paper are to identify critical issues for the development of a focus group inquiry to understand attitudes of potential users of AVs and to highlight AV development issues for policy makers in Australia.
... Since the acceptance level of autonomous driving will decide the marketing success of autonomous vehicles, analysis of consumer perceptions of autonomous driving vehicles is extremely important, although present knowledge is sparse (Carlson et al., 2013; Burns, 2013). In this new, innovative industry, not only automobile manufacturers and suppliers will shape further development and leave their footprint, but also technology firms such as Google or ...
Thesis
... One explanation for this is that higher levels of trust are associated with lower monitoring frequencies [67,68]. More specifically, people tend to be less inclined to monitor the performance of a system when they trust that system more. ...
Article
Full-text available
Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving, but are hesitant to relinquish control of the vehicle. Lack of trust seems to be the core reason for these concerns. In order to address this, an intelligent agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearances, the current approach uses anthropomorphism to shape the interaction, applying Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and was perceived as more intelligent, than the graphical user interface. Additionally, an interface that was portrayed as more confident in making decisions scored higher on all four constructs than one that was portrayed as having low confidence. Together, these results indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people's trust in, and, consequently, their acceptance of them.
... A second described "a computer program developed at the prestigious Mayo Clinic, one of the nation's premier medical facilities", but found no consistently beneficial effect of using the aid [23, p. 198]. Carlson and others [39] reported that of 29 potential factors hypothesized to produce trust in an aid, "reputation of the manufacturer" ranked near the bottom. In contrast, "statistics of the machine's past performance" ranked very high in determining patient trust in an aid. ...
Article
Full-text available
Previous research has described physicians’ reluctance to use computerized diagnostic aids (CDAs) but has never experimentally examined the effects of not consulting an aid that was readily available. Experiment 1. Participants read about a diagnosis made either by a physician or an auto mechanic (to control for perceived expertise). Half read that a CDA was available but never actually consulted; no mention of a CDA was made for the remaining half. For the physician, failure to consult the CDA had no significant effect on competence ratings for either the positive or negative outcome. For the auto mechanic, failure to consult the CDA actually increased competence ratings following a negative but not a positive outcome. Negligence judgments were greater for the mechanic than for the physician overall. Experiment 2. Using only a negative outcome, we included 2 different reasons for not consulting the aid and provided accuracy information highlighting the superiority of the CDA over the physician. In neither condition was the physician rated lower than when no aid was mentioned. Ratings were lower when the physician did not trust the CDA and, surprisingly, higher when the physician believed he or she already knew what the CDA would say. Finally, consistent with our previous research, ratings were also high when the physician consulted and then followed the advice of a CDA and low when the CDA was consulted but ignored. Individual differences in numeracy did not qualify these results. Implications for the literature on algorithm aversion and clinical practice are discussed.
... Confidence and risk have been identified as factors [11,61], as have the robot's behavior [6] and appearance [25,32]. Carlson et al. [7] find that reliability and reputation impact trust in surveys of how people view a robot. Hancock et al. [17] performed a meta-analysis over the existing human-robot trust literature, identifying 11 relevant research articles, and found that, for these articles, robot performance is most strongly associated with trust. ...
Article
This article presents a conceptual framework for human-robot trust which uses computational representations inspired by game theory to represent a definition of trust, derived from social psychology. This conceptual framework generates several testable hypotheses related to human-robot trust. This article examines these hypotheses and a series of experiments we have conducted which both provide support for and also conflict with our framework for trust. We also discuss the methodological challenges associated with investigating trust. The article concludes with a description of the important areas for future research on the topic of human-robot trust.
... The list below shows all 18 indicators that respondents had to rate from "strongly agree" to "strongly disagree". The statements are partly based on and adapted from Carlson et al. (2011) and Casley, Jardim & Quartulli (2013). 1. I enjoy driving a car myself. ...
Article
Many experts believe the transport system is about to change dramatically. This change is due to so-called fully-automated vehicles (AVs). However, at present, there are numerous important knowledge gaps that need to be solved for the successful integration of AVs in our transport systems, in particular regarding the impacts of AVs on travel demand. For instance, full automation will enable passengers to perform other, non-driving, related tasks while traveling to their destination. This may substantially change the way in which passengers experience traveling by car, and, in turn, may lead to considerable changes in the so-called Value of Travel Time (VOTT). Many experts anticipate the VOTT to decrease substantially due to AVs. However, the extent to which VOTT will change is currently far from clear. This study aims to develop new insights on the potential impacts of fully automated vehicles on the VOTT for commute trips. To do so, we first look at the existing microeconomics theory on the perceived VOTT and analyze the expected changes accrued from the effect of working and having leisure in an AV. We conclude that the VOTT of a work vehicle should be lower than what is experienced today in a conventional vehicle but the leisure one could stay the same. Then we conduct a stated choice experiment, specifically designed and administered for measuring the VOTT, and analyze these data using discrete choice models (DCMs). In total, we collected data from about 500 respondents. In the experiment, respondents were presented choice tasks consisting of three alternatives: an AV with office interior, an AV with leisure interior, and a conventional car. The same experiment was also given to another sample of respondents but this time describing a chauffeur-driven vehicle. 
Overall, we find the average VOTT for an AV with an office interior (5.50€/h) to be lower than the VOTT for the conventional car (7.47€/h); however, the AV with a leisure interior is not found to decrease the value of time (8.17€/h), which confirms the theoretical results. The VOTT for the chauffeur experiment is systematically lower than for the AV experiment, which we attribute to some distrust that people have regarding AVs.
... Why? There are studies that show that our trust in machines in general, and in computers and robotic systems in particular, depends on their effectiveness rate, that is, on the statistics of their past performance and their capacity to respond appropriately to new situations [44]. Effectiveness also matters when we decide whether to trust other human beings who help us in our decision-making. ...
Article
Full-text available
The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue to be superior to other proposals that have been developed.
... Studies show that participants not only tend to place trust in the engineers who designed the system; trust in the company or brand itself also appears to be of high importance. The level of trust in a self-driving car therefore depends on whether the system was made by a well-known company or by a smaller one (Carlson et al. 2013; Elliott and Yannopoulou 2007). ...
Research
Full-text available
Autonomous cars are considered one of the most promising and advanced technologies nowadays. In order to be accepted by users and market they need to fulfil several criteria, one of which is trust. Trust in autonomous cars has been studied from many perspectives resulting in valuable literature about specific aspects directly affecting trust. However, this field of research is lacking a big picture which combines these different perspectives. One important aspect is algorithm transparency which at least within EU became mandatory with the updated GDPR but has neither been operationalized nor even standardized. By conducting a literature review of relevant papers, this research paper aims at understanding interrelations between different constructs affecting trust in autonomous cars with strong focus on algorithm transparency. We identify various constructs like predictability, theory of mind, anthropomorphism, communication via UI and more as main influencers and eventually provide a first draft of a framework explaining dependencies and operationalizations of constructs with strong influence on trust in autonomous cars.
... A user interface (UI) can be an effective tool in establishing user trust in a software system (Klein and Shortliffe, 1994; Carlson et al., 2014). For our XAI system, we created a user interface (UI) with six major components (see Fig. 1). ...
Preprint
Full-text available
We consider the problem of providing users of deep Reinforcement Learning (RL) based systems with a better understanding of when their output can be trusted. We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation: a graphical depiction of the system's generalization and performance in the current game state, how well the agent would play in semantically similar environments, and a narrative explanation of what the graphical information implies. We created a user interface for our XAI framework and evaluated its efficacy via a human-user experiment. The results demonstrate a statistically significant increase in user trust and acceptance of the AI system with explanation, versus the AI system without explanation.
... Existing research regarding human-machine trust for novice users described the importance of appropriate levels of trust (e.g. Sanchez and Duncan 2009;Higham et al. 2013;Carlson et al. 2014;Schaefer et al. 2016). However, empirical studies on what forms trust and how trust develops are relatively limited. ...
Article
Full-text available
This study aimed to replicate Muir and Moray (1996), which demonstrated that operators' trust in automated machines develops from faith, then dependability, and lastly predictability. Following the procedure of Muir and Moray (1996), we asked undergraduate participants to complete a training program in a simulated pasteurizer plant and an experimental program including various errors in the pasteurizer. Results showed that the best predictor of overall trust was not faith but dependability, and that dependability consistently governed trust throughout the interaction with the pasteurizer. Thus, the obtained data patterns were inconsistent with those reported in Muir and Moray (1996). We observed that operators in the current study used automatic control more frequently than manual control, successfully producing high performance scores, contrary to the operators in Muir and Moray (1996). The results imply that dependability is a critical predictor of human-machine trust, which automation designers may focus on. More extensive future research using more modern automated technologies is necessary for understanding what factors control human-autonomy trust in the modern age. Practitioner Summary: The results suggest that dependability is a key factor that shapes human-machine trust across the time course of trust development. This replication study suggests a new perspective for designing effective human-machine systems for untrained users who do not go through extensive training programs on automated systems.
... For instance, pilots of an Airbus A320 relied so heavily on an autopilot that they eventually were not able to act manually and caused an airplane to crash (Sparaco, 1995). Overtrust can also lead to skill loss or loss of vigilance during monitoring tasks, as discussed in the context of automated cars and medical diagnosis systems (Carlson et al., 2014). Such excessive trust in "intelligent" technology can be seen as a more extreme version of automation bias, that is, the tendency of people to defer to automated technology when presented with conflicting information (Mosier et al., 1992;Wagner et al., 2018). ...
Article
Full-text available
With impressive developments in human–robot interaction it may seem that technology can do anything. Especially in the domain of social robots which suggest to be much more than programmed machines because of their anthropomorphic shape, people may overtrust the robot's actual capabilities and its reliability. This presents a serious problem, especially when personal well-being might be at stake. Hence, insights about the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study [ N = 110] explored the development of overtrust using the example of a pet feeding robot. A 2 × 2 experimental design and repeated measurements contrasted the effect of one's own experience, skill demonstration, and reputation through experience reports of others. The experiment was realized in a video environment where the participants had to imagine they were going on a four-week safari trip and leaving their beloved cat at home, making use of a pet feeding robot. Every day, the participants had to make a choice: go to a day safari without calling options (risk and reward) or make a boring car trip to another village to check if the feeding was successful and activate an emergency call if not (safe and no reward). In parallel to cases of overtrust in other domains (e.g., autopilot), the feeding robot performed flawlessly most of the time until in the fourth week; it performed faultily on three consecutive days, resulting in the cat's death if the participants had decided to go for the day safari on these days. As expected, with repeated positive experience about the robot's reliability on feeding the cat, trust levels rapidly increased and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or only had a temporary effect. 
We integrate these findings in a conceptual model of (over)trust over time and connect these to related psychological concepts such as positivism, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human–human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.
Chapter
The addition of a robot to a human team can be beneficial if the robot can perform important tasks, provide additional skills, or otherwise help the team achieve its goals. However, if the human team members do not trust the robot they may underutilize it or excessively monitor its behavior. We present an algorithm that allows a robot to estimate its trustworthiness based on interactions with a team member and adapt its behavior in an attempt to increase its trustworthiness. The robot is able to learn as it performs behavior adaptation, increasing the efficiency of future adaptations. We compare our approach for inverse trust estimation and behavior adaptation to a variant that does not learn. Our results, in a simulated robotics environment, show that both approaches can identify trustworthy behaviors but the learning approach does so significantly faster.
Article
Human-Machine Interaction (HMI) in Intelligent and Connected Vehicles (ICVs) has drawn great attention in recent years due to its potentially significant positive impact on the automotive revolution and the travel experience. In this paper, we conduct an in-depth review of HMI in ICVs. First, the status of research and application development is laid out through a discussion of the cutting-edge technology classification, achievements, and challenges of HMI technologies in ICVs, including recognition technology, multi-dimensional human-vehicle interfaces, and emerging in-vehicle intelligent units. Then, the human factors issues of ICVs are discussed from three aspects: ICV acceptance, interaction quality of ICVs, and user experience of ICVs. In addition, based on the interaction technologies and the mapping of the above issues, we conducted a visual analysis of the literature to reflect on the current state of HMI in ICVs. Finally, the challenges of HMI technology in ICVs are summarized, and promising future opportunities are proposed from three aspects: utility optimization, experience reconfiguration, and value acquisition, to gain insight into advanced and pleasant HMI in ICVs.
Conference Paper
Robots can be important additions to human teams if they improve team performance by providing new skills or improving existing skills. However, to get the full benefits of a robot the team must trust and use it appropriately. We present an agent algorithm that allows a robot to estimate its trustworthiness and adapt its behavior in an attempt to increase trust. It uses case-based reasoning to store previous behavior adaptations and uses this information to perform future adaptations. We compare case-based behavior adaptation to behavior adaptation that does not learn and show it significantly reduces the number of behaviors that need to be evaluated before a trustworthy behavior is found. Our evaluation is in a simulated robotics environment and involves a movement scenario and a patrolling/threat detection scenario.
Article
Full-text available
Automated driving (AD) is one of the key directions in the intelligent vehicles field. Before fully automated driving arrives, we are at the stage of human-machine cooperative driving: drivers share driving control with automated vehicles. Trust in automated vehicles plays a pivotal role in traffic safety and the efficiency of human-machine collaboration. It is vital for drivers to keep an appropriate trust level to avoid accidents. We propose a dynamic trust framework to elaborate the development of trust and the underlying factors affecting it. The framework divides the development of trust into four stages: dispositional, initial, ongoing, and post-task trust. Based on operator characteristics (human), system characteristics (automated driving system), and situation characteristics (environment), the framework identifies potential key factors at each stage and the relations between them. According to the framework, trust calibration can be improved through three approaches: trust monitoring, driver training, and optimizing HMI design. Future research should pay attention to the following four perspectives: the influence of driver and HMI characteristics on trust, the real-time measurement and functional specificity of trust, the mutual trust mechanism between drivers and AD systems, and ways of improving the external validity of trust studies.
Article
Full-text available
Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.
Article
Full-text available
Future autonomous vehicle systems will be diverse in design and functionality because they will be produced by different brands. It is possible these brand differences yield different levels of trust in the automation, therefore different expectations for vehicle performance. Perceptions of system safety, trustworthiness, and performance are important because they help users determine how reliant they can be on the system. Based on a review of the literature, the system’s perceived intent, competence, method, and history could be differentiating factors. Importantly, these perceptions are based on both the automated technology and the brand’s personality. The following theoretical framework reflects a Human Systems Engineering approach to consider how brand differences impact perceived trustworthiness, performance expectations and ultimate safety of autonomous vehicles.
Article
Full-text available
As the role of artificial intelligence (AI) agents in information curation has emerged with recent advancements in AI technologies, the present study explored which users would potentially be susceptible to the filter bubble phenomenon. First, a large-scale analysis of conversational agent users in South Korea (N = 2808) was conducted to investigate the relative importance of content optimization algorithms in shaping positive user experience. Five user clusters were identified based on their information technology proficiency and demographics, and a multiple-group path analysis was performed to compare the influences of content optimization algorithms across the user groups. The results indicated that the personalization algorithm generally exhibited a stronger impact on evaluations of an AI agent’s usefulness than the diversity algorithm. In addition, increased user age and greater Internet usage were found to decrease the importance of objectivity in shaping trust in AI agents. This study improves the understanding of the social influence of AI technology and suggests the necessity of segmented approaches in the development of AI technology.
Article
Full-text available
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role and it is also a more promising avenue towards moral enhancement. It is more promising because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people that may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
Chapter
Autonomous vehicles (AVs) or automated driving systems (ADSs) are projected to be widely available in the coming years. Prior research has documented the reasoned benefits and concerns about this prospect, especially from the perspectives of mobility and safety. However, little work has focused on the prospect of using AVs to enhance children’s mobility, or on the AV features needed for safety in that context. An online survey was used to collect the opinions of parents within the United States on their willingness to use AVs to transport children. Results showed that parents’ concerns, assurance-related car features, parents’ technology readiness, child restraint system use (as a proxy for child age), and parent sex were important variables for modeling parents’ willingness. These findings highlight potential users’ needs and requirements as they consider AV ridership and use scenarios in the context of children’s mobility. More research is critically needed to guide the development of AV features, safety evaluations, and regulatory policies, as child passengers are likely to be part of AV ridership scenarios in the foreseeable future.
Article
Full-text available
One of the long-term goals of autonomous mobility is to provide mobility for non-drivers and those with limited access to mobility: seniors, women, children, and other groups of people who are not able to drive a car. However, previous surveys revealed that respondents in these subpopulations were rather reluctant to use connected and automated vehicles (CAVs). This discrepancy creates a paradox in the context of autonomous mobility, because one of its main benefits is its use by the very groups that currently reject it the most. One reason for this refusal may lie in the amount of available information on CAVs. This study therefore focuses on general attitudes towards CAVs, the level of awareness of them, and the preferred ways of obtaining new information about them. First, focus groups revealed the preferred media channels for obtaining new information on CAVs. Subsequently, a survey on perceptions and attitudes related to CAVs was conducted among the general population of the Czech Republic. Overall, 59 professional inquirers personally interviewed 1116 persons older than 15 years via computer-assisted personal interviewing (CAPI). Respondents were selected through a multistage probabilistic sampling procedure based on the list of address points in the Czech Republic. The sample included 573 (51%) women, and the average age was 51 years (SD = 17 years). The results show that, on average, women declared more neutral and negative attitudes towards CAVs than men, regardless of age. Furthermore, men declared higher CAV awareness than women in all age groups. As for preferred information channels, young men mostly chose the internet or a "trial as a driver on the circuit". Seniors, on the other hand, declared the lowest willingness to receive new information about CAVs; however, those who do wish to receive such information prefer TV or a "trial during a social event in my neighbourhood".
The results of this study are thus consistent with the findings of previous studies, which likewise identify the importance of gender and age in attitudes towards CAVs.
Article
This study examined trust in artificial intelligence in medical care and identified its determinants. Studies on risk perception have found that perceived ability, integrity, and value similarity determine trust in risk managers. Further, engineering studies on trust in artificial intelligence have suggested that perceived ability and integrity determine trust. However, few researchers have examined whether perceived value similarity affects trust in artificial intelligence. We employed a situation assumption method and focused on a shared policy of medical treatment. In Study 1 (n=165), the results revealed that a shared policy of medical treatment enhanced participants' trust in artificial intelligence, just as it did their trust in humans. In addition, artificial intelligence was trusted less than humans were. Study 2 (n=139) replicated the experiment conducted in Study 1 with improved manipulation-check items, and its results largely reproduced those of Study 1. Empirical implications of the findings are discussed.
Article
Full-text available
With the advent of microprocessor technology, it has become possible to automate many of the functions on the flight deck of commercial aircraft that were previously performed manually. However, it is not clear whether these functions should be automated, taking into consideration various human factors issues. A NASA-industry workshop was held to identify the human factors issues related to flight-deck automation which would require research for resolution. The scope of automation, the benefits of automation and automation-induced problems were discussed, and a list of potential research topics was generated by the participants. This report summarizes the workshop discussions and presents the questions developed at that time. While the workshop was specifically directed towards flight-deck automation, the issues raised and the research questions generated are more generally applicable to most complex interactive systems.
Article
In the final round of a televised game show that pitted top players against IBM's AI program Watson, a humbled human jotted down an aside to his written response: "I for one welcome our new computer overlords." Now even doctors are speaking that way. "I'd like to shake Watson's hand," says Mark Kris, an oncologist at Memorial Sloan-Kettering Cancer Center in New York City. He talks excitedly about the day in late 2013 when Watson, now his student, will be fully trained and ready to assist physicians at the cancer center with their diagnoses and treatment plans.
Article
Analyzes different aspects of vigilance performance in tasks requiring (1) detection or discrimination of targets in simple, single-source visual and auditory displays; (2) discrimination of targets in multiple-source and other complex displays; (3) monitoring of continuously varying (e.g., stochastic) processes for unpredictable changes in process parameters; (4) searching visual displays for specified targets; and (5) tasks combining one or more of the preceding features. Also covers decision processes and criterion shifts, as well as human factors principles for the control of vigilance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)