
Self-Driving Vehicles Against Human Drivers: Equal Safety Is Far From Enough

American Psychological Association
Journal of Experimental Psychology: Applied

Abstract

We examined the acceptable risk of self-driving vehicles (SDVs) compared with that of human-driven vehicles (HDVs) and the psychological mechanisms influencing the decision-making regarding acceptable risk through 4 studies conducted in China and South Korea. Participants from both countries required SDVs to be 4-5 times as safe as HDVs (Studies 1 and 4). When an SDV and an HDV were manipulated to exhibit equivalent safety performance, participants' lower trust in the SDV, rather than the higher negative affect evoked by the SDV, accounted for their lower risk acceptance of the SDV (Studies 2 and 3). Both lower trust and higher negative affect accounted for why participants were less willing to ride in the SDV (Study 3). These reproducible findings improve the understanding of public assessment of acceptable risk of SDVs and offer insights for regulating SDVs.
Peng Liu, Tianjin University and University of Oxford; Lin Wang, Incheon National University; Charles Vincent, University of Oxford
Public Significance Statement
This study suggests that people require self-driving cars to be safer than conventional cars. Further, it shows that people trust self-driving cars less than conventional cars with equivalent safety performance, which in turn makes them less willing to accept the risk of self-driving cars.
Keywords: acceptable risk, trust, affect heuristic, self-driving vehicles
Policymakers, scientists, and road safety organizations are enthusiastic about the potential for widespread adoption of self-driving vehicles (SDVs) to reduce traffic accidents, traffic congestion, and air pollution and to increase fuel efficiency, space utilization, and human mobility (Anderson et al., 2016; National Highway Traffic Safety Administration, 2016; Waldrop, 2015). However, SDVs also pose risks and challenges related to safety, security, liability, and regulation (Anderson et al., 2016; Bonnefon, Shariff, & Rahwan, 2016; Fagnant & Kockelman, 2015; Liu, Yang, & Xu, 2019b; Nunes, Reimer, & Coughlin, 2018; Xu et al., 2018). Among dozens of studies on public perceptions, attitudes, and acceptance, some found that participants held positive attitudes toward SDVs (e.g., Penmetsa, Adanu, Wood, Wang, & Jones, 2019; Schoettle & Sivak, 2014), whereas others reported participants' resistance and negative attitudes toward SDVs (e.g., Nielsen & Haustein, 2018; Smith & Anderson, 2017). In particular, participants were concerned about potential risks of SDVs (e.g., hacking; Liu et al., 2019b). The present series of studies builds on these earlier findings to consider what people would regard as an acceptable level of risk for SDVs, an issue that has so far not been well addressed.
Removing control from human drivers is assumed to make SDVs much safer than conventional human-driven vehicles (HDVs; Mervis, 2017), but this has not yet been confirmed (Banerjee, Jha, Cyriac, Kalbarczyk, & Iyer, 2018). How safe, then, is safe enough for SDVs? This question has been widely debated (Halsey, 2017; Hook, 2017; Mervis, 2017). Some lawmakers and regulators have reportedly considered allowing SDVs to be deployed on roads provided they are deemed either as safe as human drivers (Mervis, 2017) or twice as safe as human drivers (Demers, 2018). Others claim that SDVs need to be multiple times (between 2 and 100 times) safer than HDVs, but, so far, this claim lacks a scientific foundation (Shladover & Nowakowski, 2019). Policy researchers (Kalra & Groves, 2017) argued that a less stringent policy (e.g., allowing SDVs that are only slightly safer than the average human driver) should be considered to save more human lives. From a utilitarian standpoint, this policy seems
This article was published Online First March 23, 2020.
Peng Liu, College of Management and Economics, Tianjin University, and Department of Experimental Psychology, University of Oxford; Lin Wang, Department of Library and Information Science, Incheon National University; Charles Vincent, Department of Experimental Psychology, University of Oxford.
Peng Liu and Lin Wang contributed equally to this work. Lin Wang was supported by an Incheon National University Research Grant in 2018. Charles Vincent was supported by the Health Foundation.
Correspondence concerning this article should be addressed to Peng Liu, College of Management and Economics, Tianjin University, Building 25A, No. 92 Weijin Road, Nankai District, Tianjin 300072, China. E-mail: pengliu@tju.edu.cn
Journal of Experimental Psychology: Applied, 2020, Vol. 26, No. 4, 692–704. © 2020 American Psychological Association. ISSN 1076-898X. http://dx.doi.org/10.1037/xap0000267
... Artificial intelligence, advanced sensors, and machine learning algorithms enable autonomous vehicles (AVs), often referred to as SDVs, to navigate and function with little to no human assistance (P. Liu et al., 2020). ...
... This study evaluated and analyzed the dangers of HDVs versus SDVs, and the results are presented in this chapter (P. Liu et al., 2020). The study's main goal was to determine if SDVs are safer than traditional human-operated vehicles. ...
Thesis
Full-text available
The rapid advance of autonomous vehicle technology has marked a transformative shift in the automotive industry, promising efficiency, mobility, and safety. However, safety concerns remain in comparison with traditional human-driven vehicles. This research presents a comparative analysis of the risks associated with self-driving vehicles and human-driven vehicles, addressing accident frequency, technological limitations, and public acceptance. Using a quantitative methodology and publicly available data, the study evaluates the extent to which self-driving cars enhance road safety. It applies statistical techniques to compare accident rates, analyze the impact of the level of automation, and assess the influence of factors such as regulations, infrastructure, and ethical considerations. The results contribute to ongoing discussions on the feasibility of widespread self-driving car adoption and inform manufacturers, policymakers, and the general public about the potential benefits and risks of self-driving cars. This study provides verifiable evidence on autonomous vehicle safety, guiding the development of self-driving technologies in the future. Keywords: future impact, TAM, road safety, technology acceptance model, self-driving cars, driverless cars, autonomous vehicles, human-driven cars, SDV
... Although the influence of these errors has been investigated sufficiently, little is known about minor errors that do not invoke negative consequences. Previous studies considered property damage and traffic violations as errors (33, 34). These errors comprise two components: misoperation and negative ...
... human drivers did, ACs were accepted less than human drivers were (34). Their results also revealed that ...
Article
Full-text available
Studies have explored the factor of trust in autonomous cars (ACs), and it has been shown that their ability and performance are crucial for determining trust. However, little is known about the effects of minor errors without involving negative consequences such as property damage and fatalities. People are likely to expect automation technologies to perform better than humans. It was, therefore, hypothesized that minor errors would destroy expectations and significantly decrease trust in ACs. This study aimed to investigate whether minor errors have a more negative effect on trust in ACs than in human drivers. Two experiments were conducted ( N = 821) in Japan. Two independent variables were manipulated: agent (AC and human) and error (error and no-error). Some participants were shown videos depicting ACs and human drivers making minor errors, such as taking a longer time to park (Experiment 1) and being slow to set off when traffic lights turned green (Experiment 2). These minor errors did not violate Japanese traffic laws. Others watched videos in which no errors occurred. The results of the two-way analysis of variance did not show evidence that the agent type modulated the negative effects of these minor errors on trust. Minor errors did not lead to a significant difference in trust levels between ACs and human drivers. This study also indicated that people expected ACs to not make more errors than humans did. However, these expectations did not increase trust in ACs. The findings also suggest that minor errors are unlikely to cause an underestimation of ACs’ capabilities.
... Therefore, the relationship between errors and trust in SDVs should be further examined. Liu et al. (2020) investigated how the severity of errors influenced attitudes towards SDVs. In their scenario experiments, the severity levels of the error consisted of three conditions: property damage only, injury, and fatality. ...
... Furthermore, this study did not provide evidence that people trusted SDVs less than they did human drivers in Experiments 3 and 4. People may not always have lower trust in SDVs than in human drivers. This study's findings were not consistent with those of Liu et al. (2020), that is, when SDVs had the same driving capacity as human drivers did, SDVs were accepted less than human drivers were. The findings also revealed that people were reluctant to accept SDVs unless the SDVs were 4-5 times safer than human drivers. ...
Article
Full-text available
Studies have investigated the determinants of trust in self-driving vehicles (SDVs) and confirmed that the ability to execute the driving task flawlessly is essential to promote trust. However, little is known about the extent to which errors decrease trust in SDVs. This study conducted four experiments (N = 2221) and tested whether people’s trust in SDVs was lower than that in human drivers when they made errors without causing negative events. In Experiments 1 and 2, which manipulated the driving accuracy of the drivers, the participants checked nine different pieces of information that showed accuracy. The results demonstrated that the SDV was less trusted than humans only when there was a slight possibility of making an error. This study did not identify factors explaining lower trust in the SDV. Experiments 3 and 4 consisted of participants watching videos showing that the SDV and human driver made minor errors, such as taking a long time to park. This study showed that the minor errors largely reduced trust, regardless of whether the vehicle was self-driven or driven by humans. These findings imply that errors without describing severe accidents are less likely to cause a gap in trust between SDVs and human drivers.
... While these studies suggest high expectations about the performance of computers, there is some evidence that expectations about computers might be even higher than for humans. In a survey study on vehicle safety, it was found that respondents expect autonomous vehicles to be four to five times as safe as humans to receive comparable levels of acceptance (Liu et al., 2020), suggesting that an autonomous vehicle that commits as many driving errors as a human is deemed unacceptable. In sum, the elevated expectation hypothesis predicts that for tasks that require agency (such as describing kanji characters) humans expect more from a computer than from humans, thus leading to more negative judgments about AI performance (particularly after failure). ...
Article
Full-text available
Several studies have reported algorithm aversion, reflected in harsher judgments about computers that commit errors, compared to humans who commit the same errors. Two online studies ( N = 67, N = 252) tested whether similar effects can be obtained with a referential communication task. Participants were tasked with identifying Japanese kanji characters based on written descriptions allegedly coming from a human or an AI source. Crucially, descriptions were either flawed (ambiguous) or not. Both concurrent measures during experimental trials and pre-post questionnaire data about the source were captured. Study 1 revealed patterns of algorithm aversion but also pointed at an opposite effect of “algorithm benefit”: ambiguous descriptions by an AI (vs. human) were evaluated more negatively, but non-ambiguous descriptions were evaluated more positively, suggesting the possibility that judgments about AI sources exhibit larger variability. Study 2 tested this prediction. While human and AI sources did not differ regarding concurrent measures, questionnaire data revealed several patterns that are consistent with the variability explanation.
... People are reportedly willing to behave more aggressively toward automated vehicles [20] and express lower trust toward those vehicles compared to human-driven vehicles even given similar safety performance [21]. Li and colleagues [22] tested whether aberrant, unexpected driving behaviors by automated vehicles provoke greater anger than similar behaviors by humans. ...
Article
The first exposure most people will have to fully automated vehicles will be Low-speed Automated Vehicles (LSAVs) on select corridors. LSAVs may disrupt traffic due to their slower speeds and experience increased risky overtaking by nearby drivers which could decrease safety for their occupants and surrounding road users. A field observational study compared driver behavior near an LSAV and a human-driven research vehicle across 94 passes of a 1.3-mile urban downtown route (4 straight and 4 right-turn sections). GEE analyses indicated a higher likelihood of other drivers queuing behind the LSAV when stopped, overtaking the LSAV while turning at signalized intersections, and overtaking the LSAV along straight road segments. To test whether the low speed of the LSAV was the key factor for this difference, 85 participants in a second study watched videos from the perspective of a driver following a simulated LSAV and a simulated passenger vehicle traveling at either low speeds (16 km/h) or high speeds (40 km/h). Once the simulated vehicle stopped midblock at a marked crosswalk, the participants responded whether they would, as a driver, attempt to pass the stopped vehicle. GEE analyses indicated participants were more willing to pass the vehicle if it had been portrayed as driving at a slower speed, irrespective of vehicle type. These results indicate a disruptive effect of LSAVs in mixed traffic and a need to investigate alternative communication signal designs for LSAVs, along with policy changes to mitigate these risks such as regulations on LSAV speeds and traffic separation.
... However, with the expected widespread adoption of HAVs, this issue may affect passengers' trust and feeling of safety [31], [32]. Such dimensions are crucial for the acceptance of HAVs, and their decline may jeopardize the successful introduction of such vehicles on public roads [17], [33], [34], [35]. ...
Conference Paper
Full-text available
Although commonly experienced, car sickness is still poorly understood. Research toward its alleviation gained interest only in recent years, coinciding with the electric, digital, and autonomous transformation of road vehicles. Within these vehicles, a heightened risk of car sickness poses a potential obstacle to their successful integration on public roads. Identifying effective mitigation means for car sickness may not only ensure the acceptability of new transportation means but also have broader benefits for safety and sustainability. Among others, a solution for car sickness would improve the quality of life of the most susceptible individuals, enhance accessibility for people with disabilities, and reduce inequalities in mobility. Addressing car sickness additionally contributes to vehicle safety by preventing driver distraction and supports sustainability by encouraging the use of shared transportation. Such improvements could already benefit current road transportation.
... The number of fatal accidents caused by self-driving cars in both countries would have to be 5-8 times lower to be acceptable to the public. To ensure the benefits and usefulness of vehicles, people must tolerate traffic risks that are higher than their expected risk [9]. ...
Article
Machines powered by artificial intelligence have the potential to replace or collaborate with human decision-makers in moral settings. In these roles, machines would face moral tradeoffs, such as automated vehicles (AVs) distributing inevitable risks among road users. Do people believe that machines should make moral decisions differently from humans? If so, why? To address these questions, we conducted six studies (N = 6805) to examine how people, as observers, believe human drivers and AVs should act in similar moral dilemmas and how they judge their moral decisions. In pedestrian-only dilemmas where the two agents had to sacrifice one pedestrian to save more pedestrians, participants held them to similar utilitarian norms (Study 1). In occupant dilemmas where the agents needed to weigh the in-vehicle occupant against more pedestrians, participants were less accepting of AVs sacrificing their passenger compared to human drivers sacrificing themselves (Studies 1–3) or another passenger (Studies 5–6). The difference was not driven by reduced occupant agency in AVs (Study 4) or by non-voluntary occupant sacrifice in AVs (Study 5), but rather by the perceived social relationship between AVs and their users (Study 6). Thus, even when people adopt an impartial stance as observers, they are more likely to believe that AVs should prioritize serving their users in moral dilemmas. We discuss the theoretical and practical implications for AV morality.
Chapter
The interface of a library website should be designed to provide a good user experience. However, the same user interface may yield different user experiences in different cultures. This study investigated cultural differences in users' preferences for library website interface design factors, including background colors, the amount of information presented, and image-based versus text-based interfaces, in Korea and the US. A survey was used as the research method. It was found that both Koreans and Americans preferred a concise interface, meaning that too much information should not be presented on the homepage of library websites. There was a significant difference between the Korean and American subjects' preferences for image-based versus text-based interfaces: Koreans preferred text-based interfaces, whereas Americans significantly preferred image-based interfaces. Although no significant difference was found, likely due to the small sample size, Korean subjects showed a stronger preference for a no-color background, whereas American subjects showed a stronger preference for a colorful background. It is hoped that the results of this study can help libraries provide better services to users. Keywords: Cultural Difference, Library Websites, Interface Design, Korean, American
Article
Full-text available
Public perceptions play a crucial role in wider adoption of autonomous vehicles (AVs). This paper aims to make two contributions to the understanding of public attitudes toward AVs. First, we explore opinions regarding the perceived benefits and challenges of AVs among vulnerable road users – in particular, pedestrians and bicyclists. Second, we evaluate whether interaction experiences with AVs influence perceptions among vulnerable road users. To explore this, we examined survey data collected by Bike PGH, a Pittsburgh-based organization involved in programs to promote safe mobility options for road users. Analysis of the data revealed that respondents with direct experience interacting with AVs reported significantly higher expectations of the safety benefits of the transition to AVs than respondents with no AV interaction experience. This finding did not differ across pedestrian and bicyclist respondents. The results of this study indicate that as the public increasingly interacts with AVs, their attitudes toward the technology are more likely to be positive. Thus, this study recommends that policymakers provide opportunities for the public to have interaction experience with AVs. These opportunities can be provided through legislation that allows auto manufacturers and technology industries to operate and test AVs on public roads. This interactive experience will positively affect people's perceptions and help in wider adoption of AV technology.
Article
Full-text available
Trust is important for any relationship, especially so with self-driving vehicles: passengers must trust these vehicles with their lives. Given the criticality of maintaining passengers' trust, yet the dearth of self-driving trust repair research relative to the growth of the self-driving industry, we conducted two studies to better understand how people view errors committed by self-driving cars, as well as what types of trust repair efforts may be viable for use by self-driving cars. Experiment 1 manipulated error type and driver type to determine whether driver type (human versus self-driving) affected how participants assessed errors. Results indicate that errors committed by the two driver types are not assessed differently. Given the similarity, Experiment 2 focused on self-driving cars, using a wide variety of trust repair efforts to confirm human-human research and determine which repairs were most effective at mitigating the effect of violations on trust. We confirmed the pattern of trust repairs in human-human research and found that some apologies were more effective at repairing trust than some denials. These findings help focus future research, while providing broad guidance as to potential methods for approaching trust repair with self-driving cars.
Article
Full-text available
Automated driving (AD) is one of the most significant technical advances in the transportation industry. Its safety, economic, and environmental benefits cannot be realized if it is not used. To explain, predict, and increase its acceptance, we need to understand how people perceive and why they accept or reject AD technology. Drawing upon the trust heuristic, we tested a psychological model to explain three acceptance measures of fully AD (FAD): general acceptance, willingness to pay (WTP), and behavioral intention (BI). This heuristic suggests that social trust can directly affect acceptance or indirectly affect acceptance through perceived benefits and risks. Using a survey (N = 441), we found that social trust retained a direct effect as well as an indirect effect on all FAD acceptance measures. The indirect effect of social trust was more prominent in forming general acceptance; the direct effect of social trust was more prominent in explaining WTP and BI. Compared to perceived risk, perceived benefit was a stronger predictor of all FAD acceptance measures and also a stronger mediator of the trust–acceptance relationship. Predictive ability of the proposed model for the three acceptance measures was confirmed. We discuss the implications of our results for theory and practice.
Article
Full-text available
This article presents a theoretical and experimental framework for assessing the biases associated with the interpretation of numbers. This framework consists of having participants convert between different representations of quantities. These representations should include both variations in numerical labels that symbolize quantities and variations in displays in which quantity is inherent. Five experiments assessed how people convert between relative frequencies, decimals, and displays of dots that denote very low proportions (i.e., proportions below 1%). The participants demonstrated perceptual, response, and numerical transformation biases. Furthermore, the data suggest that relative frequencies and decimals are associated with different abstract representations of amount.
Article
Mass adoption of self-driving vehicles (SDVs) is predicted to have a profound effect on the environment. Here, we present three studies (N = 1258) that examine the impact of the environmental benefits (EB) of SDVs on individuals’ acceptance of their risks, and their willingness to ride (WTR) in them. Two types of SDVs were presented: SDVs with a clear mention of positive EB information (“EB-enhanced SDVs”) and SDVs without the mention of positive EB information (“normal SDVs”). Study 1 and Study 2 found that participants expressed higher risk acceptance and WTR regarding EB-enhanced SDVs. Further, Study 2 reported that higher trust in EB-enhanced SDVs, rather than lower negative affect associated with EB-enhanced SDVs, accounted for the participants’ higher risk acceptance and WTR. Study 3 observed that the participants’ acceptable risk of EB-enhanced SDVs was greater than that of normal SDVs in magnitude, although not significant. If SDVs can achieve the purported EB, the public may be willing to tolerate their risks more. Highlighting the environmental advantages of SDVs and increasing public trust in them are likely to be useful strategies for increasing societal acceptance of SDVs.
Article
Automated vehicles (AVs) already navigate US highways and those of many other nations around the world. Current questions about AVs do not now revolve around whether such technologies should or should not be implemented; they are already with us. Rather, such questions are more and more focused on how such technologies will impact evolving transportation systems, our social world, and the individuals who live within it and whether such systems ought to be fully automated or remain under some form of direct human control. More importantly, how will mobility itself change as these independent operational vehicles first share and then dominate our roadways? How will the public be kept apprised of their evolving capacities, and what will be the impact of science and the communication of scientific advances across the varying forms of social media on these developments? We look here to address these issues and to provide some suggestions for the problems that are currently emerging.
Article
Although self-driving vehicles (SDVs) bring with them the promise of improved traffic safety, they cannot eliminate all crashes. Little is known about whether people respond to crashes involving SDVs and human drivers differently and why. Across five vignette-based experiments in two studies (total N = 1267), for the first time, we found that participants tended to perceive traffic crashes involving SDVs as more severe than those involving conventionally human-driven vehicles (HDVs), regardless of their severity (injury or fatality) or cause (SDVs/HDVs or others). Furthermore, we found that this biased response could be a result of people's reliance on the affect heuristic. More specifically, higher prior negative affect tagged with an SDV (vs. an HDV) intensifies people's negative affect evoked by crashes involving the SDV (vs. those involving the HDV), which subsequently results in higher perceived severity and lower acceptability of the crash. Our results imply that people's over-reaction to crashes involving SDVs may be a psychological barrier to their adoption and that we may need to forestall a less stringent introduction policy that allows SDVs on public roads, as it may lead to more crashes that could deter people from adopting SDVs. We discuss other theoretical and practical implications of our results and suggest potential approaches to de-biasing people's responses to crashes involving SDVs.
Article
Automation in transport is increasing rapidly. While it is assumed that automated driving will have a significant impact on travel demand, the nature of this impact is not clear yet. Based on an online survey (N = 3040), this study explores the expected consequences of automated driving in the Danish population. Participants were divided into three homogeneous segments based on attitudes towards automated and conventional car driving: Sceptics (38%); Indifferent stressed drivers (37%) and Enthusiasts (25%). The attitudinal segments differ in their socio-demographic profiles, current travel behaviour, interest in use-cases for self-driving cars, and anticipated changes of behaviour in a future with self-driving cars. People who are enthusiastic about self-driving cars are typically male, young, highly educated, and live in large urban areas, while Sceptics are older, car reliant and more often live in less densely populated areas. The indifferent group consists of more car reluctant people. The expected advantages of self-driving cars generally resemble the aspects highlighted in other studies, such as relief from driving tasks and the possibility of doing other things while travelling, with some variation between the three segments. Preferred future scenarios include car ownership rather than sharing solutions as well as residential relocation, which is considered by 22% of all participants as a consequence of the possibility of working in the car (13% of Sceptics; 28% of Enthusiasts). All in all, increased travel demand can be expected from an uptake of increasingly automated cars, which will be realised in the different segments with different speeds, depending on policies, business models, and proven functionality and safety.