Non-user acceptance of autonomous technology: A survey of bicyclist
receptivity to fully autonomous vehicles
Peter André Busch
Department of Information Systems, University of Agder, P.O. Box 422, 4604 Kristiansand, Norway
ARTICLE INFO
Keywords:
Trust
Autonomy
Interact
Innovation
Theory of planned behavior
Transport
ABSTRACT
Whereas the information systems literature mainly has focused on the individual user acceptance of technology,
this study focuses on non-user acceptance, termed technology receptivity. Autonomous vehicles (AVs) represent
an emerging technology that challenges individual technology user acceptance. Whereas AVs promise several
advantages, their success is conditioned on public trust. AVs impact not only those who use them but also those
who share the environment with them, such as bicyclists. Empirical research has been characterized by several
competing constructs aimed at explaining AV receptivity. In this article, we (1) review extant research on the
receptivity of AVs, and (2) develop and validate a model of AV receptivity. A cross-sectional survey of 219 bi-
cyclists showed empirical support for the model. The model can be a useful tool for the AV industry and poli-
cymakers in need of assessing the likelihood of success for a wider AV diffusion and help them understand
intervention strategies to address populations that may be less inclined to accept the new technology on public
roads. We offer several recommendations for future research on refining the measurement instrument and
enhancing our understanding of AV and autonomous technology receptivity.
1. Introduction
Technology acceptance is one of the most influential and mature
research areas in the information systems (IS) domain (Mahmud et al.,
2022; Venkatesh et al., 2003; Williams et al., 2015). Its theories are
being widely used by researchers in several disciplines. Although IS
research has gained interest in additional perspectives of technology
acceptance such as rejection, discontinuance, and replacement, the
domain continues to focus much on individual technology acceptance
(Taherdoost, 2018; Venkatesh et al., 2003). These studies have mainly
used intention to use technology and actual usage behavior as depen-
dent variables.
More recently, the introduction of autonomous technologies chal-
lenges the traditional user acceptance perspective in IS. Examples of
these technologies include autonomous vehicles (AVs) and generative
artificial intelligence (AI). Autonomous technology becomes part of the societal fabric and may influence people who have made no active
choice of being exposed to it but can still experience the consequences of
its use. Tasks performed by this technology involve information acqui-
sition, analysis, decision selection, and/or action implementation. For
example, if AI-driven diagnostic tools are used in healthcare, patients
may become uncertain about the quality of healthcare services. The
nature of autonomous technologies makes their acceptance different
from traditional technology acceptance in several respects. First, non-
users are exposed to technology usage influencing them without their
consent. Second, whereas traditional technology acceptance considers
technologies that require certain levels of human control, autonomous
technologies can operate without human interference. Third, public
acceptance of autonomous technologies is influenced by safety and
privacy challenges.
Third-party acceptance has been studied in other domains (e.g., Li et al., 2023); however, IS research has focused less on this perspective. In
this study, we focus on AVs as an application of autonomous technology.
AVs, also referred to as self-driving, driverless, or automated vehicles,
encompass six categories (SAE International, 2021), ranging from level
0 (no automation) to level 5 (full automation). AVs at level 3 and above
can minimize or eliminate the need for human interaction in both
monitoring the driving environment and controlling the vehicle.
Whereas level 3 and 4 vehicles have certain requirements for humans to
resume control of the driving if needed, level 5 vehicles are expected to
be completely controlled by software and hardware (Zhou et al., 2022).
In this study, we specifically refer to fully autonomous vehicles (FAVs)
(i.e., level 5 vehicles). FAVs are understood as robotic vehicles that drive
completely autonomously from the beginning to the end of a journey
without any need for a human operator (Kaur & Rampersad, 2018;
Paden et al., 2016).
AVs have received much societal attention for their potential benefits. Around 1.2 million individuals lose their lives in traffic accidents across the globe every year. Additionally, between 20 and 50 million people sustain non-fatal injuries from these incidents (WHO, 2023). Human error accounts for about 90% of all accidents (Deb et al., 2017; Fagnant & Kockelman, 2015). AVs are projected to reduce accidents significantly (Kaur & Rampersad, 2018; Miskolczi et al., 2021). In addition, AVs can enhance mobility for those unable to drive, decrease pollution through improved efficiency, and alleviate traffic congestion
(e.g., Krueger et al., 2016; Paden et al., 2016).
However, for AVs to deliver on their potential benets, they must
reach a certain degree of market adoption. Several substantial in-
novations fall short of meeting expectations and are often abandoned
before they reach the market (Story et al., 2011). The level of acceptance
among non-users could therefore be a significant determinant of their
ultimate success or failure. The diffusion of AVs in the streets is complex.
AVs must be able to detect the external environment, such as road signs, other road users, and traffic density, and respond accordingly. Nevertheless, these decisions hinge on the correct operation of all the components that make up the technology, including cameras, lasers, sensors, and radar scanners. This would involve anticipating correct traffic behavior across different conditions such as weather conditions (e.g., snow), different types of road users (e.g., children and distracted pedestrians), different road conditions (e.g., unrepaired roads), different traffic rules (e.g., expected yielding behavior), and traffic density (e.g., urban or rural areas). Although significant efforts have been directed toward transitioning vehicle control from human drivers to AVs, recent research shows that the adoption of AVs by the public is relatively low (Stilgoe & Cohen, 2021). Vulnerable individuals exposed to this technology, such as bicyclists and pedestrians who share the same urban fabric with these vehicles, often seem to lack the confidence and willingness to accept them (Kenesei et al., 2022).
This study investigates technology acceptance from the perspective
of bicyclists who are exposed to AVs—which we term AV receptivity
(Deb et al., 2017). The bicyclist perspective is chosen since bicyclists are
more vulnerable in trafc accidents than, for example, motorists, who
are protected by the structure of their vehicles. Moreover, our review of
the literature has shown that most studies focus on technology recep-
tivity from the perspective of pedestrians, which is another reason why
the bicyclist perspective is important. This study has two main
objectives.
(1) To develop and validate a model of AV receptivity: Based on our
review, we developed a research model explaining non-users’
receptivity to AVs. The model development is described in Sec-
tion 4. An empirical test of the model is conducted, providing
support for its hypothesized relationships. The empirical valida-
tion of the model is presented in Section 6.
(2) To contribute to the IS literature: Whereas user acceptance is much
researched in the IS domain, the acceptance of non-users has not
received much attention, unlike in other domains. This study (a)
reveals potential misalignments between users’ and non-users’
acceptance of technology, (b) provides a theoretical model of AV
receptivity, which can serve as a starting point toward theorizing
autonomous technology receptivity, and (c) suggests further
research opportunities.
2. Non-user receptivity to autonomous technology
Studies on the acceptance and use of technology have encompassed a
wide variety of perspectives in terms of different technologies, contexts,
stakeholders, units of analysis, theoretical aspects, control factors, and
research methods (Williams et al., 2015). Building on the technology
acceptance model (TAM) or the unified theory of acceptance and use
(UTAUT), researchers have developed models explaining the acceptance
of car technology (Osswald et al., 2012, pp. 51–58) and autonomous
driving (Garidis et al., 2020, pp. 1381–1390). Often-used explanations
in this context are general attitudes to technology, perceived safety,
perceived privacy, trust, and social influence (Deb et al., 2017).
However, technology usage can impact others than its users, sometimes even more profoundly; consider, for example, the injuries caused if a bicyclist is hit by an FAV. Therefore, it is important to explore the receptivity of the non-users to such technologies. In this study, receptivity is understood as the willingness to interact with autonomous technology that may be uncertain, unfamiliar, or paradoxical (Deb et al., 2017), as opposed to acceptance, understood as the willingness to embrace usage and/or begin to use autonomous technology (cf., Venkatesh et al., 2003).
The willingness to interact with autonomous technology is different
from using it in terms of the level of technology control, goal orientation,
and technological knowledge. Actively using technology implies a
higher level of engagement and behavioral control. When individuals
actively use technology, they are consciously and deliberately engaging
with digital tools to accomplish specic tasks or goals. This requires a
more profound understanding and control over the technology and its
functions. For instance, a pilot operating the autopilot on a plane and an
individual using a smart home system both exemplify usage that ne-
cessitates active technological choices. Users in this context are not just
passive recipients of technology but provide initial input and supervise it
to achieve its desired outcomes.
By contrast, an individual interacting with autonomous technology
may have little or no knowledge and control of it. For example, a
bicyclist crossing the street in front of an FAV. The technology functions
independently of the individuals being exposed to it and operates ac-
cording to goals set by others. In these scenarios, individuals may not
always know how to anticipate the actions of autonomous technologies,
leading to uncertainty and discomfort (Deb et al., 2017). Despite algo-
rithms often outperforming humans in a variety of analytical tasks,
many users still lean toward human options when given a choice
(Mahmud et al., 2022). In their seminal article, Dietvorst et al. (2015)
termed this phenomenon “algorithm aversion”. Whereas algorithm
aversion presents a significant challenge in the adoption of autonomous
technologies, this aversion becomes even more salient when the user is
exposed to the algorithms without making a conscious choice to do so.
When exposed to this technology, potential users might feel a sense of
lost choice and control (Kaur & Rampersad, 2018) due to perceptions
that such technologies carry security risks. These risks include a
perceived vulnerability to car hacking, unauthorized remote access or
control of the vehicle, and susceptibility to computer viruses (Kaur &
Rampersad, 2018; Schoettle & Sivak, 2014). The lack of control and
understanding regarding these highly technical and opaque systems can
exclude people from the technological decisions that affect their daily
lives.
Yet, the non-users are subjected to the consequences of autonomous
technology use. Autonomous algorithms are used in numerous areas,
ranging from recommendations for the next movie to watch to critical
systems performing tasks on behalf of humans across varied sectors.
However, the consequences of interacting with these systems differ
widely, warranting more attention from researchers (Hannon et al.,
2022). For example, the consequences of incorrect medical advice,
wealth management, and fatal pedestrian accidents far exceed those of
mistaken movie recommendations. Unintended interactions with tech-
nology can lead to feelings of alienation, confusion, skepticism, or
outright resistance (Chang & Wang, 2023; Egala & Liang, 2023; Hannon
et al., 2022; Renier et al., 2021). Media coverage of accidents involving
AVs contributes to the opposition to their adoption (Hegner et al., 2019).
A significant obstacle to the widespread acceptance of FAVs is a lack
of trust in the technology (Kaur & Rampersad, 2018). For example,
pedestrians, being particularly vulnerable on the road, have been a
concern in previous studies, with reports indicating that a substantial
number of them (approximately 60%) lack confidence that vehicles (or
drivers) will react appropriately to their presence (Deb et al., 2017;
Karsch et al., 2012). Several of the arguments in the algorithm aversion
literature are reflected in the non-user perspective: worry about and/or
witnessing algorithms fail, the severity of algorithmic errors, unclear
algorithmic functioning, the context of use, domain experience, and the
inability of algorithms to address uniqueness (cf., Mahmud et al., 2022).
Whereas the IS domain has given technology receptivity little
attention, other domains have researched it. However, empirical
research on the receptivity of AVs is characterized by using several
models including a multitude of different constructs (e.g., Li et al., 2023;
Penmetsa et al., 2019; M. T. Rahman et al., 2021). Therefore, there is a
need to review and synthesize constructs that are used to explain AV
receptivity to progress our understanding of the phenomenon. This is the
purpose of this article. In the following, recent empirical research on AV
receptivity is presented.
3. A review of recent empirical research on AV receptivity
Existing research on AV receptivity is published in different journals,
mainly in the transportation, computer science, and psychology fields.
This review focuses on empirical studies with AV receptivity as the
dependent variable, excluding the perspectives of drivers. Appendix D
presents empirical results from AV receptivity research during the last
ten years.
Our analysis reveals the literature's primary focus on pedestrians, who are vulnerable road users and vital for broad public AV acceptance.
This study focuses on bicyclists as another group of vulnerable road
users. Whereas pedestrians normally are the least protected, bicyclists
wear protection equipment that may be of little or no help if hit by an AV
at higher speeds. A variety of factors were investigated as predictors of AV receptivity, which can be categorized as relating to specific AV encounters, characteristics of individuals, and individuals' experience with
AVs. Of the constructs investigated in the studies, trust is salient. Several
of the studies used virtual experiments as their research method, focusing on specific AV encounters. Whereas such experiments bring forth several
advantages, they cannot fully grasp emotions and considerations made
in AV encounters. We further observed that the experiences of road users
are less researched as an explanation for AV receptivity. Considering the
current situation with a relatively low diffusion of AVs, this is under-
standable. Empirical research on factors explaining situational and
learned trust in AVs is therefore nascent. Several of the studies used pre-
recorded videos to familiarize the respondents with AVs. Moreover,
most studies are cross-sectional.
Our review further suggests that whereas technology acceptance
models have been applied to investigate the user perspective, fewer
theoretical frameworks have been used in AV receptivity studies. The
uniqueness of AVs introduces a new set of challenges that necessitates more comprehensive and context-specific frameworks. For example, trust in automation, perceived safety, and societal readiness are all pivotal factors that could influence AV receptivity. These factors are not
traditionally captured in technology acceptance models. Additionally,
AV receptivity involves diverse stakeholders such as bicyclists, passen-
gers, urban planners, policymakers, and the general public whose lives
can be reshaped by the advent of these vehicles. Therefore, building
theory to capture the multifaceted dimensions of AV receptivity may be
beneficial in comparing studies within, and maturing, this stream of
research.
3.1. Factors influencing receptivity to AVs
Drawing on our review of the literature and the categorization
scheme described by Hoff and Bashir (2015), factors explaining AV
receptivity are categorized (see Table 1). Among the different variables
explaining receptivity, trust is considered salient (Deb et al., 2017).
3.1.1. Disposition to trust technology
Individuals’ inclination to trust AVs varies widely. Whereas certain
reasons for human behavior can be attributed to context-dependent
factors, inherent traits influence human behavior in most situations.
They constitute an individual’s propensity to trust technology and can
have both biological and environmental explanations. For example, role
models can explain personal innovativeness. Similarly, inherent traits
can explain technology mistrust. For example, algorithm aversion is
particularly prevalent among people who are cautious and prevention-
oriented (Chang & Wang, 2023).
In this study, disposition to trust technology refers to long-term
tendencies, meaning that these tendencies are expected to be preva-
lent in most situations. Thus, dispositional trust is a relatively stable trait
(Hoff & Bashir, 2015). Our review revealed five primary explanations
for dispositional trust: age, gender, trust propensity, risk-taking ten-
dency, and personal innovativeness. For example, Hudson et al. (2019)
found young people to be more in favor of AVs than older adults, and
Deb et al. (2018) found that participants scoring more highly in personal
innovativeness spent less time waiting before crossing the road in front
of an FAV.
3.1.2. Situational trust in AVs
In addition to more stable dispositions to trust technology, in-
dividuals can be greatly affected by the specic situation in which they
are exposed to AVs. For example, research suggests that algorithm
aversion is more prevailing in situ, particularly in pressing situations
(Chang & Wang, 2023; Egala & Liang, 2023). Situational trust in AVs
can be linked to a specic individual (e.g., having a child as a passenger)
and to a specic environment (e.g., whether an AV can communicate its
intentions).
In this study, situational trust depends on specic encounters with
AVs. The socio-technical ensembles arising from these encounters
consist of a varying combination of individual and environmental fac-
tors, and are thus, expected to be different from situation to situation.
Contrary to dispositional trust, the defining characteristic of situational
trust is that it is unstable (Hoff & Bashir, 2015). Our review revealed
three primary sources of situational trust: characteristics of the AV
(traffic behavior, external human-machine interfaces (eHMI) status),
characteristics of the environment (e.g., weather, type of crosswalk),
and characteristics of the individual (e.g., mood, risk-taking). For
example, Rad et al. (2020) found that the presence of a zebra crossing
Table 1
Factors influencing AV receptivity.

Factor: Dispositional trust in technology
Definition: An individual's overall tendency to trust new technology, independent of a specific context and/or technology (based on Hoff & Bashir, 2015).
Example references: Deb et al. (2018); Hudson et al. (2019); Hulse et al. (2018); Jayaraman et al. (2019); Li et al. (2023)

Factor: Situational trust in AVs
Definition: An individual's trust evaluation of AVs, dependent on a specific context and/or technology.
Example references: Hulse et al. (2018); Faas et al. (2021, pp. 1–17); Rad et al. (2020)

Factor: Learned trust in AVs
Definition: An individual's trust evaluation of AVs, based on prior experience and/or knowledge (based on Hoff & Bashir, 2015).
Example references: Liu et al. (2019); Penmetsa et al. (2019); Rad et al. (2020); Reig et al. (2018)

Factor: Social influence
Definition: The degree to which an individual perceives that important others believe s/he should accept AVs (based on Venkatesh et al., 2003).
Example references: Deb et al. (2017); Rahman et al. (2019)

Factor: Perceived benefit
Definition: The degree to which an individual believes that the use of AVs will lead to benefits that are important to him/her.
Example references: Liu et al. (2019); Penmetsa et al. (2019)
affected intended crossing behavior, and Faas et al. (2021, pp. 1–17)
concluded that a status eHMI may lead to pedestrians over-trusting AVs.
3.1.3. Learned trust in AVs
Learned trust refers to an individual’s perception of the performance
of AVs based on experience and knowledge. People tend to base de-
cisions on prior experience. Experience can change based on each
interaction, and even during interaction. Thus, learned trust in AVs may
vary. Learned trust can, however, also be linked to more stable per-
ceptions—for example, the reputation of a vehicle or technology brand
(e.g., Reig et al., 2018, pp. 198–209).
In this study, learned trust is linked to non-users’ experience with
and knowledge of AVs. For most people, these factors are not based on
professional knowledge but rather reect an ‘outside’ perspective. Our
review revealed five primary sources of learned trust: perceived risk,
previous interaction (including experiences of malfunction occur-
rences), perceived safety (including compatibility), brand reputation,
and familiarity. For example, non-users’ trust in AVs could be affected
by their interpretations of vehicle brands (Reig et al., 2018, pp.
198–209) and familiarity with AVs can affect intended crossing behavior
(Rad et al., 2020).
3.1.4. Social influence
As social beings, our behaviors are often shaped by those around us.
Social influence can explain technology usage, reflecting that individuals are influenced by how others will view them after using the technology (Venkatesh et al., 2003). Faced with new technology, individuals may look to others such as influential people (e.g., influencers and celebrities) and important people (e.g., family and friends) for cues
on how to react (M. M. Rahman et al., 2019). This endorsement, whether
explicit or implicit, serves as a form of validation, signaling that the
technology is useful, trustworthy, or trendy.
In this study, social influence is considered important when non-users are exposed to AVs, which may be unfamiliar but also represent an important change in traffic, requiring consideration of advantages and risks. Others' testimonies can be valuable. Receptivity may stem from arguments for fewer accidents and better collaboration between traffic actors. On the contrary, Rahman et al. (2019) discovered that older adults' road-crossing behavior was not influenced by social norms.
3.1.5. Perceived benefit
Since AV adoption could be associated with risk, safety concerns, and a loss of control, the potential benefits of AVs are important since individuals may be more positive if they consider potential risks to be outweighed by perceived benefits (Liu et al., 2019; Penmetsa et al., 2019). In this study, perceived benefit is understood as the extent to which a non-user of AVs believes that their usage will lead to benefits that are important to him or her. Contrary to social influence, in which an individual acknowledges the influence of important others, perceived benefit is a result of individual consideration. Whereas these benefits may be individually motivated, such as believing that AVs will lead to fewer accidents involving bicycles, they may also be motivated by other reasons such as the increased mobility of a parent. Our review revealed six primary sources of perceived benefit: increased safety, reduced traffic congestion, reduced vehicle emissions and pollution, improved fuel economy, reduced transport costs, and increased mobility. Research suggests that perceived benefits are more influential in determining AV receptivity than perceived risks (Liu et al., 2019; M. M. Rahman et al., 2019).
3.1.6. Moderating effects
Key relationships in technology acceptance models are moderated
(cf., Venkatesh et al., 2003). Our review mostly revealed moderating effects related to situational factors. The effects of crossing time, speed of the approaching FAV, and the use of an external FAV interface are moderated by age, with older adults being more hesitant to cross the road than their younger counterparts, as they use more time in crossing the street, are more skeptical of fast-approaching FAVs, and prefer to be signaled by an external interface on the FAV (e.g., Deb et al., 2018; Dommès et al., 2021; Jayaraman et al., 2019; Rad et al., 2020). Furthermore, female participants were found to be more receptive toward FAVs in terms of feature ratings compared to male participants (Deb et al., 2018). Additionally, street-crossing may be initiated sooner if the eHMI shows intention in addition to status (Faas et al., 2021, pp. 1–17). Jayaraman et al. (2019) found that signalized crosswalks moderated intent to cross the street. In addition to the situational factors, Li et al. (2023) suggested that prior experience with advanced driving assistance systems (ADAS) could affect bicyclists' behaviors toward them, encouraging other researchers to control for this moderating effect.
4. Model development
Based on the literature, we have developed a research model of AV receptivity, reflecting the factors of dispositional trust and learned trust (through perceived reliability). While situational trust and social influence are salient in the broader understanding of AV receptivity, their exclusion from the research model is based on the current context, in which direct interaction with FAVs is not yet a common experience. Therefore, we expect the effect of social influence to be small (if present). Situational factors such as driving behavior and weather conditions are hard to imagine, even considering detailed scenarios. Moreover, whereas dispositional trust and learned trust can be explained by several factors, this study has selected two salient explanations from the literature for each of these constructs. Fig. 1 presents the developed and tested research model. The rationale for the hypotheses is outlined below.
4.1. Receptivity explained by inherent dispositions (H1-3)
Personal innovativeness can be understood as an individual's willingness to try something new when faced with the unknown (Agarwal & Prasad, 1998; Deb et al., 2017). Highly innovative individuals have strong intrinsic motivations to try out new technologies, reflecting their innovative mindset. An innovative mindset is fueled by curiosity, which in turn engenders a form of trust not contingent upon established familiarity or proven functionality but instead on the anticipation of future gains and the intrinsic value seen in innovation itself. This trait has been observed to impact attitudes, behaviors, and perceptions of societal norms (Lee et al., 2007). Innovative individuals typically possess more optimistic views about technology (Agarwal & Prasad, 1998). In their study, Iranmanesh et al. (2017) found that those who score highly on personal innovativeness are better at coping with uncertainties and tend to be first-movers. Consistent with prior research, Deb et al. (2017) discovered that heightened personal innovativeness bolsters pedestrians' receptivity to FAVs. We hypothesized that:
H1. Personal innovativeness is positively associated with the pro-
pensity to trust technology.
Risk propensity is understood as the predominant or prevailing tendency of an individual to engage in risky behaviors (based on Hulse et al., 2018). The psychological factors that explain an individual's willingness to engage with the unknown or to endure potential adverse consequences such as fatal bicyclist accidents are also influential in shaping attitudes toward technology (e.g., Cristea & Gheorghiu, 2016). The literature suggests that the perceived risk of using technology can lead to resistance to adopting new technologies (Hong et al., 2020). Individuals with a high propensity for risk are typically characterized by a forward-looking optimism and a greater acceptance of potential failure as a component of progress. This outlook naturally extends to their perceptions of technology, where the benefits of engaging with new systems, tools, or platforms are often weighed against the potential risks. Those who are more prone to risky behaviors will seek risk and appreciate new technologies with uncertain outcomes. Conversely, a prevention-oriented approach makes it difficult to accept autonomous technology in situations that are characterized by uncertainty (Egala & Liang, 2023). The argument here is that the same psychological traits that incline an individual toward risk-taking also foster a propensity to trust in the capabilities and promises of technology. In line with this argument, we hypothesized that:
H2. Risk propensity is positively associated with the propensity to
trust technology.
Researchers suggest that in the same way people are predisposed to trust or mistrust others, they might also be inclined to trust or be skeptical of machines (Merritt & Ilgen, 2008; Muir & Moray, 1996). The underlying idea is that those who naturally lean toward trusting others are more likely to exhibit higher levels of trust when they first encounter a new entity. In this study, a propensity to trust technology describes a bicyclist's consistent, long-term inclination to trust FAVs, regardless of the peculiarities of a specific situation (Hoff & Bashir, 2015; Zhou et al., 2022). Unlike trust that is situation-dependent or acquired through experience, this trust remains constant, shaped by both biological and external factors (Hoff & Bashir, 2015; Lazanyi & Maraczi, 2017, pp. 135–140). The level of this inherent trust can influence how individuals perceive information and determine their readiness to interact with FAVs (Lazanyi & Maraczi, 2017, pp. 135–140). Since dispositional trust is situationally independent, the characteristics of FAVs are expected to have little or no influence on it. Based on this, we hypothesized that:
H3a. Propensity to trust technology is associated with a positive atti-
tude toward interaction.
H3b. Propensity to trust technology is positively associated with a
behavioral intention to interact.
4.2. Receptivity as a result of learning (H4-6)
Brand reputation has been shown to impact trust in AVs (Hengstler et al., 2016; Reig et al., 2018, pp. 198–209). In this study, brand reputation importance refers to the extent to which the reputation of FAV technology brands is important for bicyclists (Li et al., 2023). Most bicyclists are unlikely to have had interaction experiences with AVs, so their perceptions of these vehicles may be influenced by brand reputation (Zhou et al., 2022). Research in different areas has shown that a strong and positive reputation can play a significant role in fostering trust (Forster et al., 2018, pp. 118–128).
One study examined how trust is shaped by the brand in both medical devices and FAVs (Hengstler et al., 2016). Results revealed that trust in autonomous technology is dual-faceted, consisting of trust in the technology itself and trust in the company innovating it (Hengstler et al., 2016). Another study supported these results, finding varied perceptions of Uber affecting the perceived reliability of its technology (Reig et al., 2018, pp. 198–209). Whereas some of the study participants emphasized the size and established reputation of the company as explanations for trust, others pointed out fast vehicle development processes as an explanation for distrust. That study found that only one participant mentioned the vehicle brand, indicating that trust mainly centers around the technology producer rather than the vehicle manufacturer (Reig et al., 2018, pp. 198–209). We hypothesized that:
H4. Brand reputation importance is positively associated with
perceived reliability.
New technology is open to many possible and plausible in-
terpretations. In this study, we understand familiarity with FAVs as bi-
cyclists’ recognition, awareness, or understanding of FAVs through
previous experience or exposure (Gefen, 2000). Bicyclists might acquire
previous knowledge about FAVs from their past experiences with similar
technologies or other modes of transportation, such as aviation (Hoff &
Bashir, 2015; Papadimitriou et al., 2020).
Fig. 1. Research model of AV receptivity.
* The dashed lines show hypothesized moderating effects.

Research investigating non-users' understanding of AVs found that it is limited and, for many, confined to what they see on the streets or hear through word of mouth (Reig et al., 2018, pp. 198–209). However, others have noticed the driving patterns of AVs on the streets. Yet others use popular media as a source of information or had a direct experience with a company's AV or representative (Reig et al., 2018, pp. 198–209). Importantly, a lack of familiarity with AVs led to mistrust and fear of them as well as the creation of personal and creative explanations, such as concerns about the unreliability of sensors and misunderstandings about the vehicle's functionality (Reig et al., 2018, pp. 198–209). Information about fatalities has been shown to negatively influence pedestrians and bicyclists with less familiarity with AVs, whereas pedestrians and bicyclists with a high familiarity with AVs increase their safety perceptions regarding AVs and demand less strict regulations (M. T. Rahman et al., 2021). Similarly, Li et al. (2023) found that a group of bicyclists who had heard about FAVs maintained their position in the lane when exposed to an FAV, whereas the group of bicyclists who were unfamiliar with FAVs were more likely to move over. Khastgir et al. (2018) emphasized that incorrect information from external sources, such as media and marketing, could potentially result in both overtrust and mistrust of AVs. Based on this literature, we hypothesized that:
H5. Familiarity is positively associated with perceived reliability.
Interacting with FAVs involves situations in which people may lack complete volitional control over the behavior of interest. Although experience and familiarity with technology have been shown to improve trust in automated systems, trust beliefs are also dependent on how a specific technology is performing (Muir, 1994; Muir & Moray, 1996). Perceived reliability of FAVs refers to an individual's perception of the probability that FAVs will function correctly, effectively, and securely. If failures occur, they can still continue to perform their function, although with limited capacity. It indicates the reliability perception of bicyclists prior to interacting with FAVs (Hoff & Bashir, 2015). Reliability is paramount in an industry such as transportation, where technology is central to critical functions. Failure or inconsistent performance can have serious consequences, including injuries and endangering lives (cf., Renier et al., 2021). If non-users do not perceive FAVs as reliable, they are less likely to be receptive to them. There is much empirical evidence showing that trust declines when there is a defect in automation (Chang & Wang, 2023; Dietvorst et al., 2015; Egala & Liang, 2023), leading to further negative expectations of automation in subsequent encounters. Understanding people's readiness to interact with emerging technologies is crucial, especially in contexts where prevailing risk perceptions need to be addressed (Hengstler et al., 2016; McKnight et al., 2002). To summarize, perceived reliability affects the attitude of bicyclists as well as their intention to interact with FAVs. Thus, we hypothesized that:
H6a. Perceived reliability is associated with a positive attitude toward
interaction.
H6b. Perceived reliability is positively associated with the behavioral
intention to interact.
4.3. Behavioral intention to interact explained by the attitude toward interaction (H7)
Attitude toward behavior is defined as "an individual's positive or negative feelings (evaluative affect) about performing the target behavior" (Fishbein & Ajzen, 1975, p. 216) and can include perceptions of benefits, such as increased safety (M. M. Rahman et al., 2019). Attitude is a multifaceted construct that typically includes cognitive (beliefs about the attributes of FAVs), affective (feelings related to the use of FAVs), and behavioral (past experiences with or observations of FAVs) components. In this study, we focus on behavioral intention to interact with FAVs rather than actual interaction. The theory of planned behavior posits that having a favorable or unfavorable evaluation of a behavior will influence the intention to engage in this behavior (Ajzen, 2002). Humans tend to make decisions to minimize costs and maximize benefits as much as possible (Liu et al., 2019). If individuals hold favorable views of FAVs, believing that they are safe and efficient, such attitudes are likely to translate into a willingness to engage with this technology. Positive attitudes are thus indicative of an overall appraisal that the implementation of FAVs into the urban fabric will lead to positive outcomes, and thus, that interaction with them will be desirable. We therefore hypothesized that:
H7. Attitude toward interaction is positively associated with the
behavioral intention to interact.
4.4. Moderating effects (H8)
The model also controlled for moderating effects. Following extant research and suggestions for future research by Li et al. (2023), we consider the moderating effect of the bicyclists' prior experience with ADAS on technology receptivity. There are smart vehicles with ADAS on the market that aim to correct driving errors and support the driver (e.g., by measuring driver fatigue through face detection software) (Krügel et al., 2023). Although the participants may have prior knowledge of FAVs, ADAS are different in that they are not fully autonomous. Having a positive experience with these systems may influence an individual's perception of how likely FAVs are to malfunction, leading to a perception of fewer accidents, contrary to making this judgment based on descriptive information only (e.g., based on media coverage or word of mouth), which may increase the perceived probability that such events will happen (Charness et al., 2018; Reig et al., 2018, pp. 198–209). On the contrary, experiencing shortcomings of ADAS may lead participants to weaken their intention to interact with an FAV. We therefore hypothesized that:
H8a. Prior positive experience with ADAS moderates the effect of the
propensity to trust technology on behavioral intention to interact.
H8b. Prior positive experience with ADAS moderates the effect of
perceived reliability on behavioral intention to interact.
5. Method
5.1. Measurement
The final questionnaire consisted of three sections. The first section introduced the respondents to the topic and provided a video describing FAVs. The second section measured age, gender, educational level, residing area (urban/rural), occupation/work status, daily average cycling distance, previous interaction with FAVs, and ADAS experience. The respondents were asked if they watched the video. The final survey instrument is found in Appendix B. The questionnaire was designed after a review of validated measurement instruments. The items were adapted to the context of FAVs when necessary. A five-point Likert scale with the anchors (1) totally disagree and (5) totally agree was used to measure all items.
To measure personal innovativeness (PI), we used four items from Agarwal and Prasad (1998), adapted to this study context. The term "information technology" was changed by removing "information" since bicyclists may have other associations with this term, for example, expecting the use of a personal computer. An example item is "Among my peers, I am usually the first to try out new technologies".
We used five items from the risk propensity scale (Meertens & Lion, 2008) to measure risk propensity (RP). An example item is "I take risks regularly". The scale originally consisted of seven items; however, we removed two of them to avoid any domain-specific items in our questionnaire (e.g., "I do not take risks with my health"), instead measuring RP as a general tendency. One of the items was adapted to be measured using the same anchors as the rest of the items. Instead of having "risk avoider" and "risk seeker" as anchors to "I view myself as a …", the item
Propensity to trust technology (PTT) was measured by using four
adapted items of the propensity to trust scale (Hagenzieker et al., 2020;
Merritt et al., 2013). The term “trust in machines” was changed to “trust
in technology”, as this better suited our study, and the terms have
similar meanings. Our adapted questionnaire included four out of the
original six questions (e.g., “I usually trust technology until there is a
reason not to”), with two items omitted due to their lack of relevance to
our study (e.g., “In general, I would rely on a technology to assist me”).
Familiarity with FAVs (FF) was measured by adapting three items from Körber (2019, pp. 13–30) and Gefen (2000) to the context of this
study. An example item is “I already know how fully self-driving vehicles
work.” Even though the respondents most likely had not interacted with
AVs before, they may have become acquainted with the concept through
sources such as media coverage.
Based on the literature, we developed three items to measure the
brand reputation importance of the technology (BRT). We used the wording of similar studies (e.g., McKnight et al., 2002 on initial consumer trust) to avoid unclear, complex, and ambiguous language. The items were then included in a larger questionnaire in a preliminary test for validation, using a sample of 57 bicyclists. An example item is "It is important to me that well-respected technology brands are involved in the production of fully self-driving vehicles." The initial test revealed that the items performed well; BRT showed high convergent and discriminant validity as well as acceptable composite reliability (ρA = 0.898). The preliminary test was followed by an evaluation based on participant feedback on the questionnaire. After confirming the items' reliability and validity, they were finalized for inclusion in the final questionnaire.
Perceived reliability (PR) was measured by adapting four items from Körber (2019, pp. 13–30) to measure reliability in the context of automated vehicles. Items that did not fit the context of this study were omitted (e.g., "The system is capable of taking over complicated tasks"). An example item is "I believe fully self-driving vehicles can detect the environment correctly."
We measured receptivity to FAVs through two constructs: attitude
toward interaction (ATI) and behavioral intention to interact (BII). ATI
was measured using four items adapted from the theory of reasoned
action (Fishbein & Ajzen, 1975). An example item is “Crossing the road
in front of an approaching fully self-driving vehicle is a good idea”.
BII was measured through the willingness to cross the road in front of
an approaching FAV, following the description of a scenario. The sce-
nario the respondents faced was:
Background: You are a cyclist (alone) approaching a crosswalk in a city. The streets are busy but well organized. There are functioning traffic lights, and the crosswalks are clearly marked.
Situation: While approaching the crosswalk, you see that the traffic light for crossing the street is green. At the same time, you notice a fully self-driving vehicle approaching the crosswalk from the left side in the lane closest to you. The fully self-driving vehicle seems to adjust its speed, but it is not immediately clear whether it will stop completely at the crosswalk or not.
As a cyclist, what will your action be at the crosswalk when the fully
self-driving vehicle approaches?
We used three items from UTAUT measuring behavioral intention to
use technology (Venkatesh et al., 2003), adapting them to the context of
this study. In addition, one reversed item was added to avoid response
bias. An example item is “I predict I will cross the street in front of the
approaching fully self-driving vehicle”.
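Before aggregating item scores, a reversed item must be recoded back onto the same direction as the other items. A minimal sketch (Python; the responses shown are hypothetical, not the study's data):

```python
import numpy as np

def reverse_score(responses: np.ndarray, scale_max: int = 5) -> np.ndarray:
    """Reverse-code a Likert item: on a 1..scale_max scale, 1 becomes scale_max, etc."""
    return scale_max + 1 - responses

# Hypothetical responses to a reversed five-point item (not actual study data).
responses = np.array([1, 2, 3, 4, 5])
print(reverse_score(responses))  # [5 4 3 2 1]
```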
5.2. Preliminary test
A preliminary test of the model was conducted to identify potential issues with the questionnaire. The test used respondents (n = 57) recruited from social media groups focusing on amateur bicycling (posts approved by moderators). They were asked to fill out the self-administered questionnaire and could then comment on the wording and instructions. No restrictions were made regarding age, gender, work experience, etc., focusing on the representativeness of the respondents. A gift certificate was offered to increase the quality of the responses and the likelihood of participation. The gift certificate was handed out to one of the respondents after a draw. The preliminary test showed that the participants spent 7–9 min on average to complete the questionnaire. The reliability of the model met the conventional standard of internal consistency, with Cronbach's alphas falling between 0.90 and 0.95 for two of the constructs (FF and PTT), slightly above the recommended range of 0.70–0.90 (Diamantopoulos et al., 2012). The model met the requirements for discriminant validity. The questionnaire was refined based on the preliminary test (see Appendix A).
5.3. Participants
Participants were recruited from Prolific, a crowdsourcing platform for online subject recruitment, servicing researchers. The participants were informed about the research purposes of their participation, and the service included several filtering options, such as occupation, residency, hobbies, and use of transportation services.
Prolific was chosen because of its proven quality, transparency, filtering options, and population diversity (Palan & Schitter, 2018; Peer et al., 2017). The service is a good option to receive relevant responses within a reasonable time. Participants were filtered using the following criteria: (1) approval rating (≥95%), (2) commuting to work by bike, and (3) having English as a fluent language. The cognitive bias checklist by Draws et al. (2021) was used to inform our survey design. We used online survey software to collect data and paid £1.68 (£10.08/hr) for a 10-min survey on Prolific. The average hourly compensation was £9.85 and the median completion time was 10:14.
In total, 229 bicyclists completed the survey. Ten responses were removed because of failed attention checks, of which nine belonged to participants under the age of 30 years. Respondents' ages varied from 18 to 74 years, with an average of 35.6 years and a median of 33 years. Most of the respondents were male (n = 152, 69.4%), whereas 29.7% (n = 65) were female, and 0.9% identified themselves as other. Table 2 provides more details about the respondents.
6. Validation of the model
The hypotheses were examined using partial least squares structural equation modeling (PLS-SEM), adhering to guidelines suggested by Becker et al. (2023) and Hair et al. (2017, 2019). PLS-SEM is particularly useful when applied in exploratory and/or prediction-oriented research where the aim is to explain the variance in dependent variables (J. Hair et al., 2022).
Table 2
Descriptive information about the respondents (n = 219).

Gender: Female 65 (29.7%); Male 152 (69.4%); Other 2 (0.9%)
Age: Under 30 years 71 (32.5%); 30–39 years 80 (36.5%); 40–49 years 43 (19.6%); 50 years and older 25 (11.4%)
Education (highest completed): Primary school 0 (0.0%); High school/upper secondary school 21 (9.5%); Some college/university subjects 38 (17.4%); Associate's/Bachelor's degree 84 (38.4%); Graduate/Master's degree or higher 76 (34.7%)
Work status: Student/unemployed 23 (10.5%); Employed 194 (88.6%); Retired 2 (0.9%)
Area: Urban 18 (8.2%); Rural 201 (91.8%)
Daily average cycling distance (during the last two months of the cycling season): 0–3.99 km 49 (22.4%); 4–6.99 km 67 (30.6%); 7–9.99 km 52 (23.8%); 10–13.99 km 29 (13.2%); 14 km or more 22 (10.0%)
Watched video? Yes 177 (80.8%); No 42 (19.2%)
Interacted with a fully self-driving vehicle? Yes 20 (9.1%); No 199 (90.9%)
Previous experience with advanced driver assistance technology? Yes 38 (17.4%); No 181 (82.6%)
Due to the exploratory nature of this study, the limited existing research on the subject, and the predictive objectives of this research, PLS-SEM was considered an appropriate method for data analysis (J. Hair et al., 2022). The analysis was conducted in two steps: (1) assessment of the PLS model's reliability using all survey items (instrument validation) and (2) assessment of the hypothesized relationships in the research model (structural model validation).
6.1. Instrument validation (outer model)
All the items in the survey instrument were reflective. The instrument validation procedure was conducted in two steps, checking (1) construct validity and (2) construct reliability. Construct validity refers to the degree to which a measurement tool measures the construct it is intended to measure (O'Leary-Kelly & Vokurka, 1998) and is assessed by calculating convergent and discriminant validity.
Convergent validity is understood as the extent to which different items that are supposed to measure the same underlying construct are indeed related (J. Hair et al., 2022). The outer loadings (OLs) indicate the associations between the constructs and their respective items and are calculated for each construct. A common rule of thumb is that the OLs should be 0.707 or above (Fornell & Larcker, 1981); all items showed satisfactory values except RP3 and RP4. The model was modified by removing these items, and after the modification, all OLs were satisfactory. We further analyzed the average variance extracted (AVE) of the constructs. AVE values must be above the recommended threshold of 0.50 (J. Hair et al., 2022), and all values satisfied this criterion.
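To make the two convergent validity checks concrete, the sketch below (Python; the loadings are illustrative, not the study's estimates) verifies the 0.707 rule of thumb and computes AVE as the mean of the squared standardized loadings:

```python
import numpy as np

# Illustrative standardized outer loadings for one reflective construct.
loadings = np.array([0.82, 0.76, 0.88, 0.79])

# Rule of thumb: each OL >= 0.707, so the construct explains more than
# half of each item's variance (0.707^2 ~ 0.50).
ols_ok = np.all(loadings >= 0.707)

# AVE = mean of squared standardized loadings; should exceed 0.50.
ave = np.mean(loadings ** 2)

print(ols_ok, round(ave, 3))  # True 0.662
```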
Discriminant validity evaluates whether a construct is truly distinct from other constructs, meaning that each construct captures a phenomenon that is not represented by any of the other constructs. We used the heterotrait-monotrait ratio of correlations (HTMT) criterion to assess this (Henseler et al., 2015). Our analysis showed that the HTMT values were satisfactory. Furthermore, we found that the square root of each construct's AVE was higher than the correlations between the construct and other constructs, satisfying the Fornell–Larcker criterion (Fornell & Larcker, 1981). Appendix C provides an overview of the results.
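A sketch of the HTMT computation for two constructs (Python; simplified from Henseler et al., 2015, and not the study's analysis code). HTMT compares the average between-construct item correlations to the geometric mean of the average within-construct item correlations; values below the common 0.85 (or 0.90) threshold indicate discriminant validity:

```python
import numpy as np

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    """Heterotrait-monotrait ratio for constructs A and B.

    items_a, items_b: (n_respondents, n_items) matrices of item scores.
    """
    ka, kb = items_a.shape[1], items_b.shape[1]
    r = np.corrcoef(items_a.T, items_b.T)                 # (ka+kb) x (ka+kb) correlations
    hetero = r[:ka, ka:].mean()                           # between-construct correlations
    mono_a = r[:ka, :ka][np.triu_indices(ka, 1)].mean()   # within construct A
    mono_b = r[ka:, ka:][np.triu_indices(kb, 1)].mean()   # within construct B
    return hetero / np.sqrt(mono_a * mono_b)
```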
In the second step, we tested construct reliability, understood as the extent to which multiple items, intended to measure the same construct, are reliable, that is, they produce consistent results. Construct reliability was evaluated by studying Cronbach's alpha (CA) and composite reliability (CR) using rho_A (ρA) (Dijkstra & Henseler, 2015; J. Hair et al., 2022). The analysis showed that all CA and CR values were above the recommended value of 0.70 (J. Hair et al., 2022). Some of the values were above 0.90, which can harm content validity (J. Hair et al., 2022). Appendix C shows the reliability and validity metrics, demonstrating measurement quality.
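The two reliability statistics can be illustrated as follows (Python sketch; note that the article reports Dijkstra-Henseler's ρA, whereas the composite reliability shown here is the simpler Jöreskog ρc formula, used purely for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k) matrix. alpha = k/(k-1) * (1 - sum item var / total var)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Joreskog's rho_c from standardized loadings: (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())
```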
6.2. Structural model validation (inner model)
To assess the structural model, we examined the statistical significance of the relationships between the constructs, along with the size of the corresponding path coefficients. Fig. 2 shows the research model with path coefficients (β), hypotheses, and the explained variance of the endogenous variables (R²) without moderating effects. Table 3 presents a summary of the results.
As shown in Table 3, eight out of nine hypotheses were empirically supported. PI and RP had significant positive impacts on PTT, explaining 31.1% of its variance. BRT and FF significantly influenced PR, explaining 17.2% of its variance. For BII, PR, PTT, and ATI explained 38.6% of the variance.
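The t statistics reported in Table 3 come from PLS-SEM bootstrapping, in which the model is re-estimated on resampled data and each path estimate is divided by its bootstrap standard error. A simplified single-path stand-in (Python; real analyses re-run the full PLS algorithm on each resample, e.g., in SmartPLS or seminr):

```python
import numpy as np

rng = np.random.default_rng(42)

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized coefficient for a single-predictor path (stand-in for a PLS path)."""
    return float(np.corrcoef(x, y)[0, 1])

def bootstrap_t(x: np.ndarray, y: np.ndarray, n_boot: int = 5000) -> float:
    """t = estimate / bootstrap standard error, mirroring PLS-SEM bootstrapping."""
    n = len(x)
    estimate = path_coefficient(x, y)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        boot[b] = path_coefficient(x[idx], y[idx])
    return estimate / boot.std(ddof=1)
```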
The model was further evaluated by examining effect size (f²). This procedure assesses how the exogenous constructs contribute to the endogenous constructs by simulating the inclusion and exclusion of the exogenous constructs (Hair et al., 2022). The analysis showed different effects on the endogenous constructs: large effects for PI→PTT, PR→ATI, and ATI→BII, medium effects for FF→PR, BRT→PR, PR→BII, and RP→PTT, and effects below the acceptable level for PTT→ATI and PTT→BII.

Fig. 2. Results of structural model tests.

Table 3
Summary of the structural model test results.

Hypothesized path | β | T statistics | P value | Hypothesis supported? | Effect size (f²)
H1: PI → PTT | 0.475 | 7.958 | 0.000 | Yes | 0.301
H2: RP → PTT | 0.204 | 3.605 | 0.000 | Yes | 0.056
H3a: PTT → ATI | 0.109 | 1.748 | 0.040 | Yes | 0.012
H3b: PTT → BII | 0.073 | 1.175 | 0.120 | n.s. | 0.006
H4: BRT → PR | 0.241 | 3.502 | 0.000 | Yes | 0.070
H5: FF → PR | 0.333 | 5.609 | 0.000 | Yes | 0.134
H6a: PR → ATI | 0.451 | 6.819 | 0.000 | Yes | 0.204
H6b: PR → BII | 0.258 | 3.310 | 0.000 | Yes | 0.066
H7: ATI → BII | 0.409 | 5.463 | 0.000 | Yes | 0.200
n.s. = non-significant.
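For reference, the effect sizes in Table 3 are Cohen's f², computed from the change in explained variance when the exogenous construct is excluded from the model; the conventional benchmarks are 0.02 (small), 0.15 (medium), and 0.35 (large):

\[
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
\]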
We thereafter analyzed the results using the PLSpredict procedure (Shmueli et al., 2019). All the items of the endogenous constructs had a smaller prediction error than the mean value prediction benchmark (i.e., Q²predict > 0). Furthermore, PLS-SEM had a lower prediction error in terms of the root mean square error (RMSE) than the linear model (LM) benchmark for most of the items (see Appendix C). Therefore, we concluded that the model had medium predictive power (Shmueli et al., 2019). Moreover, the cross-validated predictive ability test (CVPAT) showed that PLS-SEM demonstrated a significantly lower average loss than the LM benchmark (Sharma et al., 2022).
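The two PLSpredict summary statistics can be sketched as follows (Python; PLSpredict itself generates the out-of-sample predictions via k-fold cross-validation of the full model (Shmueli et al., 2019), which this fragment assumes has already been done):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square prediction error for one indicator."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def q2_predict(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Q2_predict > 0 means the model outpredicts the naive mean-value benchmark."""
    ss_err = np.sum((y_true - y_pred) ** 2)
    ss_mean = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_err / ss_mean)

# An item supports predictive power when q2_predict(y, pls_pred) > 0 and
# rmse(y, pls_pred) < rmse(y, lm_pred), i.e., PLS-SEM beats the LM benchmark.
```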
The study assessed the moderating effect of experience with ADAS (EXP) on the relationships between PTT/PR and BII. The moderating effects were tested sequentially and showed significant effects. The results are presented in Appendix C. Our analysis showed negative and significant moderating effects of EXP on the relationships between PTT and BII and between PR and BII, supporting H8a and H8b. This means that for bicyclists with ADAS experience, the positive effects of PTT and PR on the intention to interact with the approaching FAV were weaker. Without the inclusion of the moderating effects, the R² value for BII was 0.386. With the inclusion of the interaction terms (EXP*PTT and EXP*PR), the R² values increased to 0.402 and 0.404, respectively. This shows an increase of 1.6% in variance explained in BII when moderating the relationship between PTT and BII and 1.8% when moderating the relationship between PR and BII.
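A moderation test of this kind adds a product term to the model and examines the resulting change in R². A simulated regression analogue (Python; stand-in data, not the study's; in PLS-SEM the interaction term is built from latent variable scores):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 219
ptt = rng.standard_normal(n)            # propensity to trust technology (standardized)
exp_adas = rng.integers(0, 2, n)        # ADAS experience, coded 0/1
# Simulated BII with a negative interaction, mirroring the direction of H8a.
bii = 0.3 * ptt - 0.2 * exp_adas * ptt + rng.standard_normal(n)

def r_squared(predictors: list, y: np.ndarray) -> float:
    """R^2 of an OLS regression of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(1 - resid.var() / y.var())

base = r_squared([ptt, exp_adas], bii)
moderated = r_squared([ptt, exp_adas, exp_adas * ptt], bii)
print(f"Delta R^2 from the interaction term: {moderated - base:.3f}")
```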
7. Discussion
This study explored non-users' receptivity to autonomous technology, specifically focusing on FAVs. It identified five key factors influencing technology receptivity: dispositional, situational, and learned trust in technology, social influence, and perceived benefit. A theoretical model excluding real-life FAV interaction factors (i.e., situational trust and social influence) was developed and empirically tested with a survey of 219 bicyclists. The model is depicted in Fig. 1. The findings supported the model, highlighting three direct determinants of BII and two of ATI, and showed that the model could predict AV receptivity based on individual characteristics and perceived AV reliability. When data were analyzed without including prior experience with ADAS as a moderator, the effect of the propensity to trust technology on behavioral intention to interact was nonsignificant. The structural model analysis is depicted in Fig. 2 and Table 3 summarizes the findings.
Prior to discussing the implications of this work, it is necessary to recognize its limitations. First, the analysis showed low or negligible effects for some of the constructs, suggesting that the way these constructs are measured or conceptualized should be revisited. In this respect, the scales used to measure the constructs in the model constitute a limitation. The measures included here should be considered preliminary, and future researchers should develop and validate fuller and more appropriate scales for several of the constructs. For example, the construct BII has uniformly high-loading items. Future research should operationalize the constructs with a stronger focus on content validity and then revalidate the model or extend it accordingly with new measures (cf. Venkatesh et al., 2003). Second, this study omitted situational trust and social influence as explanations of autonomous technology receptivity. Only 9.1% of the participants had experience with AVs, which is a common challenge in current social science studies of AVs (Li et al., 2023). The exclusion of some of the identified constructs calls for future research. Situational factors are known to be important in settings where risk is involved (e.g., Cristea & Gheorghiu, 2016). Given that experiments can enhance our understanding of technology receptivity, future research should investigate bicyclists' receptivity toward FAVs using participants who have been exposed to FAVs (Li et al., 2023). Third, this study is cross-sectional, capturing the attitudes of individuals at early stages of AV diffusion. The lack of longitudinal data limits the study's ability to assess changes in receptivity and behavioral intention over time. Future research should target longitudinal studies to capture how receptivity develops as non-users become more exposed to these evolving technologies. Fourth, although some moderating effects (like experience with ADAS) were considered, other potential moderators, such as perceived situational risk, were not fully explored. This could limit the understanding of how different factors interact to influence AV receptivity, and future research could explore additional potential moderating effects to better understand individual perceptions about technology (cf. Venkatesh et al., 2003).
7.1. Theoretical implications
From a theoretical perspective, we have developed a model of AV receptivity, representing early theoretical advancements regarding non-user acceptance of technology. We expect this to become more important in the future as autonomous technologies such as AVs and generative AI are implemented and used.
Our model showed a nonsignificant relationship between PTT and BII. We know from technology acceptance work that latent variables can be significant only under certain conditions (e.g., in cases of mandatory use) or when used by specific groups of users (e.g., older adults). Whereas this literature has identified several moderating effects (cf. Venkatesh et al., 2003), our review suggests that positive experiences with ADAS may positively influence receptivity toward FAVs. Our analysis showed that EXP moderated the relationship between PTT and BII and that this relationship became significant once this moderating effect was considered. It further showed that the relationship between PR and BII was moderated. Our findings indicated that the experience with ADAS was negative, weakening the effect of PTT and PR on BII (cf. Charness et al., 2018; Reig et al., 2018, pp. 198–209). Considering Venkatesh et al.'s (2003) argument that "it is only when one considers the complex range of potential moderating influences that a more complete picture of the dynamic nature of individual perceptions about technology begins to emerge" (p. 470), understanding moderating effects may be important to better understand the complexity of AV receptivity.
The analysis of effect sizes showed that certain constructs are crucial in determining other constructs. For example, PI is a critical factor in determining PTT, and changes in PI will likely lead to significant changes in PTT. Furthermore, the analysis indicates a substantial influence of PR on ATI, meaning that PR is important for ATI. However, it may be valuable to identify other predictors or antecedents of PTT and ATI. A substantial influence of ATI on BII was also identified, meaning that improvements in ATI would likely result in noticeable changes in BII. Constructs with large effect sizes are of high practical significance and should be the focus of interventions or changes where the aim is to influence the associated endogenous constructs. However, small and negligible effects were also identified. The analysis suggests that changes in PTT will have little to no effect on ATI, implying that other factors are more important in determining ATI. Furthermore, our analysis showed an insignificant or negligible influence of PTT on BII, which suggests that PTT does not play a meaningful role in shaping BII in this model. Given the negligible effects of PTT on ATI and BII, it may be worth revisiting how PTT is measured or conceptualized. There is a possibility that PTT is not adequately captured in this study.
Whereas attitude toward behavior has been left out of technology acceptance models (e.g., UTAUT), it may play an important role in technology receptivity research since there is a difference between having a positive attitude toward a behavior and carrying out the behavior, especially when the latter is associated with risk. Risk-taking is a complex phenomenon influenced by several factors, including individual characteristics and the situation (Cristea & Gheorghiu, 2016; Figner & Weber, 2011). The positive evaluation of bicyclists faced with potentially risky behaviors could be an important explanation for BII (Cristea & Gheorghiu, 2016). Furthermore, the model was built by selecting some of the explanations for PTT and PR. These constructs were selected based on their salience in the AV receptivity literature. As our review shows, other explanations have also been posed, and future researchers can explore these and others as antecedents to PTT and PR.
The dependent variable of the model, AV receptivity, is measured by the potential to interact with FAVs through attitude and behavioral intention. The level of AV receptivity is interesting from a theoretical perspective. Researchers may investigate consequences of the AV receptivity level. For example, it can help explain marketing strategies. High AV receptivity among non-users may suggest that marketers can focus on advanced features, safety improvements, and environmental benefits of AVs. On the contrary, among non-users with low AV receptivity, marketing strategies might need to address basic concerns such as reliability, safety testing, and the regulatory compliance of AVs. Another potential avenue is the link to usage. For example, private use of FAVs will most likely increase if non-users accept their presence in the streets.
Social influence has repeatedly been identified as the least effective predictor of behavioral intention (Armitage & Conner, 2001). Whereas it has predicted technology acceptance in voluntary settings, technology receptivity research may show that descriptive norms (i.e., what others do without implying what a correct behavior would be) are more salient. Descriptive norms have been identified as good predictors of risky behaviors in traffic (Cestac et al., 2014). If crossing behavior is perceived as risky, what others do could be a salient predictor of crossing behavior. However, social influence may affect perceptions about FAVs. Cristea and Gheorghiu (2016) suggested that significant others' approval of a behavior may lead to a more positive attitude toward that same behavior. Moreover, they found that the social influence on attitudes varied according to the situation in question (Cristea & Gheorghiu, 2016).
Theoretically extending the model to study other autonomous technologies offers a promising avenue for research. Other prominent application areas of autonomous technology include generative artificial intelligence, algorithmic trading, patient diagnosis, and manufacturing. However, the consequences of interacting with these systems differ widely, warranting more attention from researchers (Hannon et al., 2022). For example, the consequences of incorrect medical advice, erroneous wealth management, and fatal bicyclist accidents far exceed those of mistaken movie recommendations (Chang & Wang, 2023; Egala & Liang, 2023; Hannon et al., 2022; Renier et al., 2021). One critical aspect in this respect is the role of perceived reliability. For autonomous technologies to be embraced, users must trust not only their current capabilities but also their potential for consistent performance and improvement. Considering the sometimes-grandiose promises of generative AI, non-users need to trust not only its current capabilities but also its potential for consistent performance. In healthcare, for example, the ability of generative AI to analyze vast datasets could significantly advance diagnosis and treatment options, provided that patients are willing to accept the recommendations of the technology. Similarly, in the educational sector, generative AI is used experimentally, for example, to provide grading explanations to students. This practice will require the acceptance of students. An extension of the model to other autonomous technologies may suggest other antecedents to the main constructs of the model. For example, algorithmic bias may be a worry of students if generative AI is used in education.
7.2. Practical implications
From a practical perspective, this research raises several interesting questions. One important issue is the link between PR, ATI, and BII. Whereas much of the extant empirical research has investigated situational trust through the use of eHMIs, this study shows that other explanations also contribute to heightened AV receptivity, for example, PR explained by factors such as BRT and FF, as investigated in this study. As non-users gain actual interaction experience with AVs, FF will become more important. However, in an era where knowledge of AVs is still low, AVs must be introduced to the market in a way that allows society to gradually build learned trust in them. Increased learned trust in AVs can be achieved through positive publicity about AVs as well as through increasing the transparency of AV development processes (Zhou et al., 2022).
Another perspective pertains to the importance of PTT. The relationship of the construct with BII was only significant when a moderator, EXP, was introduced to the model. This finding could suggest that even though an individual may have a positive attitude toward FAVs, an actual interaction will depend on other factors such as EXP. This experience could, of course, be both positive and negative, which ultimately affects the attitude of the non-users. For example, positive experiences with ADAS may reduce anxiety or increase comfort with ceding control to technology, thereby increasing the willingness to interact with FAVs. Marketing efforts could be directed toward non-users with positive ADAS experiences if these individuals could be identified. The marketing could then highlight similarities between ADAS and FAV functionalities, leveraging trust and familiarity to overcome barriers to acceptance among the public.
The level of AV receptivity may signal when the market is ready for a wider diffusion of AV technology. For AVs to deliver on their potential benefits, it is imperative that non-users develop a certain level of receptivity to avoid an ultimate failure. For a successful diffusion of FAVs, practice should further take into consideration the factors that come into play when actual interaction takes place.
8. Concluding remarks
We can expect autonomous technology to be more prevalent in future society. AVs represent a frontier of innovation that can redefine industries, job roles, and social norms. Therefore, it becomes imperative to anticipate how this technology will affect us to mitigate potential adverse consequences and address concerns among the public. Although the model presented here has a parsimonious structure, it provides opportunities for researchers to further explore the antecedents and consequences of AV interaction. Future research should identify constructs that can add to the prediction of attitudes and intentions toward AVs, beyond what is currently known and understood. Moreover, as AVs become more integrated into our urban fabric, research should focus on actual behavior and consequences in addition to attitudes and intention. By studying actual behavior in naturalistic settings, researchers can gain a clearer understanding of the real-world challenges and opportunities that come with the widespread adoption of autonomous technology. Whereas IS research has mainly been concerned with individual technology user acceptance, the present work advances research by focusing on the acceptance of individuals who do not use technology but are still affected by it in their daily lives. Moreover, this research has reviewed and synthesized empirical literature to address the use of multiple constructs and to develop a parsimonious model that can progress our understanding of AV receptivity. The model can also provide a foundation for researchers focusing on other types of autonomous technologies.
CRediT authorship contribution statement
Peter André Busch: Writing – review & editing, Writing – original draft, Validation, Project administration, Methodology, Formal analysis, Data curation, Conceptualization.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
Appendices.
Appendix A: Measurement instrument refinement

The results and evaluations of the preliminary test model suggested that some of the items should be removed or replaced by new items. In particular, two issues motivated the refinement of the measurement instrument: (1) low-loading reflective items, and (2) some of the constructs having too high ICR, threatening content validity (J. Hair et al., 2022; Venkatesh et al., 2003).

(1) Three of the items (PR2, RP3, and RP4) loaded too low according to the recommended threshold of 0.707 (Fornell & Larcker, 1981); see the note after this list. Two of the items (RP3 and RP4) were deleted, and one item (PR2) was refined to address this issue.

(2) Two of the constructs had a high ICR, which often is not desirable (Diamantopoulos et al., 2012). Whereas a selection of items based on their loadings or corrected item-total correlations is frequently suggested in the psychometric literature (Venkatesh et al., 2003), this approach would favor a homogeneous instrument and may compromise content validity due to the reduced breadth of domain coverage (Venkatesh et al., 2003). The items were refined to address this issue.
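The 0.707 threshold mentioned in point (1) reflects the indicator-reliability logic of Fornell and Larcker (1981): with a standardized outer loading λ, the construct explains λ² of the indicator's variance, so

\[ \lambda \geq 0.707 \;\Rightarrow\; \lambda^2 \geq 0.5, \]

i.e., the construct accounts for at least half of the variance of each retained item.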
Table A1 provides a list of items that are either refined or removed in the developed model's final instrument.
Table A1
Refined and removed items compared to the preliminary test model

Construct | Item # | Original item | Action | Reason for removal / adjusted item
RP | RP3 | I really dislike not knowing what is going to happen. [R] | Removed | Low loading
RP | RP4 | I usually view risks as a challenge. | Removed | Low loading
PTT | PTT2 | For the most part, I distrust technology. [R] | Removed | Content validity
ATI | ATI1 | Crossing the street in front of an approaching fully self-driving vehicle is a good idea. | Removed | Content validity
FF | FF2 | I am unfamiliar with how self-driving vehicles work. [R] | Modified | Compared to others, I think I am quite familiar with how fully self-driving vehicles work.
FF | FF3 | I am aware of how self-driving vehicles work. | Modified | I am familiar with fully autonomous driving.
PR | PR2 | I believe self-driving vehicles are likely to malfunction. [R] | Modified | I believe fully self-driving vehicles are likely to produce serious errors. [R]
Appendix B: Final measurement instrument
Table B1
Final measurement instrument

Construct and origin | Item | Wording | M | SD
Personal innovativeness (PI); Agarwal and Prasad (1998) | PI1 | If I heard about a new technology, I would look for ways to experiment with it. | 3.63 | 0.99
 | PI2 | Among my peers, I am usually the first to try out new technologies. | 2.99 | 1.08
 | PI3 | In general, I am hesitant to try out new technologies. [R] | 3.70 | 1.04
 | PI4 | I like to experiment with new technologies. | 3.68 | 1.01
Risk propensity (RP); Meertens and Lion (2008) | RP1 | I prefer to avoid risks. [R] | 2.58 | 1.02
 | RP2 | I take risks regularly. | 2.48 | 0.98
 | RP5 | I view myself as a risk-seeker. | 2.27 | 1.03
Propensity to trust technology (PTT); adapted from Hagenzieker et al. (2020) and Merritt et al. (2013) | PTT1 | I usually trust technology until it gives me a reason not to. | 3.77 | 0.93
 | PTT3 | I am likely to trust technology even when I have little knowledge about it. | 3.23 | 0.99
 | PTT4 | My tendency to trust technology is high. | 3.63 | 0.96
Brand reputation importance – technology (BRT) | BRT1 | It is important to me that well-respected technology brands are involved in the production of fully self-driving vehicles. | 3.68 | 1.04
 | BRT2 | The reputation of the technology brands involved in producing fully self-driving vehicles is important to me. | 3.74 | 1.02
 | BRT3 | It is important for me that reputable technology brands contribute in the production of fully self-driving vehicles. | 3.68 | 1.02
Familiarity with FAVs (FF); adapted from Gefen (2000) and Körber (2019) | FF1 | I already know how fully self-driving vehicles work. | 3.17 | 1.01
 | FF2 | Compared to others, I think I am quite familiar with how fully self-driving vehicles work. | 3.00 | 1.04
 | FF3 | I am familiar with fully autonomous driving. | 2.77 | 1.11
Perceived reliability (PR); adapted from Fishbein and Ajzen (1975) and Körber (2019) | PR1 | I believe fully self-driving vehicles can detect the environment correctly. | 3.11 | 0.99
 | PR2 | I believe fully self-driving vehicles are likely to produce serious errors. [R] | 2.75 | 0.99
 | PR3 | I believe fully self-driving vehicles are capable of interpreting situations correctly. | 3.19 | 0.90
 | PR4 | I believe fully self-driving vehicles are reliable. | 3.10 | 0.91
Attitude toward interaction (ATI); adapted from Fishbein and Ajzen (1975) | ATI2 | I like the idea of crossing the street in front of an approaching self-driving vehicle. | 2.20 | 1.03
 | ATI3 | The idea of crossing the street in front of an approaching self-driving vehicle is pleasant. | 2.07 | 0.92
 | ATI4 | Crossing the street in front of an approaching self-driving vehicle is a foolish idea. [R] | 2.72 | 1.16
Behavioral intention to interact (BII); adapted from Venkatesh et al. (2003) | BII1 | I imagine I will cross the street in front of the approaching self-driving vehicle. | 2.70 | 1.26
 | BII2 | I have no intention of crossing the street in front of the approaching self-driving vehicle. [R] | 2.78 | 1.33
 | BII5 | I predict I will cross the street in front of the approaching self-driving vehicle. | 2.84 | 1.26
M = Mean, SD = Standard Deviation.
Appendix C: Instrument and model validation results
Table C1
Crossloadings
Item ATI BII BRT FF PI PR PTT RP
ATI2 0.902 0.507 0.041 0.112 0.238 0.457 0.313 0.216
ATI3 0.917 0.492 0.080 0.217 0.260 0.476 0.345 0.208
ATI4 0.881 0.526 0.034 0.148 0.173 0.436 0.260 0.097
BII1 0.501 0.957 0.077 0.193 0.214 0.477 0.331 0.177
BII2 0.555 0.933 0.078 0.230 0.192 0.474 0.325 0.225
BII3 0.547 0.952 0.117 0.192 0.225 0.478 0.323 0.122
BRT1 0.038 0.089 0.894 0.014 0.028 0.204 0.043 −0.029
BRT2 0.040 0.091 0.910 0.040 0.038 0.226 0.084 0.037
BRT3 0.076 0.083 0.930 0.001 0.090 0.243 0.128 0.067
F1 0.151 0.170 0.037 0.886 0.280 0.290 0.070 0.140
F2 0.210 0.280 −0.030 0.916 0.452 0.355 0.213 0.259
F3 0.079 0.082 0.068 0.841 0.317 0.225 0.064 0.184
PI1 0.206 0.148 0.009 0.339 0.861 0.262 0.475 0.308
PI2 0.211 0.208 0.037 0.414 0.828 0.298 0.390 0.269
PI3 0.186 0.187 0.101 0.247 0.775 0.290 0.472 0.199
PI4 0.228 0.202 0.049 0.370 0.876 0.276 0.464 0.325
PR1 0.428 0.480 0.250 0.270 0.334 0.855 0.504 0.335
PR2 0.399 0.351 0.120 0.225 0.203 0.712 0.377 0.108
PR3 0.371 0.403 0.252 0.328 0.273 0.822 0.357 0.168
PR4 0.469 0.409 0.178 0.284 0.284 0.888 0.439 0.223
PTT1 0.311 0.315 0.136 0.126 0.513 0.532 0.914 0.280
PTT3 0.290 0.304 0.062 0.086 0.441 0.389 0.872 0.310
PTT4 0.319 0.313 0.061 0.171 0.512 0.463 0.920 0.385
RP1 0.129 0.175 −0.009 0.152 0.299 0.150 0.318 0.886
RP2 0.181 0.158 −0.001 0.214 0.233 0.233 0.313 0.919
RP5 0.212 0.168 0.085 0.242 0.356 0.316 0.347 0.909
Table C2
The heterotrait-monotrait (HTMT) ratio of correlations
ATI BII BRT FF PI PR PTT
BII 0.618
BRT 0.064 0.104
FF 0.191 0.223 0.062
PI 0.286 0.248 0.080 0.467
PR 0.591 0.564 0.280 0.387 0.394
PTT 0.384 0.377 0.104 0.150 0.618 0.592
RP 0.217 0.201 0.061 0.250 0.375 0.291 0.405
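As a reading aid for Table C2, the HTMT ratio for a pair of constructs i and j relates the mean heterotrait correlations (between items of i and items of j) to the geometric mean of the average monotrait correlations within each construct (Henseler et al., 2015):

\[ \text{HTMT}_{ij} = \frac{\bar{r}_{\text{hetero}}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}}. \]

All values in Table C2 fall below the conservative 0.85 threshold, supporting discriminant validity.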
Table C3
Construct validity and reliability

Construct | CA | CR (ρA) | AVE | Items (OL)
Personal innovativeness (PI) | 0.856 | 0.859 | 0.699 | PI1 (0.861), PI2 (0.828), PI3 (0.775), PI4 (0.876)
Risk propensity (RP) | 0.889 | 0.891 | 0.819 | RP1 (0.886), RP2 (0.919), RP5 (0.909)
Propensity to trust technology (PTT) | 0.886 | 0.890 | 0.815 | PTT1 (0.914), PTT3 (0.872), PTT4 (0.920)
Brand reputation importance – technology (BRT) | 0.898 | 0.905 | 0.831 | BRT1 (0.894), BRT2 (0.910), BRT3 (0.930)
Familiarity (FF) | 0.859 | 0.899 | 0.777 | FF1 (0.886), FF2 (0.916), FF3 (0.841)
Perceived reliability (PR) | 0.838 | 0.847 | 0.676 | PR1 (0.855), PR2 (0.712), PR3 (0.822), PR4 (0.888)
Attitude toward interaction (ATI) | 0.883 | 0.883 | 0.810 | ATI2 (0.902), ATI3 (0.917), ATI4 (0.881)
Behavioral intention to interact (BII) | 0.943 | 0.944 | 0.898 | BII1 (0.957), BII2 (0.933), BII3 (0.952)
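The AVE values in Table C3 can be reproduced from the outer loadings (OL) as the mean of the squared loadings. As a worked check for PR:

\[ \text{AVE}_{\text{PR}} = \frac{0.855^2 + 0.712^2 + 0.822^2 + 0.888^2}{4} \approx 0.676, \]

matching the tabled value and exceeding the common 0.5 benchmark (Fornell & Larcker, 1981).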
Table C4
Predictive power analysis

Item | Q²predict | RMSE (PLS-SEM) | RMSE (LM) | RMSE (PLS-SEM) < RMSE (LM)?
ATI2 | 0.023 | 1.018 | 1.042 | Yes
ATI3 | 0.062 | 0.892 | 0.920 | Yes
ATI4 | 0.021 | 1.152 | 1.201 | Yes
BII1 | 0.051 | 1.231 | 1.260 | Yes
BII2 | 0.060 | 1.297 | 1.324 | Yes
BII3 | 0.058 | 1.226 | 1.260 | Yes
PR1 | 0.117 | 0.936 | 0.906 | No
PR2 | 0.049 | 0.975 | 1.011 | Yes
PR3 | 0.150 | 0.835 | 0.841 | Yes
PR4 | 0.092 | 0.872 | 0.876 | Yes
PTT1 | 0.259 | 0.807 | 0.830 | Yes
PTT3 | 0.213 | 0.884 | 0.895 | Yes
PTT4 | 0.298 | 0.806 | 0.840 | Yes
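For reference, the RMSE values in Table C4 are root mean square prediction errors,

\[ \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}, \]

computed per item for the PLS-SEM predictions and for the linear model (LM) benchmark; a smaller PLS-SEM value indicates better out-of-sample prediction for that item.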
Table C5
Moderation analysis

Relationship | Beta | SD | T value | P value
The moderating effect of EXP on the relationship between PTT and BII:
PTT*EXP → BII | −0.278 | 0.153 | 1.819 | 0.034
EXP → BII | 0.176 | 0.161 | 1.089 | n.s.
PTT → BII | 0.125 | 0.064 | 1.937 | 0.026
The moderating effect of EXP on the relationship between PR and BII:
PR*EXP → BII | −0.301 | 0.134 | 2.239 | 0.013
EXP → BII | 0.218 | 0.160 | 1.363 | n.s.
PR → BII | 0.305 | 0.084 | 3.651 | 0.000
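Read as a simple-slope sketch (assuming standardized scores), the estimates in Table C5 imply a conditional effect of PTT on BII of approximately

\[ 0.125 - 0.278\,\text{EXP}, \]

and of PR on BII of \(0.305 - 0.301\,\text{EXP}\): higher values of EXP correspond to weaker positive effects on intention to interact.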
Appendix D: Review of recent empirical AV receptivity research

Table D1
Recent empirical findings from research on AV receptivity. Each entry lists: Author(s); SAE level; Context; Independent variables; Dependent variable(s); Primary findings/outcome; Perspective.

Deb et al. (2017). SAE level: 5. Context: Development and validation of a questionnaire to assess pedestrian receptivity toward FAVs. Independent variables: Attitudes, social norm, trust, compatibility, system effectiveness. Dependent variable(s): Pedestrians' receptivity toward FAVs. Primary findings/outcome: A validated questionnaire assessing pedestrian receptivity toward FAVs. Perspective: Pedestrian.

Deb et al. (2018). SAE level: 5. Context: Determines external features on the AV and investigates 30 pedestrians' suggestions for which features can affect their behaviors and attitudes using an experimental study. Independent variables: External features tested on the FAV, traffic behavior, gender, age, personal innovativeness. Dependent variable(s): Waiting time before crossing, crossing time, rating of different interfaces. Primary findings/outcome: Pedestrians' receptivity toward FAVs significantly increased with the inclusion of external features. Females and people 30+ of age reacted the most positively to the features. Pedestrians who often commit errors or who show aggressive behaviors toward other road users rated the implementation of FAVs poorly. Pedestrians who intentionally violate traffic rules and get distracted were found to be more cautious in the presence of FAVs and appreciated the inclusion of external features. Perspective: Pedestrian.

Dey et al. (2019). SAE level: 5. Context: Explores how the road-crossing behavior of pedestrians may be affected by driving modes, external appearances, and driving behaviors, using a video-based experiment with 60 participants. Independent variables: Driving mode, vehicle appearance, driving behaviors. Dependent variable(s): Willingness to cross the road in front of an AV. Primary findings/outcome: Suggests that in situations when the intent of the vehicle was unknown, there were differences in pedestrians' road-crossing willingness between two types of vehicles (manual/automated mode) as a result of the differences in their appearances. Perspective: Pedestrian.

Dommes et al. (2021). SAE level: n.s. Context: Compares 30 young and 30 older adult pedestrians' behavior when crossing the street in front of conventional and self-driving cars through an experiment. Independent variables: Age, traffic condition, speed of approaching vehicle, target gap. Dependent variable(s): Decision to cross the street in front of an AV. Primary findings/outcome: The introduction of AVs into current road traffic poses potential risks. Pedestrians tend to cross the street by focusing their attention on the traffic flow closest to them, neglecting the speed of the oncoming cars. Perspective: Pedestrian.

Dommes et al. (2024). SAE level: 5. Context: Explores the receptivity of 474 French pedestrians towards FAVs, its effects on behavioral intention when interacting with FAVs, and acceptance of FAVs. Independent variables: Attitudes, social norm, trust, compatibility, system effectiveness. Dependent variable(s): Pedestrians' receptivity toward FAVs. Primary findings/outcome: A validated questionnaire assessing pedestrian receptivity toward FAVs. Perspective: Pedestrian.

Eisele and Petzoldt (2024). SAE level: n.s. Context: Explores the effect of a frontal brake light on participants' self-reported willingness to cross an AV's path using a survey with 50 participants. Independent variables: Presence of frontal brake light, AV behavior, distance between AV and pedestrian. Dependent variable(s): Willingness to cross the road in front of an AV or cross an AV's path. Primary findings/outcome: When the AV activated a frontal brake light, willingness to cross the road was significantly higher than if not activated. Perspective: Pedestrian.

Faas et al. (2021). SAE level: 4+. Context: Investigates how different eHMIs affect trust calibration and crossing behavior of pedestrians using a video-based laboratory study with 67 participants. Independent variables: Occurrence of high-risk FAV malfunction, status eHMI. Dependent variable(s): Trust, crossing behavior. Primary findings/outcome: A status eHMI may lead to pedestrians overtrusting AVs, so additional messages were needed to support trust calibration in AVs. Perspective: Pedestrian.

Feng et al. (2024). SAE level: n.s. Context: Explores the impact of risk perception and trust in autonomous vehicles on pedestrian crossing decisions. Questionnaire of 589 pedestrians. Independent variables: Technological advancements, personal attributes, risk perception, trust in AV. Dependent variable(s): Crossing choice preferences. Primary findings/outcome: Increase in AV speed and decrease in AV distance increase pedestrians' tendency to not cross the road. Risk perception and trust in AVs are strong predictors. Middle-aged and high-risk perception-level pedestrians are more conservative in crossing behavior. Perspective: Pedestrian.

Harkin et al. (2024). SAE level: n.s. Context: Explores implicit communication in cyclist-vehicle interactions, using an experimental, video-based study with 42 participants. Independent variables: Driving dynamics. Dependent variable(s): Cycle through the intersection. Primary findings/outcome: Weak intention to cycle through the intersection under the crash condition. The passive yield and active yield conditions received strong intentions, while the intention inclined for the no yield condition. Perspective: Bicyclist.

Hudson et al. (2019). SAE level: n.s. Context: Investigates people's attitudes toward AVs in the European Union (EU), using secondary survey data with approx. 1000 participants. Independent variables: Underlying attitudes to new technology, gender, age, age when finishing education, location, current occupation, prosperity, and country dummy variables. Dependent variable(s): Attitudes toward AVs. Primary findings/outcome: In general, people tend to be lukewarm to AVs. Some are hostile whereas others are totally in favor. Young, males, those in cities, and those more educated are more in favor. There seems to be more support for AVs in countries with high accident rates. Perspective: Public opinion.

Hulse et al. (2018). SAE level: n.s. Context: Explores perceptions of AVs among 925 road users, including drivers, bicyclists, and pedestrians, using a survey. Independent variables: Gender, age, risk-taking propensity, perceived risk, driver status. Dependent variable(s): Attitudes towards AVs. Primary findings/outcome: AVs were perceived as a 'low-risk' form of transport, with little opposition to future public road use. Compared to human-operated cars, AVs were perceived as riskier when a passenger and less risky when a pedestrian. Males and younger adults displayed greater acceptance. Perspective: Public opinion.

Jayaraman et al. (2019). SAE level: 4+. Context: Investigates whether driving behaviors of AVs and traffic signals affect pedestrians' trust in AVs and trust-related behaviors. Virtual reality experiment with 30 people. Independent variables: Driving behavior, crosswalk type. Dependent variable(s): Trust in AVs, trusting behaviors. Primary findings/outcome: Pedestrians' trust in AVs was influenced by AV driving behavior and the presence of a signal light. Aggressive AV driving behavior could significantly diminish pedestrians' trust in AVs. People expressed more trust in AVs at signalized crosswalks. Perspective: Pedestrian.

Li et al. (2023). SAE level: 5. Context: Investigates factors influencing bicyclists' receptivity towards sharing roads with FAVs and their behavioral intentions in interactions with FAVs. The study used a survey with 314 participants. Independent variables: Demographics (e.g., age, gender, crash experience), cycling behaviors (e.g., violations, errors, positive behaviors). Dependent variable(s): Receptivity towards sharing roads with FAVs (e.g., attitude, trust). Primary findings/outcome: Younger and female bicyclists had higher receptivity towards sharing the roads with FAVs. Bicyclists involved in a recent bicycle crash and those reporting committing more errors on the roads were more willing to share roads with FAVs. Having higher propensity to risky behaviors and positive behaviors were linked with taking less self-protective behaviors during FAV interaction. Perspective: Bicyclist.

Liljamo et al. (2018). SAE level: n.s. Context: Investigates people's readiness and concerns regarding the adoption of AVs, using a survey of 2036 citizens. Independent variables: Gender, age, education, area density, driving controls, technology safety, technology reliability, privacy. Dependent variable(s): Attitudes towards AVs. Primary findings/outcome: Men, highly educated individuals, people living in densely populated areas, and those living in households without a car had more positive attitudes to AVs than other respondents. Traffic safety and ethical perspectives seem to have a key role in the acceptance of AVs. Perspective: Public opinion.

Liu et al. (2019). SAE level: 5. Context: Explores the factors that influence people's acceptance of fully autonomous driving technology, using a survey of 441 city residents. Independent variables: Perceived benefit, perceived risk, social trust. Dependent variable(s): General acceptance. Primary findings/outcome: Whereas perceived benefit is a significant predictor of general acceptance, social trust and perceived risk played minor roles in determining acceptance. Perspective: Public opinion.

Palmeiro et al. (2018). SAE level: n.s. Context: Explores the interaction between pedestrians and AVs through an experiment where 24 participants were exposed to 20 different scenarios. Independent variables: Environmental factors (task demands, road environment, weather conditions), individual factors (goals, preconceptions, experience, training), situational factors (situational awareness). Dependent variable(s): Crossing decision, crossing behavior. Primary findings/outcome: About half of the participants thought that the vehicle was (sometimes) driven automatically. Most participants did perceive differences in vehicle appearance and reported to have been influenced by these features. Perspective: Pedestrian.

Penmetsa et al. (2019). SAE level: n.s. Context: Examines interaction experiences, perceived benefits, and perceived challenges of AVs among 798 vulnerable road users (pedestrians and bicyclists) in Pittsburgh, PA. Independent variables: Previous interaction, perceived safety. Dependent variable(s): Attitudes towards AVs. Primary findings/outcome: Members of an organization promoting safe mobility options for road users had more positive attitudes and beliefs regarding AVs than the general public. Those with interaction experiences had higher safety benefits expectations than those with no experiences. Furthermore, as the public increasingly interacts with AVs, their attitudes toward the technology are more likely to be positive. Perspective: Pedestrian, bicyclist.

Rad et al. (2020). SAE level: n.s. Context: Explores the relation between personal characteristics of pedestrians and their crossing behavior in front of an AV. The study used a virtual reality experiment with 60 participants. Independent variables: Gender, age, time to collision, type of vehicle, type of crosswalk, vehicle appearance, AV familiarity. Dependent variable(s): Road crossing behavior. Primary findings/outcome: Besides the distance from the approaching vehicle and the existence of a zebra crossing, pedestrians' crossing decisions are significantly affected by the participants' age, familiarity with AVs, the communication between the AV and the pedestrian, and whether the approaching vehicle is an AV. Perspective: Pedestrian.

Rahman et al. (2021). SAE level: n.s. Context: Investigates how pedestrians and bicyclists perceive AVs based on their knowledge and real-world road-sharing experiences with AVs. A survey of 795 bicyclists and pedestrians was conducted. Independent variables: Familiarity, road-sharing experience. Dependent variable(s): Perception of AVs. Primary findings/outcome: Perceptions of the respondents were significantly influenced by their views on AV safety, familiarity, extent of AV informedness, and automobile ownership. Negative perceptions mostly included a lack of perceived safety and comfort around AVs and trust in the AV technology. Respondents were also concerned about AV technology issues while sharing the road with AVs. AVs following traffic rules appropriately and AVs driving safer than human drivers were the most notable positive perceptions towards AVs. Perspective: Pedestrian, bicyclist.

Rahman et al. (2019). SAE level: 5. Context: Investigates how older adults (60+) perceive AVs as pedestrians, using a survey with participants from the Amazon MTurk participant pool. Independent variables: Attitude, perceived usefulness, social norms, trust, compatibility, gender, familiarity, location. Dependent variable(s): Perception of AVs. Primary findings/outcome: Older adults who are familiar with AVs are more likely to have a favorable perception of them. Older adults in suburban areas showed a higher perception of trust in AVs. Perceived usefulness was associated with a positive perception of AVs. However, concerns about the interaction between older pedestrians and AVs were also raised. Perspective: Pedestrian.

Reig et al. (2018). SAE level: 4+. Context: Undertakes an interview-based field study of AVs with 31 pedestrians who have interacted with Uber AVs. Independent variables: Age, gender, education, perceptions of autonomous driving technology, brand reputation. Dependent variable(s): Trust in AVs, acceptance of AVs. Primary findings/outcome: Pedestrians' trust in AVs is affected by their favorable interpretations of a vehicle brand and facilitated by their knowledge of AVs. Perspective: Pedestrian.

Yokoi (2024). SAE level: n.s. Context: Experiment with 401 participants, investigating the dynamics of trust in AVs compared to human drivers, particularly focusing on how errors affect this trust. Independent variables: Driving accuracy. Dependent variable(s): Attitudes towards AVs. Primary findings/outcome: AVs were less trusted if there was a slight possibility of making an error. If errors are minor and do not result in serious outcomes, people are less likely to perceive a significant difference in trust between AVs and human drivers. Perspective: Public opinion.

n.s. = not specified.
References
Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204–215. https://doi.org/10.1287/isre.9.2.204
Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. Journal of Applied Social Psychology, 32(4), 665–683. https://doi.org/10.1111/j.1559-1816.2002.tb00236.x
Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behaviour: A meta-analytic review. British Journal of Social Psychology, 40(4), 471–499. https://doi.org/10.1348/014466601164939
Becker, J.-M., Cheah, J.-H., Gholamzade, R., Ringle, C. M., & Sarstedt, M. (2023). PLS-
SEM’s most wanted guidance. International Journal of Contemporary Hospitality
Management, 35(1), 321–346. https://doi.org/10.1108/IJCHM-04-2022-0474
Cestac, J., Paran, F., & Delhomme, P. (2014). Drive as I say, not as I drive: Influence of injunctive and descriptive norms on speeding intentions among young drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 23, 44–56. https://doi.org/10.1016/j.trf.2013.12.006
Chang, Y., & Wang, R. (2023). Conservatives endorse Fintech? Individual regulatory
focus attenuates the algorithm aversion effects in automated wealth management.
Computers in Human Behavior, 148, Article 107872. https://doi.org/10.1016/j.
chb.2023.107872
Charness, N., Yoon, J. S., Souders, D., Stothart, C., & Yehnert, C. (2018). Predictors of
attitudes toward autonomous vehicles: The roles of age, gender, prior knowledge,
and personality. Frontiers in Psychology, 9, 2589. https://doi.org/10.3389/
fpsyg.2018.02589
Cristea, M., & Gheorghiu, A. (2016). Attitude, perceived behavioral control, and intention to adopt risky behaviors. Transportation Research Part F: Traffic Psychology and Behaviour, 43, 157–165. https://doi.org/10.1016/j.trf.2016.10.004
Deb, S., Strawderman, L. J., & Carruth, D. W. (2018). Investigating pedestrian suggestions for external features on fully autonomous vehicles: A virtual reality experiment. Transportation Research Part F: Traffic Psychology and Behaviour, 59, 135–149. https://doi.org/10.1016/j.trf.2018.08.016
Deb, S., Strawderman, L., Carruth, D. W., DuBien, J., Smith, B., & Garrison, T. M. (2017).
Development and validation of a questionnaire to assess pedestrian receptivity
toward fully autonomous vehicles. Transportation Research Part C: Emerging
Technologies, 84, 178–195. https://doi.org/10.1016/j.trc.2017.08.029
Dey, D., Martens, M., Eggen, B., & Terken, J. (2019). Pedestrian road-crossing willingness as a function of vehicle automation, external appearance, and driving behaviour. Transportation Research Part F: Traffic Psychology and Behaviour, 65, 191–205. https://doi.org/10.1016/j.trf.2019.07.027
Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., & Kaiser, S. (2012).
Guidelines for choosing between multi-item and single-item scales for construct
measurement: A predictive validity perspective. Journal of the Academy of Marketing
Science, 40, 434–449. https://doi.org/10.1007/s11747-011-0300-3
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114. https://doi.org/10.1037/xge0000033
Dijkstra, T. K., & Henseler, J. (2015). Consistent partial least squares path modeling. MIS
Quarterly, 39(2), 297–316.
Dommes, A., Douffet, B., Pala, P., Deb, S., & Granié, M. A. (2024). Pedestrians' receptivity to fully automated vehicles: Assessing the psychometric properties of the PRQF and survey in France. Transportation Research Part F: Traffic Psychology and Behaviour, 105, 163–181.
Dommes, A., Merlhiot, G., Lobjois, R., Dang, N.-T., Vienne, F., Boulo, J., Oliver, A.-H.,
Cretual, A., & Cavallo, V. (2021). Young and older adult pedestrians’ behavior when
crossing a street in front of conventional and self-driving cars. Accident Analysis &
Prevention, 159, Article 106256. https://doi.org/10.1016/j.aap.2021.106256
Draws, T., Rieger, A., Inel, O., Gadiraju, U., & Tintarev, N. (2021). A checklist to combat
cognitive biases in crowdsourcing. Proceedings of the AAAI Conference on Human
Computation and Crowdsourcing, 9, 48–59. https://doi.org/10.1609/hcomp.
v9i1.18939
Egala, S. B., & Liang, D. (2023). Algorithm aversion to mobile clinical decision support
among clinicians: A choice-based conjoint analysis. European Journal of Information
Systems, 1–17. https://doi.org/10.1080/0960085X.2023.2251927
Eisele, D., & Petzoldt, T. (2024). Effects of a frontal brake light on pedestrians’
willingness to cross the street. Transportation Research Interdisciplinary Perspectives,
23, Article 100990.
Faas, S., Kraus, J., Schoenhals, A., & Baumann, M. (2021). Calibrating pedestrians’ trust
in automated vehicles: Does an intent display in an external HMI support trust
calibration and safe crossing behavior?. Proceedings of the 2021 CHI conference on
human factors in computing systems. https://doi.org/10.1145/3411764.3445738
Fagnant, D. J., & Kockelman, K. (2015). Preparing a nation for autonomous vehicles:
Opportunities, barriers and policy recommendations. Transportation Research Part A:
Policy and Practice, 77, 167–181. https://doi.org/10.1016/j.tra.2015.04.003
Feng, Z., Gao, Y., Zhu, D., Chan, H.-Y., Zhao, M., & Xue, R. (2024). Impact of risk perception and trust in autonomous vehicles on pedestrian crossing decision: Navigating the social-technological intersection with the ICLV model. Transport Policy, 152, 71–86.
Figner, B., & Weber, E. U. (2011). Who takes risks when and why? Determinants of risk
taking. Current Directions in Psychological Science, 20(4), 211–216. https://doi.org/
10.1177/0963721411415790
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to
theory and research. Addison-Wesley.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with
unobservable variables and measurement error. Journal of Marketing Research, 18(1),
39–50. https://doi.org/10.1177/002224378101800104
Forster, Y., Kraus, J., Feinauer, S., & Baumann, M. (2018). Calibration of trust
expectancies in conditionally automated driving by brand, reliability information
and introductionary videos: An online study. Proceedings of the 10th international
conference on automotive user interfaces and interactive vehicular applications. https://
doi.org/10.1145/3239060.3239070
Garidis, K., Ulbricht, L., Rossmann, A., & Schmäh, M. (2020). Toward a user acceptance model of autonomous driving. Proceedings of the 53rd Hawaii international conference on system sciences (HICSS).
Gefen, D. (2000). E-Commerce: The role of familiarity and trust. Omega, 28(6), 725–737.
https://doi.org/10.1016/S0305-0483(00)00021-9
Hagenzieker, M. P., Van Der Kint, S., Vissers, L., Van Schagen, I. N. L. G., De Bruin, J.,
Van Gent, P., & Commandeur, J. J. F. (2020). Interactions between cyclists and
automated vehicles: Results of a photo experiment. Journal of Transportation Safety &
Security, 12(1), 94–115. https://doi.org/10.1080/19439962.2019.1591556
Hair, J., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017). An updated and
expanded assessment of PLS-SEM in information systems research. Industrial
Management & Data Systems, 117(3), 442–458. https://doi.org/10.1108/IMDS-04-
2016-0130
Hair, J., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2022). A primer on partial least
squares structural equation modeling (PLS-SEM) (3rd ed.). Sage Publications.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to
report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.
org/10.1108/EBR-11-2018-0203
Hannon, O., Gal, U., & Dar-Nimrod, I. (2022). Aversion vs. Abstinence: Conceptual
distinctions for the receptivity toward algorithmic decision-making systems within
value-laden contexts. Proceedings of australasian conference on information systems
(ACIS).
Harkin, A. M., Mangold, A., Harkin, K. A., & Petzoldt, T. (2024). Implicit communication in cyclist-vehicle interaction: Examining the influence of driving dynamics in interactions with turning (automated) vehicles on cyclists' perceived safety, behavioral intention, and risk anticipation. Journal of Cycling and Micromobility Research, 2, Article 100028.
Hegner, S. M., Beldad, A. D., & Brunswick, G. J. (2019). In automatic we trust:
Investigating the impact of trust, control, personality characteristics, and extrinsic
and intrinsic motivations on the acceptance of autonomous vehicles. International
Journal of Human-Computer Interaction, 35(19), 1769–1780. https://doi.org/
10.1080/10447318.2019.1572353
Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing
discriminant validity in variance-based structural equation modeling. Journal of the
Academy of Marketing Science, 43, 115–135. https://doi.org/10.1007/s11747-014-
0403-8
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.
Hong, A., Nam, C., & Kim, S. (2020). What will be the possible barriers to consumers’
adoption of smart home services? Telecommunications Policy, 44(2), Article 101867.
https://doi.org/10.1016/j.telpol.2019.101867
Hudson, J., Orviska, M., & Hunady, J. (2019). People’s attitudes to autonomous vehicles.
Transportation Research Part A: Policy and Practice, 121, 164–176. https://doi.org/
10.1016/j.tra.2018.08.018
Hulse, L. M., Xie, H., & Galea, E. R. (2018). Perceptions of autonomous vehicles:
Relationships with road users, risk, gender and age. Safety Science, 102, 1–13.
https://doi.org/10.1016/j.ssci.2017.10.001
Iranmanesh, M., Zailani, S., Moeinzadeh, S., & Nikbin, D. (2017). Effect of green
innovation on job satisfaction of electronic and electrical manufacturers’ employees
through job intensity: Personal innovativeness as moderator. Review of Managerial
Science, 11, 299–313. https://doi.org/10.1007/s11846-015-0184-6
Jayaraman, S. K., Creech, C., Tilbury, D. M., Yang, X. J., Pradhan, A. K., Tsui, K. M., &
Robert Jr, L. P. (2019). Pedestrian trust in automated vehicles: Role of trafc signal
and AV driving behavior. Frontiers in Robotics and AI, 6, 117. https://doi.org/
10.3389/frobt.2019.00117
Karsch, H. M., Hedlund, J. H., Tison, J., & Leaf, W. A. (2012). Review of studies on pedestrian and bicyclist safety, 1991–2007. United States: National Highway Traffic Safety Administration. https://doi.org/10.21949/1525710
Kaur, K., & Rampersad, G. (2018). Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars. Journal of Engineering and Technology Management, 48, 87–96. https://doi.org/10.1016/j.jengtecman.2018.04.006
Kenesei, Z., Ásványi, K., Kökény, L., Jászberényi, M., Miskolczi, M., Gyulavári, T., & Syahrivar, J. (2022). Trust and perceived risk: How different manifestations affect the adoption of autonomous vehicles. Transportation Research Part A: Policy and Practice, 164, 379–393. https://doi.org/10.1016/j.tra.2022.08.022
Khastgir, S., Birrell, S., Dhadyalla, G., & Jennings, P. (2018). Calibrating trust through
knowledge: Introducing the concept of informed safety for automation in vehicles.
Transportation Research Part C: Emerging Technologies, 96, 290–303. https://doi.org/
10.1016/j.trc.2018.07.001
Körber, M. (2019). Theoretical considerations and development of a questionnaire to measure trust in automation. Proceedings of the 20th congress of the international ergonomics association (IEA). https://doi.org/10.1007/978-3-319-96074-6_2
Krueger, R., Rashidi, T. H., & Rose, J. M. (2016). Preferences for shared autonomous
vehicles. Transportation Research Part C: Emerging Technologies, 69, 343–355. https://
doi.org/10.1016/j.trc.2016.06.015
Krügel, S., Ostermaier, A., & Uhl, M. (2023). Algorithms as partners in crime: A lesson in
ethics by design. Computers in Human Behavior, 138, Article 107483.
Lazanyi, K., & Maraczi, G. (2017). Dispositional trust—do we trust autonomous cars?.
IEEE 15th international symposium on intelligent systems and informatics (SISY).
https://doi.org/10.1109/SISY.2017.8080540, 2017.
Lee, H. Y., Qu, H., & Kim, Y. S. (2007). A study of the impact of personal innovativeness
on online travel shopping behavior—a case study of Korean travelers. Tourism
Management, 28(3), 886–897. https://doi.org/10.1016/j.tourman.2006.04.013
Li, X., Afghari, A. P., Oviedo-Trespalacios, O., Kaye, S.-A., & Haworth, N. (2023). Cyclists' perception and self-reported behaviour towards interacting with fully automated vehicles. Transportation Research Part A: Policy and Practice, 173, Article 103713. https://doi.org/10.1016/j.tra.2023.103713
Liljamo, T., Liimatainen, H., & Pöllänen, M. (2018). Attitudes and concerns on automated vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 59, 24–44. https://doi.org/10.1016/j.trf.2018.08.010
Liu, P., Yang, R., & Xu, Z. (2019). Public acceptance of fully automated driving: Effects of social trust and risk/benefit perceptions. Risk Analysis, 39(2), 326–341. https://doi.org/10.1111/risa.13143
Mahmud, H., Islam, A. K. M. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, Article 121390. https://doi.org/10.1016/j.techfore.2021.121390
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). The impact of initial consumer
trust on intentions to transact with a web site: A trust building model. The Journal of
Strategic Information Systems, 11(3–4), 297–323. https://doi.org/10.1016/S0963-
8687(02)00020-3
Meertens, R. M., & Lion, R. (2008). Measuring an individual's tendency to take risks: The risk propensity scale. Journal of Applied Social Psychology, 38(6), 1506–1520. https://doi.org/10.1111/j.1559-1816.2008.00357.x
Merritt, S. M., Heimbaugh, H., LaChapell, J., & Lee, D. (2013). I trust it, but I don’t know
why: Effects of implicit attitudes toward automation on trust in an automated
system. Human Factors, 55(3), 520–534. https://doi.org/10.1177/
0018720812465081
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and
history-based trust in human-automation interactions. Human Factors, 50(2),
194–210. https://doi.org/10.1518/001872008X288574
Miskolczi, M., Földes, D., Munkácsy, A., & Jászberényi, M. (2021). Urban mobility scenarios until the 2030s. Sustainable Cities and Society, 72, Article 103029. https://doi.org/10.1016/j.scs.2021.103029
Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust
and human intervention in automated systems. Ergonomics, 37(11), 1905–1922.
https://doi.org/10.1080/00140139408964957
Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies of
trust and human intervention in a process control simulation. Ergonomics, 39(3),
429–460. https://doi.org/10.1080/00140139608964474
O’Leary-Kelly, S. W., & Vokurka, R. J. (1998). The empirical assessment of construct
validity. Journal of Operations Management, 16(4), 387–405. https://doi.org/
10.1016/S0272-6963(98)00020-5
Osswald, S., Wurhofer, D., Trösterer, S., Beck, E., & Tscheligi, M. (2012). Predicting information technology usage in the car: Towards a car technology acceptance model. Proceedings of the 4th international conference on automotive user interfaces and interactive vehicular applications. https://doi.org/10.1145/2390256.2390264
Paden, B., Čáp, M., Yong, S. Z., Yershov, D., & Frazzoli, E. (2016). A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 1(1), 33–55. https://doi.org/10.1109/TIV.2016.2578706
Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27. https://doi.org/10.1016/j.jbef.2017.12.004
Palmeiro, A. R., van der Kint, S., Vissers, L., Farah, H., de Winter, J. C. F., & Hagenzieker, M. (2018). Interaction between pedestrians and automated vehicles: A Wizard of Oz experiment. Transportation Research Part F: Traffic Psychology and Behaviour, 58, 1005–1020. https://doi.org/10.1016/j.trf.2018.07.020
Papadimitriou, E., Schneider, C., Tello, J. A., Damen, W., Vrouenraets, M. L., & Ten
Broeke, A. (2020). Transport safety and human factors in the era of automation:
What can transport modes learn from each other? Accident Analysis & Prevention,
144, Article 105656. https://doi.org/10.1016/j.aap.2020.105656
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative
platforms for crowdsourcing behavioral research. Journal of Experimental Social
Psychology, 70, 153–163. https://doi.org/10.1016/j.jesp.2017.01.006
Penmetsa, P., Adanu, E. K., Wood, D., Wang, T., & Jones, S. L. (2019). Perceptions and
expectations of autonomous vehicles–A snapshot of vulnerable road user opinion.
Technological Forecasting and Social Change, 143, 9–13. https://doi.org/10.1016/j.
techfore.2019.02.010
Rad, S. R., de Almeida Correia, G. H., & Hagenzieker, M. (2020). Pedestrians' road crossing behaviour in front of automated vehicles: Results from a pedestrian simulation experiment using agent-based modelling. Transportation Research Part F: Traffic Psychology and Behaviour, 69, 101–119. https://doi.org/10.1016/j.trf.2020.01.014
Rahman, M. M., Deb, S., Strawderman, L., Burch, R., & Smith, B. (2019). How the older population perceives self-driving vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 65, 242–257. https://doi.org/10.1016/j.trf.2019.08.002
Rahman, M. T., Dey, K., Das, S., & Sherfinski, M. (2021). Sharing the road with autonomous vehicles: A qualitative analysis of the perceptions of pedestrians and bicyclists. Transportation Research Part F: Traffic Psychology and Behaviour, 78, 433–445. https://doi.org/10.1016/j.trf.2021.03.008
Reig, S., Norman, S., Morales, C. G., Das, S., Steinfeld, A., & Forlizzi, J. (2018). A field study of pedestrians and autonomous vehicles. Proceedings of the 10th international conference on automotive user interfaces and interactive vehicular applications. https://doi.org/10.1145/3239060.3239064
Renier, L. A., Mast, M. S., & Bekbergenova, A. (2021). To err is human, not
algorithmic–Robust reactions to erring algorithms. Computers in Human Behavior,
124, Article 106879. https://doi.org/10.1016/j.chb.2021.106879
SAE International. (2021). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. J3016 Standard, April 2021 (pp. 1–41). https://www.sae.org/standards/content/j3016_202104/
Schoettle, B., & Sivak, M. (2014). A survey of public opinion about autonomous and self-
driving vehicles in the US, the UK, and Australia. Ann Arbor, Transportation Research
Institute: University of Michigan.
Sharma, P. N., Liengaard, B. D., Hair, J. F., Sarstedt, M., & Ringle, C. M. (2022).
Predictive model assessment and selection in composite-based modeling using PLS-
SEM: Extensions and guidelines for using CVPAT. European Journal of Marketing, 57
(6), 1662–1677. https://doi.org/10.1108/EJM-08-2020-0636
Shmueli, G., Sarstedt, M., Hair, J. F., Cheah, J.-H., Ting, H., Vaithilingam, S., &
Ringle, C. M. (2019). Predictive model assessment in PLS-SEM: Guidelines for using
PLSpredict. European Journal of Marketing, 53(11), 2322–2347. https://doi.org/
10.1108/EJM-02-2019-0189
Stilgoe, J., & Cohen, T. (2021). Rejecting acceptance: Learning from public dialogue on
self-driving vehicles. Science and Public Policy, 48(6), 849–859. https://doi.org/
10.1093/scipol/scab060
Story, V., O’Malley, L., & Hart, S. (2011). Roles, role performance, and radical innovation
competences. Industrial Marketing Management, 40(6), 952–966. https://doi.org/
10.1016/j.indmarman.2011.06.025
Taherdoost, H. (2018). A review of technology acceptance and adoption models and
theories. Procedia Manufacturing, 22, 960–967. https://doi.org/10.1016/j.
promfg.2018.03.137
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
WHO. (2023). Road traffic injuries. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries
Williams, M. D., Rana, N. P., & Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): A literature review. Journal of Enterprise Information Management, 28(3), 443–488. https://doi.org/10.1108/JEIM-09-2014-0088
Yokoi, R. (2024). Trust in self-driving vehicles is lower than in human drivers when both drive almost perfectly. Transportation Research Part F: Traffic Psychology and Behaviour, 103, 1–17.
Zhou, S., Sun, X., Liu, B., & Burnett, G. (2022). Factors affecting pedestrians’ trust in
automated vehicles: Literature review and theoretical model. IEEE Transactions on
Human-Machine Systems, 52(3), 490–500. https://doi.org/10.1109/
THMS.2021.3112956