What's in a Name: Vehicle Technology Branding & Consumer Expectations for Automation
Hillary Abraham
MIT AgeLab
Cambridge, US
habraham@mit.edu
Bobbie Seppelt
MIT AgeLab &
Touchstone
Evaluations
Cambridge, US
bseppelt@mit.edu
Bruce Mehler
MIT AgeLab
Cambridge, US
bmehler@mit.edu
Bryan Reimer
MIT AgeLab
Cambridge, US
reimer@mit.edu
ABSTRACT
Vehicle technology naming has the potential to influence
drivers’ expectations (mental model) of the level of
autonomous operation supported by semi-automated
technologies that are rapidly becoming available in new
vehicles. If divergence exists between expectations and
actual design specifications, it may make it harder to
develop trust or clear expectations of systems, thus
limiting potential benefits. Alternatively, over-trust and
misuse due to misunderstanding increase the potential for
adverse events. An online survey investigated whether and how names of advanced driver assistance systems (ADAS) and automation features relate to expected automation
levels. Systems with “Cruise” in their names were
associated with lower levels of automation. “Assist”
systems appeared to create confusion between whether the
driver is assisting the system or vice versa. Survey findings
indicate the importance of vehicle technology naming and
its impact in influencing drivers’ expectations of
responsibility between the driver and system in who
performs individual driving functions.
Author Keywords
Automation; Confusion
CCS Concepts
Human-centered computing → User centered design
INTRODUCTION
Most automotive manufacturers now offer, or are currently
pursuing research on, advanced driver assistance systems
(ADAS) and automated driving features. Collectively,
semi-automated vehicle technologies (ADAS and lower
level automation systems) are rapidly becoming standard or
optional features on new vehicles. In order to help provide
common definitions for different types of automation in
vehicles, the Society of Automotive Engineers (SAE)
developed a taxonomy with detailed descriptions for
vehicles equipped with automated features [24]. At present,
consumers are only able to purchase vehicles equipped with
driver assistance (Level 1) and partial automation (Level 2)
systems. However, several automotive manufacturers have
announced production vehicles to be available this year
with conditional automation (Level 3). High automation
(Level 4) technologies are being tested globally with
expected commercial availability being forecast in less than
5 years [15].
Efforts to develop ADAS and automation features are based
upon manufacturer-specific design specifications. These
specifications aim to produce a technology with the
capability to perform in a particular operational design
domain (ODD). The system implementation and specific
use conditions encompassed in the static and dynamic
aspects of the ODD [28] are representative of a system
designer’s mental model for the technology. How drivers
learn about individual systems is influenced by their pre-existing mental models: those formed prior to initial use, e.g., from exposure to other technologies [12]. A driver's
mental model aids him or her in understanding a system’s
ODD, interface characteristics and other system limitations
necessary for proper system use [4,27]. While driver
education and other more active methods for encouraging
proper use (in vehicle coaching, etc.) face challenges at
each level of automation, the most relevant current
challenge exists with partial driving automation (Level 2),
for which governments, businesses, researchers and
consumers have argued the marketing name of a system
may promote the misalignment of driver and designer
expectations [5,7,18]. In Level 2 automation, the system
performs sustained lateral and longitudinal management of
the driving task, while the driver performs the remaining
subtasks, including object and event detection and response
(OEDR). Driver belief that a system has the ability to perform OEDR at a level greater than the system's design characteristics may lead to misuse [22].
Human Machine Interfaces (HMIs) for automated features
are intended, by design, to help support driver
understanding of features and to promote proper system
use.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
AutomotiveUI '17, September 24-27, 2017, Oldenburg, Germany
© 2017 Association for Computing Machinery.
ACM ISBN 978-1-4503-5150-8/17/09…$15.00
https://doi.org/10.1145/3122986.3123018

At Levels 1 and 2, in which features assist drivers with only a partial set of the dynamic tasks of driving, HMIs aim to support drivers in maintaining their attention to the roadway. One adopted implementation strategy to support this aim (e.g., Tesla, Volvo) is to require drivers to keep their hands on the wheel with minimal steering input; however, the amount of input and the amount of time a driver can go before hands-off-wheel warnings are issued vary between systems and use conditions, resulting in the potential for prolonged intervals of declining situation awareness. Further, there is not currently a proven link between hands-on-wheel contact during Level 2 use and situation awareness. Looking to enforce a greater degree of control over driver attentiveness, GM's Super Cruise, anticipated to be commercially available in the 2018 Cadillac CT6, is reported to be designed with an integrated head pose detection system in order to monitor driver awareness and to trigger a range of cues to promote driver attentiveness [8]. The standardization of such approaches is currently under consideration in Europe [9] and is supported by research [23].

Multiple factors contribute to a driver's expectations of system capability [e.g., 1,13,21,22,26]. Drivers' attitudes and beliefs about system capability and performance are known to influence their use of technology [6,10,14,30]. Factors such as a driver's prior experience with similar technologies, predisposed trusting tendencies, and attitudes formed from exposure to media and societal opinion might all contribute to a driver's belief that a system can handle a task outside of its ODD. The name of a driver assistance system also has the potential to impact perceptions of system capability.
From consumer psychology research, there is an ascribed importance of branding and the names assigned to products; naming influences expectations of product attributes and preconditions consumers to assign valence based on induced biases [17]. Applied to driving automation systems, the names assigned to technologies have the potential to shape driver perceptions in ways that bias attitudes and affect use [30]. Other than a small survey by Tesla [31], little structured research has investigated whether the name of a system impacts driver expectations of it, particularly in relation to what role the driver expects to hold while using the vehicle and system. As brand names are increasingly used to discuss vehicle automation systems with a vast range of design models, improved understanding of whether or not the brand names of current and proposed driver assistance / automation systems impact driver expectations may help guide future naming discussions and considerations for standardization.

A survey was designed to investigate two primary research questions:

1. Does the name of a driver assistance system affect a customer's perception of the level of automation of that system?
2. If so, do commonly used terms when branding ADAS (e.g., Auto, Pilot, Assist, Cruise) direct consumer perceptions toward presumptions of lower or higher levels of automation?

METHOD

Participants

Participants were recruited using online advertisements and web posts to the MIT AgeLab website. In total, a convenience sample of 453 participants was analyzed. The sample was 37% male and 61% female; the remaining 2.6% of individuals selected "Other or choose not to answer." Ages of respondents ranged from 20-69, with 30% of respondents in their 20s, 19% in their 30s, 6% in their 40s, 18% in their 50s, and 27% in their 60s.
Respondents were generally highly educated; 38% had completed a graduate or professional degree as their highest level of education, 18% had completed some graduate education, 29% had completed a Bachelor's degree, 2% had an Associate's degree, 1% had a trade school certificate, 12% had completed some college, 1% had graduated high school, and 0% had completed some high school. Most respondents (71%) were from the state of Massachusetts in the USA.

Survey Instrument

Systems Addressed

Nineteen driver assistance systems were selected for inclusion in the survey (Table 1). Attempts were made to incorporate all systems commercially available or publicly proposed at the time of survey deployment that feature both adaptive cruise control and a lane centering component, yet require the driver to engage in some of the dynamic aspects of driving, either actively or as a fallback-ready user (e.g., Level 1 through Level 3). Researchers were particularly interested in how common English terms might affect perceptions of system capabilities; as such, systems that included the name of the manufacturer in their title were not included (e.g., Honda Sensing). Four fabricated system names were included in the survey to explore differences between terms typically used in systems at higher levels of automation and those typically used for systems at lower levels.

Automation Categories

Seven descriptions of differing levels of automation were created for participants to classify systems (Figure 1). These categories were developed based on the six SAE J3016 levels of automation [24], plus an additional level ("L1.5," conceptually between Levels 1 & 2) to accommodate commercially available systems that require the driver to keep their hands on the wheel at certain frequencies, as a function of the adopted implementation strategy, in order to perform continuous lane centering.
Care was taken to ensure these categories accurately represented J3016 levels, while simultaneously being understandable to the layman in terms of the division of driving task responsibility. Particular attention was paid to the distinction between general tasks the driver would be responsible for, versus general tasks the driving assistance system would be responsible for, while the system was engaged or active. Categorizations generalized ODD and dynamic driving task (DDT) into broad categories of responsibility, rather than listing and requesting classifications for individual ODDs and DDTs, in an attempt to avoid overwhelming the survey respondents (Figure 1).

Survey Design Methodology

After selecting systems for inclusion and developing a first draft of automation categories, a survey instrument was developed by the research team. This instrument went through a series of internal revisions before piloting with additional staff members not involved in the project to ensure layman understanding of all terms and definitions involved. After piloting, research staff spoke informally with pilot subjects about the survey design, format, and clarity of questions. Pilot subject feedback was integrated into the final instrument detailed within this report.

Survey Procedure

Participants were first presented with a brief introduction to the survey and a description of each level of automation (Figure 1). After reading the introduction and level classifications, participants were asked to imagine they were the driver in a vehicle equipped with an automated system. Participants were then provided with the list of 19 systems. For each system, participants selected the category from the seven levels of automation that best described the division of task responsibility that they would expect to exist between them as the driver and the system.
In order to maximize the likelihood that categorization would be made based on name alone, survey takers were instructed not to use any outside resources when making their categorizations. After assigning a level to a system, participants rated their confidence in their level assignment on a 5-pt scale ranging from 1 (low confidence) to 5 (high confidence). This was repeated for all 19 systems. After assigning every system to a category of automation and rating their confidence in each assignment, participants were asked, "Before taking this survey, how familiar were you with any of the systems?" and provided a 5-pt scale ranging from "Not familiar at all" to "Extremely familiar." Participants were asked six questions to gauge their early adopter status, vehicle information, and whether or not any of their vehicles had any of the survey systems installed.

Table 1. The 19 systems included in the survey, with manufacturer, availability at the time of deployment, and level of automation (LoA); the N/A rows are the fabricated system names.

System | Manufacturer | Availability | LoA
Active Cruise Control | BMW | Available | 1
AutoCruise | N/A | N/A | N/A
Autopilot | Tesla | Available | 2
Distronic Plus | Mercedes-Benz | Available | 1
Drive Pilot | Mercedes-Benz | Available | 1.5
Driving Assistant Plus | BMW | Available | 1.5
Enhanced Autopilot | Tesla | In Development | 3
Eyesight | Subaru | Available | 1
Highway Pilot | Audi | In Development | 3
Intelligent Assist | N/A | N/A | N/A
Intelligent Cruise Control | Nissan | Available | 1
Intelligent Drive | Mercedes-Benz | Available | 1
Intelligent Pilot | N/A | N/A | N/A
Pilot Assist | Volvo | Available | 1.5
Pilot Plus | N/A | N/A | N/A
ProPilot | Nissan | In Development | 1.5
Super Cruise | GM | In Development | 2
Traffic Jam Assist | Audi | Available | 1.5
Traffic Jam Pilot | Audi | In Development | 3

Figure 1. After a survey introduction, participants were presented with this graphic representing seven categories of automation, ranging from SAE Level 0 (fully manual, far left box) to SAE Level 5 (fully automated, far right box). These seven categories provide layman's definitions of the division of driving task responsibility between driver and system.
The survey ended by collecting demographic information, including date of birth, highest level of education, employment status, household income, gender, and zip code. Participants who completed the survey were offered the opportunity to enter a raffle to win one of ten $50 Amazon gift cards. The survey was constructed in Qualtrics, and participants were asked to take the survey online via a desktop or laptop computer. The survey was open for data collection from February 22nd to March 6th, 2017.
RESULTS
Data were analyzed using SPSS Version 24. As analyses
were run multiple times (once for each system), a
Bonferroni correction was used to determine significance.
Significance was set at p < .0026 for analyses of all 19
systems (.05 / 19), and p < .0033 for analyses of the 15
deployed or in-development systems (.05 / 15). For age
analyses, respondents were grouped into five age ranges:
20-29, 30-39, 40-49, 50-59, and 60-69.
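The Bonferroni-adjusted thresholds quoted above follow directly from dividing the familywise alpha by the number of tests run. A minimal sketch in Python (illustrative only; the paper's analyses were run in SPSS):

```python
# Bonferroni correction: divide the familywise alpha by the number of
# comparisons (one test per system), as described in the text.
alpha = 0.05
thresh_all = alpha / 19       # all 19 surveyed systems
thresh_deployed = alpha / 15  # 15 deployed or in-development systems

print(round(thresh_all, 4))       # 0.0026
print(round(thresh_deployed, 4))  # 0.0033
```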
Familiarity & Correctness
Most participants selected "not familiar at all" for familiarity with each of the systems prior to taking the survey (Figure 2). Two systems, Active Cruise Control and Autopilot, had higher levels of familiarity than the other systems in the sample. Importantly, it is unclear whether or not respondents were familiar with Tesla's Autopilot, the term "autopilot" within the context of aviation, or the colloquial "autopilot," used when referring to completing a task absentmindedly or without focus. While more respondents were familiar with these systems, more than half (54.5% and 66.2%, respectively) reported being either "not familiar at all" or only "slightly familiar" with either system.
Table 2. Overall accuracy for system categorization was low. There was no relationship between correct categorization and
confidence. Most participants did not select L0 for most systems.
Figure 2. Participants rated themselves as being not at all familiar with most of the systems prior to taking the survey.
Most respondents did not accurately classify most systems
into their correct level of automation. While most systems
had a slightly higher percentage of correct categorizations
than would be expected from random guessing (14%
correct), overall accuracy remained low (Table 2). Active
Cruise Control had the highest proportion of correct
categorizations, with 50% of respondents correctly
categorizing it as a Level 1 system. Intelligent Drive, also a
Level 1 system, had the lowest proportion of correct
categorizations (9%).
Confidence ratings varied across systems. On a scale of 1
(low confidence) to 5 (high confidence), over half of
participants rated their confidence as a 4 or 5 for Active
Cruise Control, AutoCruise, and Autopilot. Distronic Plus
had the highest proportion of low confidence ratings, with
66% of participants rating their confidence at a 1 or 2.
The Mann-Whitney U test was applied to investigate
differences in confidence of system categorization between
respondents who correctly versus incorrectly classified a
system. Most systems showed no significant difference in
confidence rating between individuals that correctly
categorized a system compared to those that incorrectly
categorized the system (Table 2).
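The Mann-Whitney U comparison of confidence between correct and incorrect classifiers can be sketched as follows; the ratings below are invented for illustration and are not the study's data (the actual tests were run in SPSS):

```python
# Mann-Whitney U statistic for two independent samples: U counts the pairs
# (x, y) with x > y, crediting ties with 0.5, so a large U relative to
# len(a) * len(b) means group a tends to rate higher than group b.
# (p-value computation via the normal approximation is omitted here.)
def mann_whitney_u(a, b):
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Hypothetical 5-pt confidence ratings, split by classification correctness.
correct = [4, 5, 3, 4, 4]
incorrect = [3, 2, 4, 1, 3, 2]
u = mann_whitney_u(correct, incorrect)  # out of a maximum of 5 * 6 = 30
```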
One system (EyeSight) showed significant age differences
in correct categorization; no other system showed
significant differences between different age groups and
correct categorization (Table 3). There were no significant
gender differences in correct categorization of any system;
men were not more frequently correct than women and vice
versa. However, there were significant differences in
confidence and familiarity ratings between genders (Table
3). In all significant cases, men rated themselves more
confident than women in their responses. Men also reported
being more familiar with systems prior to taking the survey.
Categorization Patterns
Chi-square goodness of fit tests were used to explore
whether or not the distribution of level categorization
differed from random guessing; that is, equal distribution of
responses across the seven levels of automation. Chi-
squared residuals were explored to determine which cells
contributed most toward the results [29].
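The goodness-of-fit computation and the raw residuals reported later in Table 4 can be sketched as follows; the counts here are invented for illustration, not taken from the survey (the actual analyses used SPSS):

```python
# Chi-square goodness of fit against an equal distribution across the six
# response categories L1, L1.5, L2, L3, L4, L5 (L0 excluded, as in the
# re-run analysis described below). Counts are hypothetical.
observed = [120, 90, 60, 40, 25, 15]
n = sum(observed)
expected = n / len(observed)                  # equal-distribution expectation
residuals = [o - expected for o in observed]  # raw residuals, as in Table 4
chi2 = sum(r * r / expected for r in residuals)
# With df = len(observed) - 1 = 5, chi2 is compared against the
# Bonferroni-corrected significance threshold.
```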
Every system showed a significant difference from equal
distribution of categorizations (Table 2). When exploring
the raw residuals, one system (EyeSight) showed residuals
close to random guessing. The remaining 18 systems
showed that a disproportionate number of participants were
not selecting L0 (no automation) for most systems. Aside
from EyeSight, every system had an L0 residual of less than
-25, and ten had residuals of less than -50. As the survey
instructions indicated each system was an automated
system, it seems plausible that respondents were not
selecting L0 due to the survey design, rather than any
impact of naming. As such, chi-square analysis was re-run
using only the data points assigned to L1-5. Each system
still showed a significant difference from equal distribution
of categorizations (Table 4).
Table 3. No significant differences were exhibited in gender and accuracy, but significant gender differences were exhibited in confidence and familiarity with systems.

The residuals for the second set of chi-square analyses revealed two strong relationships between name and categorization within the 19 systems (Table 4). Table 4 was color-coded to more easily display patterns in the residuals. Dark green cells are those with the highest residuals, dark grey are those with the lowest, and white are those with residuals closest to zero. The first relationship was between
the systems with “cruise” in the name; these four systems
were consistently rated at the lower end of the automation
scale. They also generally received higher confidence
ratings. The second relationship was regarding the four
“assist” systems, which received high frequency of
categorizations in L1.5 and L3. Confidence ratings were
consistently in the middle for these systems. Four systems
(ProPilot, Highway Pilot, Distronic Plus, and EyeSight)
showed residuals that were widely distributed across the
automation scale; these systems were still significantly
different from random guessing, but had the lowest range of
residuals. They also showed the lowest confidence ratings
of all systems. The remaining 8 systems showed responses
different from guessing, but no easily identified pattern was
apparent between any of the 8 systems and their names.
An alternate visualization of the results appears in Figure 3.
Here, colored squares represent mean response, while black
lines indicate the bounds of the first and third quartiles.
While the mean categorization for most systems is higher
than the correct categorization, it is important to note that
mean has limited value for interpretation for two reasons:
first, the categories provided are ordinal with dissimilar
differences between each category. Second, as some
participants were likely guessing, there were more
opportunities to select levels of automation above the
correct category than below. The quartile bounds indicate that
some systems, such as Active Cruise Control, Intelligent
Cruise Control, and AutoCruise, have short distributions
ranging between L1 and L2. Others, such as Highway Pilot,
Traffic Jam Pilot, and Intelligent Assist, have wider
distributions spanning from L1.5 to L4.
DISCUSSION

The first research question was: does the name of a driver assistance system affect a customer's perception of the level of automation of that system? The survey results
indicate that the name of a system does have an impact on
the degree of responsibility that respondents expected to
have as a driver when using a partially automated system.
Overall reported familiarity with the systems was low and
participants were instructed not to use outside resources
when categorizing systems. Consequently, the primary
information contributing to significantly different
categorizations was likely centered on the name of the
system. Initial exploration into age effects suggests these
results are pervasive across all ages, though small sample
sizes for respondents in their 40s may limit interpretation of
these results.
Active Cruise Control was the only system that a majority
of respondents categorized correctly, and it received the
highest confidence and familiarity ratings. Tesla’s
Autopilot also received comparatively high familiarity
ratings, but accuracy was in line with other systems. Low
accuracy could be due to Autopilot’s name, but as prior
familiarity with the system was notable, little can be said
about the effects of solely the term “Autopilot” on
determination of system capability. Rather, the differing
results of these two higher familiarity systems suggest
outside factors (e.g., hands-on experience, media reports, educational material) likely have a greater impact than name alone on setting expectations for a system.

Table 4. Raw residuals of Chi-Square Goodness of Fit Tests for equal distribution of responses across Levels 1-5 showed three patterns in name and level categorization.

System | X² | df | p | L1 | L1.5 | L2 | L3 | L4 | L5 | Conf. Median | Conf. Mode

"Cruise" systems: lower levels
Active Cruise Control | 473.9 | 5 | <0.001 | 159 | 5 | -41 | -20 | -53 | -50 | 4 | 5
AutoCruise | 175.3 | 5 | <0.001 | 65 | 60 | -9 | -18 | -51 | -45 | 4 | 4
Intelligent Cruise Control | 149.9 | 5 | <0.001 | 63 | 52 | -6 | -30 | -42 | -39 | 3 | 4
Super Cruise | 68.2 | 5 | <0.001 | -22 | 40 | 38 | -4 | -17 | -34 | 3 | 1

"Assist" systems: split between 1.5 & 3
Driving Assistant Plus | 139.6 | 5 | <0.001 | -45 | 38 | 13 | 61 | -18 | -51 | 3 | 3
Intelligent Assist | 38.4 | 5 | <0.001 | -23 | 35 | -9 | 23 | -4 | -20 | 3 | 3
Pilot Assist | 156.5 | 5 | <0.001 | -35 | 66 | 17 | 39 | -32 | -56 | 3 | 3
Traffic Jam Assist | 70.0 | 5 | <0.001 | 1 | 39 | -14 | 34 | -17 | -43 | 3 | 3

Closest to random guessing
ProPilot | 59.5 | 5 | <0.001 | -50 | -22 | 20 | 20 | 20 | 13 | 3 | 1
Highway Pilot | 67.6 | 5 | <0.001 | -42 | 10 | 24 | 33 | 10 | -36 | 3 | 3
Traffic Jam Pilot | 32.4 | 5 | <0.001 | -26 | 15 | 10 | 27 | -1 | -24 | 3 | 3
Distronic Plus | 48.5 | 5 | <0.001 | -24 | 15 | 30 | 24 | -24 | -23 | 1 | 1
EyeSight* | 36.8 | 5 | <0.001 | -11 | 23 | -6 | 32 | -20 | -16 | 3 | 1

Drive Pilot | 89.7 | 5 | <0.001 | -46 | 10 | 39 | 38 | -3 | -36 | 3 | 1
Pilot Plus | 113.6 | 5 | <0.001 | -51 | 2 | 46 | 23 | 30 | -49 | 3 | 1
Enhanced Autopilot | 97.6 | 5 | <0.001 | -59 | -28 | -5 | 42 | 30 | 20 | 3 | 3
Intelligent Pilot | 100.7 | 5 | <0.001 | -54 | -2 | 41 | 46 | -5 | -25 | 3 | 3
Intelligent Drive | 76.6 | 5 | <0.001 | -31 | 48 | 31 | 5 | -25 | -27 | 3 | 3
Autopilot | 58.6 | 5 | <0.001 | -47 | -2 | 43 | 14 | -2 | -5 | 3 | 4

*EyeSight also showed a large proportion of responses on L0
**Confidence was rated on a 5-pt scale, with 1 being low confidence & 5 being high confidence
Setting expectations from the beginning is important, and
first impressions have long been known to influence use
[19]. Nevertheless, misconceptions in perceptions of a
system can be overridden as a consumer receives more
information and first-hand experience with a system.
Moving forward, manufacturers or other parties will need to
continue investing in appropriate ways of educating drivers
on responsible use of their partially automated driving
system. As one example of a more integrated education
approach, Subaru has developed asales and delivery system
for the EyeSight system that aims to help enhance
consumer understanding throughout the purchase process
[1]. While Subaru’s developments in this area may be
industry leading, other manufactures have the opportunity
to leverage observations surrounding consumersuse of
current systems [28] to inform and refine models for use
during the sales process or real-time coaching.
The second research question was: do commonly used terms when branding ADAS (e.g., Auto, Pilot, Assist, Cruise) direct consumer perceptions toward presumptions of lower or higher levels of automation? Survey results indicated that terms, to varying degrees, influence consumers' perceptions of automation level. For example,
Cruise Control is an established in-vehicle technology with
which many drivers are familiar. Unsurprisingly, “cruise”
systems were frequently rated at the lower end of the
automation scale. It appears drivers interpreted “cruise”
systems to be slightly more automated than cruise control,
setting an expectation that while a “cruise” system might be
able to handle part of the driving task, ultimate
responsibility remained on the driver. Though name may
not be the most important factor for setting consumer
expectations, manufacturers could benefit from leveraging
understood terminology from established and high-
familiarity systems to properly orient consumer
understanding of their responsibilities while driving and
using a system.
Care should be taken when using potentially ambiguous terms to name systems. "Assist" systems, which attempt to indicate that the driver should not be relying on the vehicle to complete all of the driving tasks, were a source of confusion: respondents frequently rated these systems as either L1.5, which involves the system providing speed and steering support while requiring the driver to keep their hands on the wheel, or L3, which involves the system handling most tasks and the driver being prepared to take over if requested. In one (L1.5), the system is assisting the driver, who holds responsibility for OEDR. In the other (L3), the driver is expected to be ready to assist the system, which is responsible for the dynamic driving tasks. These two levels require very different input from the driver, and avoiding confusion regarding the driver's role is crucial.
Overall, the results suggest that brand names do influence
perceptions of technologies; yet, brand names do not offer
enough information to appropriately set driver expectations
for their role while driving. The wide distribution of
responses for some systems, notably Highway Pilot,
Traffic Jam Pilot, and Intelligent Assist, indicates that the name of a system may be interpreted in numerous different ways by individual consumers. As many consumers own more than
one vehicle, a greater degree of commonality of design and
naming characteristics (e.g. ABS, ESC, etc.) in combination
with increased driver education may be critical in the
successful transformation of personal mobility from largely
manual control through to higher levels of automation. As
the aviation literature has long stipulated, with increasing levels of automation, increased education is required to ensure operators are fully aware of their role [26]. It is unclear how current automotive and technology developers building automated driving systems across the levels are fully embracing human-centered design principles from conceptualization to technology naming, marketing, delivery, and eventual use. It is clear that naming conventions could be used to help amplify system intuitiveness (e.g., where the drivers' and systems' mental models by nature have a high degree of overlap), and to better facilitate adoption of automated driving features that have the potential to revolutionize mobility and increase vehicle safety.

Figure 3. Simplified visualization of level classification distributions. Colored boxes indicate mean ranking, and black lines represent bounds of the first and third quartiles.
Consistent with previous gender research, men in this
survey were more confident in their categorizations when
they were in fact incorrect than were women who were
incorrect [16]. This overconfidence, combined with a
general male preference to seek out information on their
own rather than to be provided with assistance from
dealership staff or the car itself [3], could create additional
challenges for male consumers in setting appropriate driver
mental models at first exposure to a system. As these results
suggest, name alone is not enough to appropriately orient
drivers to system limitations and appropriate use. One
solution might be to necessitate a self-guided tutorial or
other training sessions run by the system itself, in which the
system could be locked until the driver completes the
tutorial.
As research on this topic expands, it is recommended that a
naming guide be created to provide insight for
manufacturers when designing and marketing new systems.
To that end, future research on this topic would benefit
from a larger, more nationally representative sample. This
survey was limited in the age ranges represented in the
sample, as well as the geographic distribution and education
level of respondents. Future research may also need to limit
the number of systems addressed. Due to the similarity and
overlapping nature of many of the technology names (e.g.,
Traffic Jam Pilot, Traffic Jam Assist, and Pilot Assist), it
was difficult to interpret which term was affecting
classification to a greater degree. A more targeted
approach could provide deeper insight into the specific
portions of each name and how they relate to automation
and driver role expectations.
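Such a targeted comparison could, for instance, tabulate how often names containing a given component (e.g., "Pilot" vs. "Assist") are classified above SAE Level 2 and test for independence with a chi-square statistic, as in the analysis approach the survey used [29]. The sketch below uses made-up counts for illustration only, not the survey's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n  # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2


# Hypothetical counts: responses classifying the system above L2
# vs. at/below L2, split by name component.
table = [[40, 60],   # names containing "Pilot"
         [20, 80]]   # names containing "Assist"
print(round(chi_square_2x2(table), 2))  # → 9.52 (> 3.84, the .05 critical value at df = 1)
```

With these illustrative counts, the association between name component and classification level would be statistically significant, suggesting the name component shifts expectations.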
ACKNOWLEDGMENTS
Support for this work was provided by the Advanced
Vehicle Technology (AVT) consortium at MIT. The views
and conclusions being expressed are those of the authors,
and have not been sponsored, approved, or endorsed by
members of the consortium.
REFERENCES
1. G. Abe and J. Richardson. 2006. Alarm timing, trust and driver expectation for forward collision warning systems. Applied Ergonomics, 37(5), 577-586.
2. Hillary Abraham, Hale McAnulty, Bruce Mehler, and Bryan Reimer. 2017. A Case Study of Today's Automotive Dealerships: The Introduction and Delivery of Advanced Driver Assistance Systems. Transportation Research Record: Journal of the Transportation Research Board, 2660. DOI: 10.3141/2660-02.
3. Hillary Abraham, Chaiwoo Lee, Samantha Brady, Craig Fitzgerald, Bruce Mehler, and Joseph F. Coughlin. 2017. Autonomous Vehicles and Alternatives to Driving: Trust, Preferences, and Effects of Age. In Proceedings of the Transportation Research Board 96th Annual Meeting (TRB '17).
4. M. Beggiato and J. F. Krems. 2013. The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transportation Research Part F: Traffic Psychology and Behaviour, 18, 47-57.
5. Neal E. Boudette. 2017. Tesla's Self-Driving System Cleared in Deadly Crash. Retrieved May 4, 2017 from model-s-autopilot-fatal-crash.html
6. J. K. Choi and Y. G. Ji. 2015. Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human-Computer Interaction, 31(10), 692-702.
7. Consumer Reports. 2016. Tesla's Autopilot: Too Much Autonomy Too Soon. Retrieved April 4, 2017 from http://www.consumerreports.org/tesla/tesla-autopilot-too-much-autonomy-too-soon/
8. Alex Davies. 2017. Cadillac Cracks a Self-Driving Puzzle by Shoving a Camera in your Face. Retrieved May 15, 2017 from driving-puzzle-shoving-camera-face/
9. European Commission. 2016. Status of the review of the General Safety and Pedestrian Safety Regulations. Presentation (14 December 2016). Retrieved May 16, 2017 from /wp29grsp/GRSP-60-21e.pdf
10. M. Ghazizadeh, Y. Peng, J. D. Lee, and L. N. Boyle. 2012. Augmenting the technology acceptance model with trust: Commercial drivers' attitudes towards monitoring and feedback. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56(1), 2286-2290. Los Angeles, CA: Sage Publications.
11. J. L. Harbluk, Y. I. Noy, and M. Eizenman. 2002. The impact of cognitive distraction on driver visual behaviour and vehicle control (No. TP 13889 E).
12. T. A. Kazi, N. A. Stanton, G. H. Walker, and M. S. Young. 2007. Designer driving: drivers' conceptual models and level of trust in adaptive cruise control. International Journal of Vehicle Design, 45(3), 339-360.
13. J. D. Lee and N. Moray. 1994. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
14. J. D. Lee and K. A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
15. Paul Lewis, Gregory Rogers, and Stanford Turner. 2017. Beyond Speculation: Automated Vehicles and Public Policy. Retrieved May 16, 2017 from https://www.enotrans.org/wp-
16. Mary Lundeberg, Paul W. Fox, and Judith Punccohar. 1994. Highly Confident but Wrong: Gender Differences and Similarities in Confidence Judgments. Journal of Educational Psychology, 86, 114-121.
17. D. Maheswaran, D. M. Mackie, and S. Chaiken. 1992. Brand name as a heuristic cue: The effects of task importance and expectancy confirmation on consumer judgments. Journal of Consumer Psychology, 1(4), 317-336.
18. Russ Mitchell. 2016. Controversy over Tesla "autopilot" name keeps growing. Retrieved May 12, from fi-hy-autopilot-controversy-20160721-snap-story.html
19. Michael G. Morris and Andrew Dillon. 1997. How user perceptions influence software use. IEEE Software, 14(4), 58-65.
20. NHTSA. 2017. Investigation PE 16-007.
21. D. A. Norman. 1990. The 'problem' with automation: Inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 327(1241), 585-593.
22. R. Parasuraman and V. Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230-253.
23. Bryan Reimer, Anthony Pettinato, Lex Fridman, Joonbum Lee, Bruce Mehler, Bobbie Seppelt, Junghee Park, and Karl Iagnemma. 2016. Behavioral Impact of Drivers' Roles in Automated Driving. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicle Applications (AutomotiveUI '16), 217-224. DOI: 10.1145/3003715.3005411
24. SAE. 2016. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE Standard J3016.
25. S. Samuel and D. L. Fisher. 2015. Evaluation of the Duration. Transportation Research Record: Journal of the Transportation Research Board, (2518), 9-17.
26. N. B. Sarter, D. D. Woods, and C. E. Billings. 1997. Automation surprises. In G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics, second edition. Wiley.
27. Bobbie Seppelt and J. D. Lee. 2007. Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies, 65(3), 192-205.
28. Bobbie Seppelt, Bryan Reimer, Linda Angell, and Sean Seaman. 2017. Considering the Human across Levels of Automation: Implications for Reliance. In Proceedings of the Ninth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design (DA '17). In press.
29. Donald Sharpe. 2015. Your Chi-Square Test is Statistically Significant: Now What? Practical Assessment, Research & Evaluation, 20(8).
30. N. Trübswetter and K. Bengler. 2013. Why should I use ADAS? Advanced driver assistance systems and the elderly: knowledge, experience, and usage barriers. In Proc. 7th Int. Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, 495-501.
31. Konrad Webner. 2016. Awareness and utilization of the Autopilot Tesla Survey. Presentation of results: Customer survey. Retrieved April 4, 2017 from https://www.tesla.com/sites/default/files/blog_attachments/tesla_survey_autopilot_awareness.pdf