Uncanny and Unsafe Valley of Assistance
and Automation: First Sketch
and Application to Vehicle Automation
Frank Flemisch, Eugen Altendorf, Yigiterkut Canpolat, Gina Weßel,
Marcel Baltzer, Daniel Lopez, Nicolas Daniel Herzberger, Gudrun
Mechthild Irmgard Voß, Maximilian Schwalm, and Paul Schutte
Abstract
Progress in sensors, computer power and increasing connectivity allows building and operating ever more powerful assistance and automation systems, e.g. in aviation, cars and manufacturing. Besides many benefits, new problems occur, e.g. in human-machine interaction. In the field of robotics, the metaphor of an uncanny valley is known, where robots showing high, however not perfect, similarities to humans are perceived by humans as uncanny and unsafe. In the field of automation, e.g. vehicle automation, a comparable, metaphorical design correlation is implied: an unsafe valley, e.g. between partially- and highly-automated automation levels, in which a loss of safety could occur due to misperceptions. This contribution sketches the concept of the (uncanny and) unsafe valley of automation, summarizes early affirmative studies, gives first hints towards an explanation of the valley, and outlines the design space for securing the borders of the valley and for bridging it.
F. Flemisch (*)
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany
e-mail: frank.flemisch@fkie.fraunhofer.de
E. Altendorf • Y. Canpolat • G. Weßel
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany
M. Baltzer • D. Lopez
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany
N.D. Herzberger • G.M.I. Voß • M. Schwalm
ika Institut für Kraftfahrzeuge, RWTH Aachen, Aachen, Germany
P. Schutte
Aviation Development Directorate AMRDEC/RDECOM, US Army, Brooklyn, NY, USA
© Springer-Verlag GmbH Germany 2017
C.M. Schlick et al. (eds.), Advances in Ergonomic Design of Systems, Products and Processes, DOI 10.1007/978-3-662-53305-5_23
Keywords
Automation • Assistance • Robotics • Human-machine systems • Uncanny unsafe valley
1 Introduction: Assistance, Automation and Robotics
Enabled by technical advancements in the field of sensors, computers and connectivity, as well as motivated by cost pressure along with ever-increasing performance requirements, the complexity of information systems has steadily grown over the last decades (cf. Hollnagel 2007). A part of this complexity can be compensated with assistance systems and automation; however, unwanted side effects such as "operator/pilot out of the loop" or "mode confusion" (cf. Billings 1997) are reported in a variety of domains like aviation, nuclear power plants and automotive. Rather than speaking about over-automation in an undifferentiated manner, Norman (1990) points out that the problem is not over-automation but inappropriate feedback and interaction.
There is a concept in robotics, which can be considered a specific form of automation, known as "The Uncanny Valley": robots showing high, however imperfect, similarities to humans are perceived by humans as uncanny and disconcerting (Mori 1970; Mori et al. 2012). Conscious of the uncanny valley, research and development in robotics is focusing on cooperative robotics, where humans and highly automated robots work together to a certain extent in the same work spaces, instead of on fully-automated robots (cf. Mayer 2012; Kuz et al. 2015).
A similar development regarding cooperative assistance and automation is currently emerging in the area of ground vehicles, ensuing from the aviation domain (e.g., Flemisch and Onken 1998; Schutte 1999; Goodrich et al. 2006). It became increasingly clear, through basic concepts such as levels of automation, that assistance and automation systems are related and (a) should be discussed holistically and (b) can be depicted on a scale, that is, a spectrum of assistance and automation (cf. Flemisch et al. 2003, 2008). This point of view was later applied in the standard categorization of vehicle automation (cf. Gasser et al. 2012a; SAE 2014), which differentiates between assisted, partially- and highly-automated systems.
Figure 1 shows a simplified scale of assistance and automation related to the control distribution between the human and the automation in the assistance- and automation-levels, including manual, assisted, partially-, highly- and fully-automated/autonomous.

Fig. 1 Control distribution between the human and automation represented as an assistance- and automation-scale, here with explicit automation levels/modes (inspired by Sheridan 1980; Flemisch et al. 2003, 2008, 2012, 2014, 2015a, b; Gasser et al. 2012b; SAE 2014)
A possible unsafe valley of automation can be found in the right half of the scale, between partially- and highly-automated, which could be rather uncanny for the user and, more importantly, rather unsafe, as described further below.
2 Early Indicators for the Existence of an Unsafe Valley
There is a good chance that the metaphor of an (uncanny and) unsafe valley can be applied to automation in all kinds of domains. Early systematic explorations within the area of partially- and highly-automated vehicle control have been conducted for ground and air vehicles since 2003 (NASA H-Mode) and since 2004 for ground vehicles as part of the DFG H(orse)-Mode projects.
These were inspired by the H-metaphor, a design metaphor that takes the rider-horse interaction as a blueprint for vehicle automation (Flemisch et al. 2003, 2015a, b; Bengler and Flemisch 2011; Altendorf et al. 2015). The initial basic research sparked a series of national and EU projects, introduced the term highly-automated driving (e.g. Flemisch et al. 2006; Hoeger et al. 2008, 2011) and inspired the development of partially-automated "piloted" driving, e.g. by Volkswagen, Audi and Mercedes, and the more chauffeur-inspired Tesla.
At the beginning of this research and development in 2000, it was debated whether one or multiple modes between assisted and fully-automated automation levels would be advisable and how they should be designed, especially regarding the degree of involvement of the operators, here the drivers, the extent of the automation's intervention, and the required safety measures, e.g. operator monitoring.
With accumulating research, it became clear that there are combinations of partially- as well as highly-automated modes that are functional, while other implementations are not. An example of a well-functioning implementation of a lower automation level in the car domain is presented by Ma and Kaber (2005). In their implementation, automation only supports longitudinal control. They revealed that an Adaptive Cruise Control (ACC) system is able to enhance system performance in terms of lane deviations and speed control in tracking a lead vehicle and to increase drivers' situation awareness, even when drivers are distracted by a non-driving-related task.
In the first decade of this century, research explored whether assistive (i.e., not fully autonomous) solutions beyond ACC, with coupled longitudinal and lateral control, could be successful. Starting from the H-Mode projects, a series of studies (e.g. Schieben and Flemisch 2008; Schieben et al. 2008; Petermann and Schlag 2009) have shown that designs differentiating between partially- and highly-automated systems and using well-arranged transitions could succeed. Rauch et al. (2010) demonstrated that an integrated driver-state detection may improve the safety and acceptance of automation systems.
Furthermore, Merat et al. (2012) presented a safe implementation of a highly-automated vehicle, which takes over longitudinal and lateral control and can perform gentle maneuvers itself. They conducted an experiment in which the automation had complete control of the automobile in normal situations, and drivers were warned when approaching an obstacle and had to resume manual control. The results showed that, in their implementation of highly-automated driving, no negative effects on system performance emerged.
Besides these successful examples, there are clear hints that areas between well-functioning modes and their variants exist which are clearly less safe: For example, simulator-based studies conducted at DLR showed that partially-automated driving designs in which drivers no longer had to apply steering torques can cause problems in compensating system failures. However, when the driver still needed to apply some of the required steering torques, these failures could be absorbed (Schieben and Flemisch 2008). Furthermore, additional studies (e.g., Damböck 2013; Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and Schwalm 2015) have shown that highly automated designs could lead to reduced take-over capabilities of drivers. This correlation is evidenced, e.g., by a lack of compensatory reaction in terms of reducing activity in non-driving-related tasks, a reduction that would be needed for an appropriate preparation of a take-over situation (Voß and Schwalm 2015; Schwalm et al. 2015; Schwalm and Ladwig 2015). Moreover, a real-world indicator for the existence of an uncanny/unsafe valley might be the first fatal crash of an automated ground vehicle, a partially automated Tesla, which became public in 2016, shortly after the first publication of the unsafe valley in February 2016 (Flemisch et al. 2016a).
As of 2016, everyday and user experience with highly automated ground vehicles is sparse; however, additional indicators for the existence of an unsafe valley can be derived from the aviation domain. As the applicability of the H-Mode has shown, ergonomic principles for cooperative guidance and control can be applied to both domains. Moreover, Schutte et al. (2016) argue for a similar phenomenon in the aviation area. They present examples of systemic failure resulting from highly automated flying (National Transportation Safety Board 2010; BEA 2012) and argue that these incidents were partly due to bad system design instead of pilot failure alone. They presented an alternative cockpit design which keeps the pilot in the loop; in terms of the unsafe valley, they avoid falling into it by staying on the left side of the abyss. During scientific discussion in the context of this publication, it turned out that other researchers already had similar ideas to the valley of automation. For example, Maurer (2016) describes a U-shaped curve for the relation between user transparency and driver assistance systems. Furthermore, Maurer (2016) and Eckstein (2016) describe the "grand canyon" of driver-assistance systems, which refers especially to the differences in effort and protection systems between partially and highly automated vehicles.
3 A First Glimpse: What Happens in an Unsafe Valley?
Are there scientific phenomena that could determine the existence and the characteristics of such an unsafe valley? Figure 2 pictures an example of a potential "crash" into an uncanny valley, here applied to the car domain. In that example, a system failure with an adjacent take-over occurs in front of a narrow curve. In one condition (named partially-/highly-automated here, because the system was supposed to be operated as partially automated, but drivers used it the way highly automated systems are used), this leads to an unsafe driving performance, i.e. a "crash" into an uncanny valley. This does not occur in the other condition (partially-automated). This suggests that there might be a correlation between the control repartition, which ranges from manual to fully automated driving, and an output quantity such as performance or driving safety. Figure 2 conceptualizes this issue: While a mode with a low degree of automation (M3.1: e.g., partially-automated) still provides sufficient safety, a higher automation level (M3.2) seems to be rather unsafe. However, beyond this unsafe valley, an even higher automation level (M4) could be safe again.
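To make the non-monotonic shape of the valley concrete, the following purely illustrative Python sketch assigns notional safety values to the modes of Fig. 2 and locates the dip; all numbers are invented assumptions, not measurements from any study:

```python
# Purely illustrative: safety is assumed NOT to be a monotonic function
# of the level of automation. All numbers are invented, not measured.
notional_safety = {
    "M1 (manual)": 0.90,
    "M3.1 (partially automated)": 0.93,
    "M3.2 (partially-/highly automated)": 0.70,  # the dip: the unsafe valley
    "M4 (highly automated, secured)": 0.95,
}

modes = list(notional_safety)
# A valley is any mode that is less safe than both its neighbours on the scale.
for left, mode, right in zip(modes, modes[1:], modes[2:]):
    if notional_safety[mode] < min(notional_safety[left], notional_safety[right]):
        print("unsafe valley at", mode)
```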
An important element which could lead to the safety drop in M3.2 is the connection between the automation's performance and the operator's performance in case of a take-over situation: A potential correlation between the operator's involvement and his take-over capability could be that, if the former is too low, the latter will not be available in time. A nominally highly reliable and capable automation that infrequently is insufficient and incapable will induce a pure monitoring role for the operator, i.e. "supervisory control", for which humans are not well prepared (e.g., Sheridan 1976; Endsley 1995). A pertinent approach from Schwalm et al. (e.g., Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and Schwalm 2015) postulates that operators, here drivers, abandon a continuous control and regulation process in case of a (too) high automation level. Due to this, they supposedly are no longer capable of applying regulatory measures in terms of a functional situation management. If, instead, the drivers are fully involved in the driving task, i.e. manual driving, this assumed regulatory process
would allow them to analyze and anticipate the driving situation, and to adequately distribute available cognitive resources.

Fig. 2 An example of a "crash" into an uncanny valley: intercepting a system failure prior to a curve (green: system variant partially-automated; red: system variant partially-/highly-automated)
In order to conceptualize this idea of a reduced driver's involvement as a risk factor for the appearance of an unsafe valley, a schema was developed for the context of driving, in which the different automation levels were combined with possible states of driver involvement (cf. Fig. 3; Herzberger et al. 2016). These states depict the minimal requirements regarding the driver's involvement, for each level of automation, which are necessary to guarantee safe driving performance. These minimum requirements are reported separately for the three levels of the driving task [navigation, guidance and stabilization; see Donges (2015)] in order to provide a more detailed sketch of the driver-system interaction. In Fig. 3, the orange-shaded area represents driver-sided task fulfillment, while the blue-shaded area represents system-sided task fulfillment (Herzberger et al. 2016).
In total, five driver states were defined on the basis of the SAE levels (SAE International 2014); Fig. 3 combines them with the levels of automation.

Fig. 3 Minimal requirements for driver states. Note: The crosshatched area depicts the unsafe valley. The figure's table can be rendered as follows (upper row per level: minimal requirements for driver involvement; lower row: potential distribution of control between human and machine):

BASt level           SAE | Navigation | Guidance | Stabilisation
Driver only           0  | F          | F        | F
                         | -          | -        | -
Assisted              1  | F          | I1       | I1
                         | -          | A        | A
Partially automated   2  | F          | I2       | I2
                         | -          | L        | L
Highly automated      3  | F          | R        | R
                         | -          | L        | L
Fully automated       4  | -          | -        | -
                         | F          | F        | F
-                     5  | -          | -        | -
                         | F          | F        | F

In the following, these five driver states will be described regarding the required extent of involvement, presented from high to low. The state Fully (F) requires an active performance of the entire driving task. The second state,
Involvement (I), is divided into Involvement 1 (I1) and Involvement 2 (I2). State I1 still requires active performance of either longitudinal or lateral control and cognitive monitoring of the entire driving task. Moreover, drivers in state I1 have to be ready to take over the driving task at any time without a take-over request (TOR). In contrast to I1, drivers in state I2 do not actively perform a driving task. However, they are still required to perform the cognitive monitoring, and they also have to be ready to take over the driving task at any time without a TOR. In the state Retrievable (R), cognitive monitoring of the driving task is no longer required. Nevertheless, the driver has to be able to resume the driving task after an appropriate TOR. In the final state, Non-Retrievable (NR), the driver cannot take over control of the vehicle. This state is reached if the driver, e.g., sleeps, or if a transition from Retrievable to a higher level of involvement is not possible. These five states only define the minimal requirements for a driver's involvement for the safe management of the driving task. Requirements for a possible intervention by the driver are not considered for now.
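These five states and their ordering can be written down compactly. The following Python sketch is purely illustrative: the state names follow the text above, while the numeric ranks and the helper `satisfies` are our assumptions, not part of the published schema:

```python
from enum import IntEnum

class DriverState(IntEnum):
    """Driver states after Herzberger et al. (2016), ordered by the amount
    of involvement they provide (numeric ranks are an illustrative choice)."""
    NR = 0  # Non-Retrievable: cannot take over (e.g. asleep)
    R  = 1  # Retrievable: no monitoring, take-over after an appropriate TOR
    I2 = 2  # Involvement 2: no active control, but cognitive monitoring, no TOR
    I1 = 3  # Involvement 1: one control axis active, monitoring, no TOR
    F  = 4  # Fully: active performance of the entire driving task

def satisfies(actual: DriverState, required: DriverState) -> bool:
    """A driver state is safe if it provides at least the required involvement."""
    return actual >= required
```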
Two different vehicle conditions were specified: First, in the condition of assisted driving (A), the vehicle takes over either longitudinal or lateral control during the driving task. Second, in the condition of lead driving (L), the vehicle performs the respective longitudinal or lateral control to the full extent. On top of that, the earlier introduced state Fully (F) can be transferred to the vehicle; it then requires an active performance of the entire driving task by the vehicle.
Based on this concept of states of driver involvement in the context of automated driving, it is possible to derive an explanation for a "crash" into the uncanny/unsafe valley: For drivers, SAE Levels 2 and 3 seem to pose the same requirements in normal automated driving, as both lateral and longitudinal guidance are carried out by the system. Nevertheless, the actual requirements a Level 2 system poses to the driver are clearly higher due to the missing TOR in case of a system failure: in Level 2, drivers have to constantly monitor the driving task (I2), while in Level 3 this is not required (R). In other words, drivers tend to confuse a Level 2 (I2) state with a Level 3 (R) state (cf. the orange crosshatched area in Fig. 3). Only when system limits are reached do the differences between the two (required) states of involvement appear, and the driver potentially crashes into the uncanny/unsafe valley.
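Building on the sketch above, the minimal requirements of Fig. 3 for the guidance and stabilization levels can be tabulated and used to flag exactly this Level 2/3 misconception; again a hedged sketch, with `REQUIRED` and `in_unsafe_valley` as hypothetical names:

```python
# Minimal required driver state per SAE level for the guidance and
# stabilization levels of the driving task, read off Fig. 3.
REQUIRED = {
    0: DriverState.F,
    1: DriverState.I1,
    2: DriverState.I2,  # constant monitoring, take-over without TOR
    3: DriverState.R,   # take-over only after an appropriate TOR
}

def in_unsafe_valley(sae_level: int, actual: DriverState) -> bool:
    """True if the driver's actual involvement falls below the minimum,
    e.g. a driver in state R while a Level 2 system is running."""
    required = REQUIRED.get(sae_level)
    return required is not None and not satisfies(actual, required)

# The canonical "crash" into the valley: a Level 2 system used as if it were Level 3.
assert in_unsafe_valley(2, DriverState.R)      # misconception -> unsafe
assert not in_unsafe_valley(3, DriverState.R)  # matches requirement -> safe
```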
Another advantage of the classification of driver states is the timely recognition of decreasing driver involvement. If this risk is recognized, possible consequences can be rebalanced and/or reduced. In the following, the temporal course of the occurrence of an uncanny/unsafe valley is presented.
Figure 4 highlights the temporal course of a "crash" into an unsafe valley: At first, everything works properly, because the automation's capabilities are sufficient for handling the vehicle control situation and the operator is still able to take over (T1). Over time, habituation effects can emerge (T2). If a system failure occurs in that state, the human operator's take-over capability would be insufficient for the safe management of the required take-over maneuver (T3). Within that scenario, the unsafe valley metaphorically resembles a crevasse that seems to be crossable over a stable bridge; this bridge, however, will break down as soon as it is used and will devour the all-too-trustful operator. Another possible metaphor depicts a pair of scissors: One blade represents the operator's take-over capability, the other the automation's availability. The first blade slowly closes as the operator's take-over capability decreases with increasing trust. As soon as the automation's availability decreases temporarily, the movement of the two blades cuts the operator off from the control of the process, e.g., of the vehicle.
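The scissors metaphor can be turned into a small time-step sketch, assuming, purely for illustration, that take-over capability decays with habituation while the automation's availability drops only once; all parameter values are invented:

```python
def scissors(steps=100, decay=0.02, failure_at=60, required=0.5):
    """Toy model of the 'scissors': the operator's take-over capability
    decays with habituation, while the automation is almost always
    available. All parameters are illustrative assumptions.
    Returns the time step of the 'crash' into the valley, or None."""
    capability = 1.0
    for t in range(steps):
        capability = max(0.0, capability - decay)  # habituation (T1 -> T2)
        automation_available = (t != failure_at)   # brief availability drop (T3)
        if not automation_available and capability < required:
            return t  # both blades have closed: the operator is cut off
    return None

print(scissors())  # -> 60 with these illustrative parameters
```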
The deeper mechanism of the "scissors" could be located between the operator's confidence in the system's performance capability and the operator's take-over capability, as sketched, e.g., by Manzey and Bahner (2005). Here, the relationship between the automation capacities perceived or attributed by the operator and the actual capacities of the automation seems to be relevant. First, there is the ideal scenario in which the expectations with regard to the technical system's abilities are realistic and fit the actual capacities. In that case, it is to be expected that even after critical events, which outline the system's boundaries, no adjustments to the attributions are necessary; the trust in the system is on an appropriate and constant level. However, there is a second scenario, in which the actual capacities do not meet the (too high) expectations. According to Lee and See (2004), this inadequate calibration might be traced back to an insufficient precision and specification of the judgment. In the literature, this phenomenon is discussed under the notions of overtrust/overreliance (e.g., Inagaki and Itoh 2013; Lee and See 2004) and automation bias/complacency (e.g., Bahner 2008; Mosier and Skitka 1996). While overtrust and automation bias might be understood as the cognitive components that constitute the exaggerated system trust, overreliance and complacency depict the behavioral component. The latter two terms are often used as complements; they might be conceptualized as an insufficient or infrequent operator-sided system monitoring compared to what actually would be
required with regard to the system's capacities (Bahner 2008).

Fig. 4 A possible connection between control distribution and certainty (T1: normal operation; T2: habituation; T3: degradation)

Only in critical situations that evince the automation's limits does an adaptation of the attributions with regard to the system abilities take place. Subsequently, a strong loss of trust occurs, which is, according to the existing literature (Hoffman et al. 2013; Lee and See 2004), in most cases difficult to compensate. Payre et al. (2016) affirm this role of complacency with regard to the emergence of an unsafe valley. They postulate that distinct complacency will lead to difficulties (e.g. slow reactions) in the course of take-over maneuvers between automated and non-automated driving. Here it also becomes clearer why "unsafe" is the better name for the valley, as the drivers may even feel too comfortable, at least before the incident or accident.
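A minimal sketch of this trust dynamic, assuming, in line with the literature cited above, that flawless experience slowly inflates trust while the first experienced failure deflates it sharply; the gain and penalty values are invented for illustration:

```python
def update_trust(trust, succeeded, gain=0.01, penalty=0.5):
    """Asymmetric trust update: slow growth over flawless experience,
    sharp drop after the first experienced failure. The gain and
    penalty values are illustrative assumptions."""
    if succeeded:
        return min(1.0, trust + gain)
    return max(0.0, trust - penalty)  # hard to rebuild (Hoffman et al. 2013)

trust = 0.5
for _ in range(80):                # a long flawless period inflates trust
    trust = update_trust(trust, True)
print(trust)                       # overtrust has built up (here: 1.0)
print(update_trust(trust, False))  # first failure: trust collapses to 0.5
```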
With regard to that phenomenon, there seems to be an additional irony in the progress of automation: At the beginning of a technological development process, an automated system still has various errors to which an operator can react by means of an increased readiness to take over. Due to the technical system's increasing availability over time, the user's first experience of a system error or failure will be postponed, and an undue confidence will build up. Under these conditions, the same error that could be compensated before will have a higher impact on road safety and user expectations. Likewise, the loss of trust will be the stronger, the more difficult a mistake is to compensate and the more costs it provokes (Bahner 2008), as well as the bigger it is and the more difficult it is to predict (Lee and See 2004).
Choi and Ji (2015) provide a more holistic model of trust in adopting an autonomous vehicle: Within their approach, they combined different relevant concepts (e.g., trust, perceived risk and personality traits) within one model and examined it empirically. They found that the concepts of trust (defined via system transparency, technical competence and situation management) and perceived usefulness have a major influence on the behavioral intention to use a system. Other factors such as perceived risk and ease of use, as well as locus of control and sensation seeking, have only minor or no influence at all. According to these results, it might be assumed that exaggerated trust and perceived usefulness lead the naive driver into the uncanny and unsafe valley.
4 A Sketch of the Design Space: The Solution's Dimensions for Safeguarding an Unsafe Valley
Justified by the successful implementation of automated systems such as in Goodrich et al. (2006), Hoeger et al. (2012) or Altendorf et al. (2015), the assumption is that not automation or higher automation per se is unsafe, but that there are unsafe regions around safe automation designs, combinations of different assistance and automation levels, and transitions between levels or modes. How can the unsafe regions be safeguarded in order to utilize the safe regions? Decisive dimensions that form the design space of safeguards could be:
a. Abilities of the human and the automation: The human capability might depend on a selection process, e.g. in domains like aviation, or on the distribution within a general population, e.g. in the driving domain. The abilities of the automation depend on its interplay with the environment, and might be structured according to normal operations, system limits and system failure. Increasingly important will be the (meta-)ability of the automation to describe its own ability to sense and act, e.g. in case of a sensor degradation due to changes in the environment like bad weather.
b. The distribution of tasks, authority, responsibility and a minimum of autonomy for both the automation and the human, as described e.g. by Flemisch et al. (2011).
c. Combining (a) and (b), the control distribution of the corresponding automation level, which interacts with the human's involvement and which could be organized in clear modes (cf. Figs. 1 and 5). It is an open research and engineering question how many and which modes are needed and/or wanted at all, but there are clear hints that too many modes can lead to mode confusion, especially in time-critical situations. It is especially not clear whether partially and conditionally automated levels are needed at all, even if there are hints that a well-designed level of partial automation might improve the take-over ability and might also be fun. Another open research and engineering question is how many different modes can be differentiated and operated safely, and what potential migration paths could look like in order to ensure upwards and downwards compatibility. A sketch of such a mode description follows this list.
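Such a mode description could, purely as an illustration, look as follows; the field names are our assumptions for dimensions (a)-(c), not an established schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationMode:
    """One point on the assistance/automation scale (illustrative schema)."""
    name: str
    automation_control: float  # share of control held by the automation, 0..1
    driver_must_monitor: bool  # is cognitive monitoring required?
    tor_guaranteed: bool       # is a take-over request guaranteed before hand-over?

ASSISTED = AutomationMode("assisted", 0.3, True, False)
PARTIAL  = AutomationMode("partially automated", 0.6, True, False)
HIGH     = AutomationMode("highly automated", 0.9, False, True)
```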
Another dimension of the design space are the transitions between the modes, which can be initiated either by the human (Fig. 5, red) or by the automation (Fig. 5, blue). It is an open research and development question how the transitions are balanced and secured against false and failed transitions. A safeguard, e.g., for a transition from the right side of the automation scale, crossing the valley towards the left side of the scale, could be an interlocked transition, where control is only handed over to the operator if it is really clear that the operator has taken over control. This interlocked transition was successfully implemented and tested for cars and trucks in the HAVEit project. This safeguard works best if the right rim is also secured with a transition to a minimum risk state, e.g., via a minimum risk maneuver, as described in Hoeger et al. (2011).

Fig. 5 Uncanny valleys: modes and transitions (safe and unsafe regions between SAE 0, SAE 2 and SAE 3; MRM/MRS: minimum risk maneuver/minimum risk state)
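The interlock can be sketched as a small hand-over protocol; the sensor predicates and actuator callbacks below are hypothetical interfaces, and the sketch illustrates the idea rather than the HAVEit implementation:

```python
import time

def interlocked_handover(hands_on_wheel, eyes_on_road, release_to_driver,
                         start_minimum_risk_maneuver, timeout_s=10.0):
    """Hand control to the operator only once the take-over is positively
    confirmed; otherwise fall back to a minimum risk maneuver. All callables
    are hypothetical interfaces, not an existing vehicle API."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if hands_on_wheel() and eyes_on_road():  # positive confirmation
            release_to_driver()                  # the interlock opens
            return True
        time.sleep(0.1)                          # automation keeps control
    start_minimum_risk_maneuver()                # secure the right rim
    return False
```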
A safeguarding measure for the left rim of the unsafe valley can be, for example, to detect an insufficient involvement of the human by monitoring attention and to react accordingly, e.g. through prompts, as described e.g. in Rauch et al. (2010), Schwalm et al. (2015) and Schwalm and Ladwig (2015).
Another safeguard on the left rim of the valley is the communication of the automation's ability, as shown by Beller et al. (2013) based on Heesen et al. (2010), as a concept to communicate an uncertainty value of the automation.
A similar direction of safeguarding against falling into an unsafe valley is provided by different authors who promote a so-called "trust management": First, systems could already be adapted within the design phase (e.g., a system's transparency could be highlighted, see Bahner 2008). Alternatively, users' system perception could be improved. Last, system capabilities and limits should be communicated more overtly (cf. Muir 1994). In this context, Payre et al. (2016) postulate that intensive practice is required in order to bridge the uncanny valley. According to them, outlining the system's workings and its boundaries is indispensable in order to avoid safety-critical automation effects.
An additional concept that could help prevent people from falling into the unsafe valley is nudging, the art of promoting certain behaviors through small changes in the environment which nevertheless leave the individual to decide for herself (e.g. Thaler and Sunstein 2014). Nudging could be used to influence humans to avoid the rims of the unsafe valley, for example by promoting more involvement (Flemisch et al. 2016b). A new concept which links nudging even more strongly to self-determination is currently being developed at the Institute for Industrial Engineering and Ergonomics at RWTH Aachen University.
Putting those safeguards together, a holistic picture or concept of human-machine resilience should be derived of how the human-machine system acts in normal operations, reacts to disturbances that push the system towards its limits, and reacts to system failure. Figure 6 shows such transitions between normal operations, system limits and system failure. The upper part of Fig. 6 shows a degradation of the machine that might result in a situation with a control deficit, where the human might be somewhat prepared (upper part) or not sufficiently prepared (middle part). The lower part of Fig. 6 shows a degradation of the human that might result in a complete dropout of the human, where the machine might only be able to handle the situation for a limited time. A minimum risk maneuver that hopefully results in a state with minimum risk or the maximal possible safety might be the last resort to keep the human-machine system safe. From a minimum risk state the human might be able to take over again to normal operations, if possible. The benefit of thinking in a layered approach of normal operations, system limits and failure could be the flexibility to stabilize the human-machine system at the limits and, instead of going into a complete, unrecoverable failure, to recover or gracefully degrade from system limits and failures.
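A hedged sketch of this layered view as a simple state machine; the state and event names paraphrase Fig. 6 and are not a normative model:

```python
# Layered resilience states after Fig. 6: stabilize at the limits and
# recover or degrade gracefully instead of failing completely.
TRANSITIONS = {
    ("NORMAL", "disturbance"): "SYSTEM_LIMIT",
    ("SYSTEM_LIMIT", "stabilized"): "NORMAL",           # recovery
    ("SYSTEM_LIMIT", "degradation"): "SYSTEM_FAILURE",
    ("SYSTEM_FAILURE", "mrm_started"): "MINIMUM_RISK",  # graceful degradation
    ("MINIMUM_RISK", "driver_resumes"): "NORMAL",       # back to normal operations
}

def step(state, event):
    """Advance the human-machine system; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "NORMAL"
for event in ["disturbance", "degradation", "mrm_started", "driver_resumes"]:
    state = step(state, event)
print(state)  # -> NORMAL: recovered via the minimum risk state
```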
Putting those safeguards together systematically could result in a highly cooperative human-machine system that could be inspired by natural examples of cooperation in movement, for example the H(orse)-metaphor (Flemisch et al. 2003), its generalization of shared and cooperative guidance and control, complemation (Schutte 1999) or cooperative automation (e.g. Flemisch et al. 2014; Bengler et al. 2014), and its instantiations H-Mode (e.g., Goodrich et al. 2006; Altendorf et al. 2015) or Conduct-by-Wire (Winner and Hakuli 2006). The key seems to be to combine partially and highly automated automation levels in such a way that, in the partially automated level, the operator is kept involved and prevented from losing her situation and mode awareness, e.g., with continuous haptic feedback and attention monitoring, while in the highly automated level, the human-machine system is secured so that it stays safe even in the extreme case that the operator cannot come back into the loop again.
Fig. 6 Resilience through recovery or graceful degradation between normal operations, system limits or system failure
5 Outlook: Balancing Risks and Chances of Assistance and Automation by Securing the Unsafe Valley
Although the causative reasons for the emergence of such a valley might not be fully deduced yet, it becomes increasingly clear that there is at least one unsafe valley on the scale of assistance and automation, with clear applications at least in air and ground vehicle automation, and a good chance that the concept might also be valuable in other domains. It is undoubtedly necessary to secure the boundaries of the uncanny valley.
There is justified hope that in most cases the unsafe valley can be well structured and therefore comparably well secured. We should nevertheless be aware that there might be cases in which the structure of the unsafe valley is more complex, more like a mountain landscape. In those cases the unsafe valley(s) should first be reasonably mapped before the boundaries are secured and viable bridges are built over them.
In order to increase or maintain safety, and to harvest the many chances of automation, the risks of automation have to be controlled. The systematic mapping and development of safeguarding measures down to safe combinations will require interdisciplinary research and development, but will hopefully prevent naive and probably too trustful operators, system designers and engineers from falling into the unsafe valley, sometimes even an abyss of automation.
Acknowledgments Thanks to the initial supporters at NASA, as well as to the DFG and its referees for the support in the H-Mode projects and in the research program "Kooperativ Interagierende Fahrzeuge", the EU for the support in the projects HAVEit and InteractIVe, and the colleagues at DLR, TU Darmstadt, TU München and RWTH Aachen for the rich discussions.
References
Altendorf E, Baltzer M, Heesen M, Kienle M, Weißgerber T, Flemisch F (2015) H-Mode, a haptic-multimodal interaction concept for cooperative guidance and control of partially and highly automated vehicles. In: Winner H et al (eds) Handbook of driver assistance systems. Springer, Cham
Bahner JE (2008) Übersteigertes Vertrauen in Automation: Der Einfluss von Fehlererfahrungen auf Complacency und Automation Bias. Dissertation, TU Berlin
Beller J, Heesen M, Vollrath M (2013) Improving the driver–automation interaction: an approach using automation uncertainty. Hum Factors 55(6):1130–1141
Bengler K, Flemisch F (2011) Von H-Mode zur kooperativen Fahrzeugführung – grundlegende ergonomische Fragestellungen. In: 5. Darmstädter Kolloquium: kooperativ oder autonom? Darmstadt
Bengler K, Dietmayer K, Farber B, Maurer M, Stiller C, Winner H (2014) Three decades of driver assistance systems: review and future perspectives. IEEE Intell Transport Syst Mag 6(4):6–22
Billings CE (1997) Aviation automation: the search for a human centered approach. Lawrence Erlbaum Associates, Mahwah, NJ
Choi JK, Ji YG (2015) Investigating the importance of trust on adopting an autonomous vehicle. Int J Hum Comput Interact 31:692–702
Damböck D (2013) Automationseffekte im Fahrzeug – von der Reaktion zur Übernahme. Dissertation, TU München
Donges E (2015) Fahrerverhaltensmodelle. In: Handbuch Fahrerassistenzsysteme. Springer Fachmedien, Wiesbaden, pp 17–26
Eckstein L (2016) Personal communication
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37(1):32–64
Bureau d'Enquêtes et d'Analyses (BEA) (2012) Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro–Paris. BEA, Paris
Flemisch FO, Onken R (1998) The cognitive assistant system and its contribution to effective man/machine interaction. NATO RTO MP-3, Monterey, CA
Flemisch FO, Adams CA, Conway SR, Goodrich KH, Palmer MT, Schutte PC (2003) The H-metaphor as a guideline for vehicle automation and interaction (No. NASA/TM-2003-212672). NASA, Langley Research Center, Hampton
Flemisch F, Kelsch J, Schieben A, Schindler J (2006) Stücke des Puzzles hochautomatisiertes Fahren: H-Metapher und H-Mode. 4. Workshop Fahrerassistenzsysteme, Löwenstein
Flemisch F, Kelsch J, Löper C, Schieben A, Schindler J (2008) Automation spectrum, inner/outer compatibility and other potentially useful human factors concepts for assistance and automation. In: de Waard D, Flemisch FO, Lorenz B, Oberheid H, Brookhuis KA (eds) Human factors for assistance and automation. Shaker, Maastricht, pp 1–16
Flemisch F, Schieben A, Temme G, Rauch N, Heesen M (2009) HAVEit public deliverable D33.2 "Preliminary concept on optimum task repartition for HAVEit systems", Brussels
Flemisch F, Schieben A, Strauss M, Lüke S, Heyden A (2011) Design of human-machine interfaces for highly automated vehicles in the EU-project HAVEit. In: Proceedings of the 14th international conference on human-computer interaction
Flemisch F, Heesen M, Hesse T, Kelsch J, Schieben A, Beller J (2012) Towards a dynamic balance between humans and automation: authority, ability, responsibility and control in shared and cooperative control situations. Int J Cognit Tech Work 14(1):3–18
Flemisch F, Bengler K, Bubb H, Winner H, Bruder R (2014) Towards cooperative guidance and control of highly automated vehicles: H-Mode and Conduct-by-Wire. Ergonomics, Special Issue Beyond Human-Centred Automation 57(3). Online 24.2.2014
Flemisch F, Schwalm M, Deml B (2015a) Systemergonomie kooperativ interagierende Fahrzeuge. Projektantrag an die DFG
Flemisch F, Winner H, Bruder R, Bengler K (2015b) Cooperative guidance, control and automation. In: Winner H et al (eds) Handbook of driver assistance systems. Springer, Cham
Flemisch F, Altendorf E, Baltzer M, Rudolph C, Lopez D, Voß G, Schwalm M (2016a) Arbeiten in komplexen Mensch-Automations-Systemen: Das Unheimliche und unsichere Tal (Uncanny Valley) der Automation am Beispiel der Fahrzeugautomatisierung. 62. GfA-Frühjahrskongress "Arbeit in komplexen Systemen – Digital, vernetzt, human?!", Aachen
Flemisch F, Altendorf E, Weßel G, Canpolat Y (2016b) Personal communication
Gasser TM, Arzt C, Ayoubi M, Bartels A, Buerkle L, Eier J, Flemisch F, Haecker D, Hesse T, Huber W, Lotz C, Maurer M, Ruth-Schumacher S, Schwarz J, Vogt W (2012a) Rechtsfolgen zunehmender Fahrzeugautomatisierung – Gemeinsamer Schlussbericht der Projektgruppe. Fahrzeugtechnik F 83, Bundesanstalt für Straßenwesen (BASt)
Gasser TM et al (2012b) Rechtsfolgen zunehmender Fahrzeugautomatisierung – Gemeinsamer Schlussbericht der Projektgruppe. Bundesanstalt für Straßenwesen (BASt), F 83
Goodrich K, Flemisch F, Schutte P, Williams R (2006) A design and interaction concept for aircraft with variable autonomy: application of the H-Mode. In: Digital Avionics Systems Conference, USA
Heesen M, Kelsch J, Löper C, Flemisch F (2010) Haptisch-multimodale Interaktion für hochautomatisierte, kooperative Fahrzeugführung bei Fahrstreifenwechsel-, Brems- und Ausweichmanövern. In: Gesamtzentrum für Verkehr Braunschweig (Hrsg.) Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel AAET, Braunschweig
Herzberger ND, Voß GMI, Schwalm M (2016) Personal communication
Hoeger R, Amditis A, Kunert M, Hoess A, Flemisch F, Krueger H-P, Bartels A, Beutner A (2008) Highly automated vehicles for intelligent transport: HAVEit approach. ITS World Congress, New York
Hoeger R, Zeng H, Hoess A, Kranz T, Boverie S, Strauss M et al (2011) Final report, deliverable D61.1. Highly automated vehicles for intelligent transport (HAVEit). 7th Framework Programme
Hoeger R, Wiethof M, Rheker T (2012) Complexity measures of traffic scenarios: psychological aspects and practical applications. In: International conference on driver behaviour and training 2011, Paris
Hoffman RR, Johnson M, Bradshaw JM, Underbrink A (2013) Trust in automation. IEEE Intell Syst 28(1):84–88
Hollnagel E (2007) Keynote zur 7. Berliner Werkstatt "Prospektive Gestaltung von Mensch-Technik-Interaktion" 2007, Berlin
Inagaki T, Itoh M (2013) Human's overtrust in and overreliance on advanced driver assistance systems: a theoretical framework. Int J Veh Technol 2013:1–8
Kuz S, Bützler J, Schlick CM (2015) Anthropomorphic design of robotic arm trajectories in assembly cells. J Occup Ergon 12(3):73–82
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46(1):50–80
Ma R, Kaber DB (2005) Situation awareness and workload in driving while using adaptive cruise control and a cell phone. Int J Ind Ergon 35(10):939–953
Manzey D, Bahner JE (2005) Vertrauen in Automation als Aspekt der Verlässlichkeit von Mensch-Maschine-Systemen. In: Beiträge zur Mensch-Maschine-Systemtechnik aus Forschung und Praxis. Festschrift für Klaus-Peter Timpe, pp 93–109
Maurer M (2016) Personal communication
Mayer MP (2012) Entwicklung eines kognitionsergonomischen Konzepts und eines Simulationssystems für die robotergestützte Montage. Dissertation. Shaker, Aachen
Merat N, Jamson AH, Lai FC, Carsten O (2012) Highly automated driving, secondary task performance, and driver state. Hum Factors 54(5):762–771
Mori M (1970) The uncanny valley. Energy 7(4):33–35 (in Japanese)
Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot Autom Mag 19(2):98–100
Mosier KL, Skitka LJ (1996) Human decision-makers and automated decision aids: made for each other? In: Parasuraman R, Mouloua M (eds) Automation and human performance: theory and applications. Lawrence Erlbaum Associates, Mahwah, NJ, pp 201–220
Muir BM (1994) Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37(11):1905–1922. doi:10.1080/00140139408964957
National Transportation Safety Board (2010) Loss of control on approach, Colgan Air, Inc. operating as Continental Connection flight 3407, Bombardier DHC-8-400, N200WQ, Clarence Center, New York, February 12, 2009. NTSB/AAR-10/01. National Transportation Safety Board, Washington, DC
Norman DA (1990) The problem with automation. Philos Trans R Soc Lond B 327:585–593
Payre W, Cestac J, Delhomme P (2016) Fully automated driving: impact of trust and practice on manual control recovery. Hum Factors 58(2):229–241
Petermann I, Schlag B (2009) Auswirkungen der Synthese von Assistenz und Automation auf das Fahrer-Fahrzeug-System. Paper presented at the 11. Braunschweiger Symposium Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel (AAET), Braunschweig
Rauch N, Kaussner A, Krueger H-P, Boverie S, Flemisch F (2010) Measures and countermeasures for impaired driver's state within highly automated driving. Transport Research Arena, Brussels
SAE J3016:2014 Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. Society of Automotive Engineers
Schieben A, Flemisch F (2008) Who is in control? Exploration of transitions of control between driver and an eLane vehicle automation. VDI/VW-Tagung Fahrer im 21. Jahrhundert 2008, Wolfsburg
Schieben A, Damböck D, Kelsch J, Rausch H, Flemisch F (2008) Haptisches Feedback im Spektrum von Fahrerassistenz und Automation. In: 3. Tagung Aktive Sicherheit durch Fahrerassistenz, Garching
Schutte PC (1999) Complemation: an alternative to automation. J Inform Tech Impact 1(3):113–118
Schutte P, Goodrich K, Williams R (2016) Synergistic allocation of flight expertise on the flight deck (SAFEdeck): a design concept to combat mode confusion, complacency, and skill loss in the flight deck. In: Stanton NA, Landry S, Di Bucchianico G, Vallicelli A (eds) Advances in human aspects of transportation. Springer, Berlin, pp 899–911
Schwalm M, Ladwig S (2015) How do we solve demanding situations – a discussion on driver skills and abilities. In: 57th Conference of experimental psychologists 2015, Hildesheim
Schwalm M, Voß GMI, Ladwig S (2015) Inverting traditional views on human task-processing behavior by focusing on abilities instead of disabilities: a discussion on the functional situation management of drivers to solve demanding situations. In: Engineering psychology and cognitive ergonomics. Springer, Cham, pp 286–296
Sheridan TB (ed) (1976) Monitoring behavior and supervisory control. Springer, Berlin
Sheridan TB (1980) Computer control and human alienation. Technol Rev 83(1):65–73
Thaler RH, Sunstein CR, Balz JP (2014) Choice architecture. In: The behavioral foundations of public policy. Princeton University Press, Princeton, NJ
Voß GMI, Schwalm M (2015) 1. Kongress der Fachgruppe Verkehrspsychologie 2015, Braunschweig
Weßel G, Altendorf E, Flemisch F (2016) Self-induced nudging in conditionally and highly automated driving. Working paper, Aachen
Winner H, Hakuli S (2006) Conduct-by-wire: following a new paradigm for driving into the future. In: Proceedings of FISITA world automotive congress, Oct 2006, vol 22, p 27
... Tesla was one of the first companies to sell vehicles with a function termed "Autopilot", which however was a partial driving automation (SAE Level 2) that still requires the driver's full attention throughout the journey. Flemisch et al. (2017; described an uncanny and unsafe valley of automation with controllability problems between an SAE-level 2 and an SAE level 3 vehicle and used the Tesla "Autopilot" as an example of a system right in the unsafe valley. Shortly after that, a Tesla Model S was also involved in the first fatal crash of an automated vehicle where the driver died while using the "Autopilot". ...
... These terms do not only describe a different geographic scope, but also a different scope in time: Tactical is the most direct control, operational control still has a focused area and time frame, but is usually not connected in real time feedback loops, while strategic has a much wider and longer scope. Flemisch et al. (2017) describe such a layered model which connects the strategic, tactical and operational perspective with the concepts of cooperative, shared and traded control, with a transversal perspective on cooperation, therefore allowing to effectively "joining the blunt with the pointy end of the spear" or forging a complex chain from society to organizations to system-of-systems, human-machine systems and individuals. ...
... Meaningful Human Control also involves a meaningful interplay of ability, authority, autonomy, control and finally accountability, respecting the double and triple binds between these concepts. Examples for this are the unsafe valley of AI and automation (Flemisch et al., 2017) or the moral crumple zone (Elish, 2019), where humans are made accountable, but do not have sufficient abilities to really control the artifact. Both effective and Meaningful Human Control, or better Meaningful Human Control over effective systems, also need good situation awareness, a not-too-high and not-too-low workload, and calibrated trust, in order to develop enough ability, which enables control. ...
... More generally, as automated driving technology becomes more reliable and Operational Design Domains (ODDs) are extended, driver misconceptions concerning the automated vehicle's capabilities are likely to increase. This could potentially lead to underestimation of the probability and consequences of an automation failure (Seppelt and Victor, 2016;Flemisch et al., 2017;Victor et al., 2018;Wagner et al., 2018;Carsten and Martens, 2019;Holländer et al., 2019). ...
... Loss of situational awareness and slow or inadequate human response in case of automation failures can often be interpreted as an excess of trust, or "overtrust" (also described as "complacency"; Muir, 1987;Parasuraman et al., 1993;Parasuraman and Riley, 1997;Lee and See, 2004;Inagaki and Itoh, 2013;Hoff and Bashir, 2015;Payre et al., 2016;Boubin et al., 2017;Flemisch et al., 2017;Noah et al., 2017;Lee et al., 2021;Lee and Ji, 2023). However, there are also situations in which users do not place enough trust in a reliable system (Muir, 1987;Parasuraman and Riley, 1997;Lee and See, 2004;Hoff and Bashir, 2015;Carsten and Martens, 2019). ...
Article
Full-text available
There is a growing body of research on trust in driving automation systems. In this paper, we seek to clarify the way trust is conceptualized, calibrated and measured taking into account issues related to specific levels of driving automation. We find that: (1) experience plays a vital role in trust calibration; (2) experience should be measured not just in terms of distance traveled, but in terms of the range of situations encountered; (3) system malfunctions and recovery from such malfunctions is a fundamental part of this experience. We summarize our findings in a framework describing the dynamics of trust calibration. We observe that methods used to quantify trust often lack objectivity, reliability, and validity, and propose a set of recommendations for researchers seeking to select suitable trust measures for their studies. In conclusion, we argue that the safe deployment of current and future automated vehicles depends on drivers developing appropriate levels of trust. Given the potentially severe consequences of miscalibrated trust, it is essential that drivers incorporate the possibility of new and unexpected driving situations in their mental models of system capabilities. It is vitally important that we develop methods that contribute to this goal.
... Application of the confidence horizon start (human) and end (automation) to the driving simulator (right) situation awareness for the driving task, especially when engaging in a non-driving related task (NDRT) [ 28]. Even in lower automation levels (automation according to SAE Level 2), despite the driver's obligation to be ready to intervene and ongoing liability for the vehicle's actions, the driver may tend to lose awareness, a mechanism described as the unsafe valley of automation [ 11]. With the confidence horizon concept, we propose to make this unsafe valley visible at least to the automation and its developers, as an option also for the driver, so that she can act accordingly. ...
Chapter
Full-text available
This chapter presents the concept of confidence horizon for cooperative vehicles. The confidence horizon is designed to let the automation predict its own and the human’s abilities to control the vehicle in the near future. Based on the pattern approach originating from Alexander et al. [1], the confidence horizon concept is instantiated with a pattern framework. In case of a necessary takeover of the driving task by the human, a mode transition pattern is initiated. In order to determine when the takeover is required, which pattern to start and when to omit the takeover attempt and directly start a minimum risk maneuver, the confidence horizon for both human and co-system is an important parameter. A visual representation of the confidence horizon for the driver in different scenarios prior to a takeover request was explored. Intermediate results of a simulator study are presented, which assess the confidence horizon in automation safety-critical takeover scenarios involving an intersection and a broken-down vehicle on a highway.
... In the field of automated driving, this issue has been discussed for several years. Examples for this are the uncanny and unsafe valley of automation, which describes the difficulty of the handover between automated driving and the human driver [17], or the explainability of automated driving systems [42]. The application of AI to the aircraft flight deck is also increasingly being discussed, as can be seen in [27]. ...
Conference Paper
Abstract This paper presents a concept for operationalizing Artificial Intelligence (AI) explainability for the Intelligent Pilot Advisory System (IPAS) as requested in the European Aviation Safety Agency’s AI Roadmap 2.0 in order to meet the requirement of Trustworthy AI. The IPAS is currently being developed to provide AI-based decision support in commercial aircraft to assist the flight crew, especially in emergency situations. The development of the IPAS is following a user-centred and exploratory design approach, with the active involvement of airline pilots in the early stages of development to iteratively tailor the system to their requirements. The concept presented in this paper aims to provide interpretability cues to achieve “operational explainability of AI”, which should enable commercial aircraft pilots to understand and adequately trust the recommendations generated by AI when making decisions in emergencies. Focus of the research was to identify initial interpretability requirements and to answer the question of what interpretation cues pilots need from the AI-based system. Based on a user study with airline pilots, four requirements for interpretation cues were formulated. These results will form the basis for the next iteration of the IPAS, where the requirements will be implemented.
... To ensure that humans can influence the outcome of actions and hence stay morally responsible and legally accountable for their actions, concepts like Meaningful Human Control (MHC) [6] need to be addressed. A main aspect in MHC is that humans need to be able to make informed decisions and do not become part of the moral crumble zone [7] or unsafe valley of highly automated systems [8], i.e. take the responsibility when automated systems fail while humans are obliged by law to supervise the automated system all the time. Such is the current situation (2023) in vehicles with partial driving automation. ...
Conference Paper
The current advancement of technology with not only physical, but increasing cognitive functions widely coined as Artificial Intelligence (AI) leads to multiple situations where humans willingly or unwillingly accept the decisions of algorithms and Neural Network (NN) Models resulting in humans being decreasingly involved in the decision-making process. Human-machine cooperation and Human Autonomy Teaming (HAT) are an answer to this problem, joining the best of both sides into joined cognitive systems. Meaningful Human Control (MHC) is the concept of ensuring that humans have enough influence on the outcome of actions in HAT and hence stay morally responsible and legally accountable for their actions. This paper extends the concept of using an interaction mediator that facilitates the interaction between an automated and a human agent with interaction patterns to maintain MHC. The concept is applicable in almost any domain, and will be shown with the example of guiding a highly automated vehicle.
... First, the acceptance of autonomous vehicles can depend on the characteristics of the AI involved as well as the service context. Regarding the level of AI autonomy, while there are reports about negative emotions and cognitions for partial automation of vehicles (Mintel, 2019), full automation may induce even stronger negative responses (Flemisch et al., 2017). Second, individuals are much more involved in deciding about autonomous vehicle services that focus on transporting people, especially themselves or their loved ones, than goods (e.g., food delivery by robots). ...
Article
Amidst rising interest in autonomous vehicle services, extant literature reveals a paucity of research examining: 1) both cognitive and emotional evaluations; 2) characteristics of the service context (e.g. risk) and artificial intelligence (e.g. autonomy); and 3) heterogenous outcomes. Moreover, there are mixed findings on autonomous vehicle adoption/resistance. To address these gaps, we develop the Customer Responses to Unmanned Intelligent-transport Services based on Emotions and Cognitions (CRUISE-C) framework by extending the earlier CRUISE framework and building on the Elaboration Likelihood Model. The framework further delineates four segments, which differ in cognitive and emotional evaluations of fully autonomous vehicle services. We test CRUISE-C using three experimental studies. Study 1 shows that the resistant segments consider fully autonomous (vs. regular) vehicle services to be more vulnerable, and less reliable and convenient. Study 2 shows that a service failure involving fully autonomous (vs. regular) vehicles does not increase negative emotions in any segment, but attenuates perceived severity in Segment 1 ("Avoiders") and slightly amplifies perceived severity in Segment 4 ("Aficionados"). In Study 3, a machine learning model reveals segment membership as the strongest predictor of individuals' readiness to adopt autonomous vehicles, followed by reliance on taxis, female vs. male, and cognitive and emotional evaluations.
... As a result, there is a trade-off between immersiveness (with tangible inputs) and responsiveness to design changes (with intangible inputs), leading to the need for multi-modal interaction with different I/O devices, both tangible and intangible [31]. This is especially relevant in CPPS, whose complexity continuously increases due to rising automation and interconnectivity [33] as well as machine-learning-based decisions [31]. For a further overview of research on human-systems integration, see Boy (2020) [34]. ...
Article
Human cyber-physical production systems (HCPS) - as an extension of cyber-physical production systems (CPPS) - focus on the human being in the system and the development of socio-technical systems. Humans, and therefore anthropogenic behavior, have to be integrated into the manufacturing system and its processes. Due to the shift towards customized products, CPPS require high reconfigurability, flexibility and individual manufacturing processes. As a result, the role of and the requirements for humans are fundamentally changing, creating the necessity for new interfaces to interact with machines within reconfigurable manufacturing processes. Scalable human-machine interfaces (HMI) are needed that incorporate emerging technologies and allow the operator to remain mobile while interacting with different machines. Therefore, new approaches for HMI in the context of CPPS and HCPS are needed that are simultaneously sufficiently mobile, scalable, and modular as well as human-centered. High mobility of HMIs can be ensured by using the 5G communication standard, which enables wireless migration of computational resources to an edge server with high reliability, low latency, and high data rates. This paper develops, implements, and evaluates a 5G-based framework for highly mobile, scalable HMI in CPPS by utilizing the new 5G communication technology. The capabilities of the framework and of the new communication standard are demonstrated and evaluated for a use case in which a brain-computer interface (BCI) is used to control a robot arm. For better accuracy, the BCI is supported by an eye tracker, visual feedback is received via an augmented reality environment, and all devices are embedded via 5G communication. In particular, the influence of 5G communication on the system performance is examined, evaluated, and discussed. For this purpose, experiments are designed and conducted with different network configurations.
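The paper does not publish its fusion logic, but the described combination of a noisy BCI decoder with an eye tracker can be sketched generically. Everything below is an assumption for illustration: the function name, the dwell-time prior, and the decision threshold tau are invented, and a real system would operate on streaming data rather than single snapshots.

```python
import numpy as np

def fuse_bci_with_gaze(bci_probs: np.ndarray,
                       gaze_dwell_s: np.ndarray,
                       tau: float = 0.5) -> int:
    """Combine BCI class probabilities with normalized gaze dwell
    time per candidate target; the eye tracker acts as a soft prior
    that sharpens the noisy BCI estimate."""
    gaze_prior = gaze_dwell_s / gaze_dwell_s.sum()
    posterior = bci_probs * gaze_prior
    posterior /= posterior.sum()
    target = int(np.argmax(posterior))
    # Only issue a robot command when the fused estimate is decisive.
    return target if posterior[target] >= tau else -1

# Example: three grasp targets; the BCI slightly favors target 1 and
# gaze strongly favors target 1 -> command target 1.
print(fuse_bci_with_gaze(np.array([0.3, 0.4, 0.3]),
                         np.array([0.2, 1.5, 0.3])))  # -> 1
```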
Article
Automated Driving Systems (ADS) aim to improve traffic efficiency and safety; however, these systems are not yet capable of handling all driving tasks under all road conditions. The role of the human driver remains crucial for taking over control if an ADS fails or reaches its operational limits. Takeover performance of human drivers in authority transitions is typically assessed by means of the takeover time (TOT) within an available time budget (TB). This approach assumes a uniform perception and reaction time of human drivers in ADS disengagements and does not include the time needed to execute the actual driving maneuver required to ensure safety. This paper aims to develop and test a set of new indicators that reflect takeover performance and its safety attributes, namely the 'time to control' (TC) and the 'safe time budget' (STB), in which the actual task execution (i.e. braking) time is taken into account in addition to the perception and reaction time. It also proposes new thresholds for identifying critical conflicts in takeover situations and assessing the safety of authority transitions. A traffic simulation experimental setup with mixed traffic of conventional vehicles and ACC/CACC platoons is used to test these indicators and thresholds. The results suggest that the time difference between TC and STB is a more sensitive and potentially more realistic safety indicator, as it may capture the variability of driver behavior in takeovers and identify critical conflicts, as well as virtual crashes, that would not have been identified by the previously used indicators (TOT and TB). Takeover performance worsens when the speed difference of the vehicles involved is higher and the initial speed of the rear vehicle is higher. These findings can be useful towards a more dynamic design of takeover request strategies.
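The core of the proposed indicators reduces to a simple time comparison: TC extends the classic takeover time by the maneuver execution time, and a takeover is critical when TC exceeds STB. The sketch below illustrates that arithmetic; the function name and the numbers are invented for illustration, not taken from the paper.

```python
def takeover_safety_margin(t_perceive: float, t_react: float,
                           t_execute: float,
                           safe_time_budget: float) -> float:
    """Time to control (TC) extends the classic takeover time by the
    execution time of the evasive maneuver (e.g. braking); the margin
    against the safe time budget (STB) flags critical takeovers."""
    time_to_control = t_perceive + t_react + t_execute
    return safe_time_budget - time_to_control

# Illustrative numbers (assumed): 1.0 s perception + 1.5 s reaction
# + 2.0 s braking against a 4.0 s budget yields a negative margin,
# i.e. a critical conflict that TOT vs. TB alone would miss.
print(takeover_safety_margin(1.0, 1.5, 2.0, 4.0))  # -> -0.5
```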
Book
The significant technical advances of recent years in the development of ever more capable assistance systems in the driving context, which have meanwhile even led to the first approval of an SAE Level 3 system, have brought (partial) automation into everyday road traffic. These systems, which only a few years ago would have been considered science fiction, are nevertheless still far from making the human redundant in vehicle guidance. With partially automated driving systems (from SAE Level 1), the driver remains responsible for the driving task at all times and must continuously monitor it. Automated driving systems (from SAE Level 3) also still need the driver as a fallback level and, as of today, will not be able to handle all use cases autonomously in the near future. Especially when the human is to serve as the fallback level, it is essential to know whether the driver is actually able to safely take over the driving task again. This dissertation addresses that challenge.
Article
As the first automated driving functions are finding their way into series-production vehicles, the focus of research and development has shifted from purely automated capabilities to cooperative systems, i.e. cooperation between vehicles, and between the vehicle automation and the driver. Especially in partially and highly automated cooperative driving, the driver should be able to take over the driving task or adapt the driving behavior. This paper presents the pattern approach to cooperation as a method to recognize and solve recurring problems. As an example, the pattern approach is applied to the use case of a takeover request on a highway. The concept of Confidence Horizons, which balance the capabilities of the driver and the automation based on cooperative interaction patterns, is introduced. To estimate the human capabilities for this Confidence Horizon, a Diagnostic Takeover Request is used, in which the automation analyzes the driver's orientation reaction to a takeover request. This allows the early detection of potentially unsafe takeovers, reducing possible transitions to a Minimum Risk Maneuver (MRM).
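The escalation logic described here is essentially a small state machine: probe the driver with a diagnostic takeover request, hand over only if the orientation reaction is adequate within the confidence horizon, and otherwise fall back to an MRM. The following toy sketch is one possible reading, not the authors' implementation; the state names, the driver_oriented flag, and the threshold logic are all assumptions.

```python
from enum import Enum, auto

class TakeoverState(Enum):
    AUTOMATED = auto()
    DIAGNOSTIC_TOR = auto()   # diagnostic takeover request issued
    HANDOVER = auto()
    MRM = auto()              # minimum risk maneuver

def step(state: TakeoverState, driver_oriented: bool,
         time_left_s: float, horizon_s: float) -> TakeoverState:
    """Toy transition logic: the automation first probes the driver;
    only if the orientation reaction is adequate while enough time
    remains within the confidence horizon is control handed over,
    otherwise the automation escalates to an MRM."""
    if state is TakeoverState.AUTOMATED:
        return TakeoverState.DIAGNOSTIC_TOR
    if state is TakeoverState.DIAGNOSTIC_TOR:
        if driver_oriented and time_left_s > horizon_s:
            return TakeoverState.HANDOVER
        if time_left_s <= horizon_s:
            return TakeoverState.MRM
    return state

# Example: the driver never orients, the horizon is reached -> MRM.
s = TakeoverState.AUTOMATED
s = step(s, driver_oriented=False, time_left_s=6.0, horizon_s=4.0)
s = step(s, driver_oriented=False, time_left_s=3.0, horizon_s=4.0)
print(s)  # -> TakeoverState.MRM
```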
Conference Paper
Abstract: Digitalization and connectivity enable ever more capable assistance and automation systems, e.g. in aviation, in production or for automobiles, which, however, can also bring disadvantages, e.g. in the interplay with the human. In robotics, a so-called uncanny valley is known, which describes that robots with a high but not perfect similarity and capability compared to, e.g., humans are perceived as uncanny and unsafe. For vehicle automation, a comparable metaphorical design correlation is emerging, with a zone between partially and highly automated levels of automation in which misperceptions and losses of safety can occur. This contribution sketches the correlation, summarizes first confirming studies, and outlines the design space.
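The valley metaphor can be made concrete with a purely notional curve of perceived safety over the degree of automation. The sketch below is illustrative only: the functional form, the dip around the partially-to-highly automated transition, and all constants are invented to show the qualitative shape, not fitted to any study.

```python
import numpy as np

def perceived_safety(automation_level: np.ndarray) -> np.ndarray:
    """Notional curve: perceived safety rises with the automation
    level, dips in an unsafe valley between partial and high
    automation (the misperception zone), and recovers toward full
    automation. Shape and numbers are purely illustrative."""
    rising = automation_level
    valley = 0.35 * np.exp(-((automation_level - 0.65) ** 2) / 0.01)
    return np.clip(rising - valley, 0.0, 1.0)

levels = np.linspace(0.0, 1.0, 11)
for lvl, s in zip(levels, perceived_safety(levels)):
    print(f"automation {lvl:.1f} -> perceived safety {s:.2f}")
```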
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Conference Paper
In this contribution, assistance, active safety and automation in motor vehicles are understood as parts of an overall development toward highly automated driving. As one possible piece of this puzzle, an intuitively understandable description of highly automated vehicles in the form of a design metaphor (H-Metaphor), based on a natural role model, is sketched. As a further piece of the puzzle, the activities toward a consistent, haptic-multimodal interaction language for highly automated vehicles (H-Mode) are described. Further pieces of the overall puzzle are distributed across various manufacturers, research institutions and authorities, but they can only lead to a meaningful result when integrated, and that result will not necessarily be the driverless vehicle.
Chapter
With increasing technical possibilities in the area of assistance and automation, diverse challenges, risks, and opportunities arise in the design of assisted, partially and fully automated driving. One of the greatest challenges is to integrate and offer a multitude of complex technical functions in such a way that the human driver intuitively understands them as a cohesive, cooperative system. A solution to this problem can be found in the H-Mode. It is inspired by the role model of horse and rider and offers an integrated haptic-multimodal user interface for all kinds of movement control. The H-Mode, as presented in this chapter, has been designed for ground vehicles and includes several comfort and safety systems on three assistance and automation levels, which can be interchanged fluidly.
Thesis
The increasing reliability of automated systems has reduced a large number of potential error sources in human-machine interaction in recent years. At the same time, however, new risks have emerged: precisely with highly reliable automated systems there is a danger of excessive trust in the system, with complacency and automation bias as possible consequences at the behavioral level. Complacency originates from the context of classic monitoring tasks and denotes insufficient monitoring or verification of the automation, which can lead to critical system states being overlooked. The concept of automation bias, in contrast, stems from the context of decision-support systems and covers two different error types: while commission errors consist of an operator following an erroneous recommendation of an assistance system, omission errors manifest themselves in critical system states being overlooked when they are not indicated by the assistance system. Despite the close conceptual proximity of complacency and automation bias, the two phenomena have so far only been investigated empirically in isolation from one another. The central aim of this thesis was to overcome methodological weaknesses of previous studies and to contribute to an empirically grounded clarification of the concepts of complacency and automation bias and of their relation to each other. In addition, possible countermeasures in the form of automation failures during training were investigated. Two experimental studies (each N = 24) were conducted using a micro-world as the experimental environment, in which participants were supported by an assistance system in detecting, diagnosing and resolving malfunctions of a process control task. The first study examined the influence of false diagnoses during training, while the second experiment focused on the influence of system failures. The results of these two studies permit the following conclusions: 1) Complacency, in the sense of insufficient monitoring, is a cause of omission errors and, in the sense of insufficient verification, a contributing factor to the emergence of commission errors. 2) Experiencing automation failures during training reduces the occurrence of complacency but does not prevent it entirely. 3) Automation failures have specific effects: while false diagnoses during training lead to improved verification of the diagnosis function of the assistance system, monitoring of the alarm function remains unaffected; conversely, failures of the assistance system during training later lead to increased monitoring of the alarm function, but not to improved verification of diagnoses. This must be taken into account in the design of training measures.
Chapter
This paper presents a new design and function allocation philosophy between pilots and automation that seeks to support the human in mitigating innate weaknesses (e.g., memory, vigilance) while enhancing their strengths (e.g., adaptability, resourcefulness). In this new allocation strategy, called Synergistic Allocation of Flight Expertise in the Flight Deck (SAFEdeck), the automation and the human provide complementary support and backup for each other. The automation is designed to be compliant with the practices of Crew Resource Management. The human takes a more active role in the normal operation of the aircraft without an adverse increase in workload over the current automation paradigm. This designed involvement encourages the pilot to be engaged and ready to respond to unexpected situations. As such, the human may be less prone to error than under the current automation paradigm.
Chapter
The technological feasibility of more and more assistance systems and automation in vehicles leads to the necessity of better integration and cooperation with the driver and with other traffic participants. This chapter describes an integrated cooperative guidance of vehicles including assisted, partially automated, and highly automated modes. Starting with the basic concepts and philosophy, the design space, parallel and serial aspects, the connections between abilities, authority, autonomy, control, and responsibility, as well as vertical versus horizontal and centralized versus decentralized cooperation are discussed, before two follow-on chapters on H-Mode and Conduct-by-Wire describe instantiations of cooperative guidance and control.
Article
BACKGROUND: Anthropomorphism is the attribution of human form or behavior to non-human agents. Its application in a robot increases occupational safety and user acceptance and reduces the mental effort needed to anticipate robot behavior. OBJECTIVE: The research question focuses on how the anthropomorphic trajectory and velocity profile of a virtual gantry robot affect the predictability of its behavior in a placement task. METHODS: To investigate the research question, we developed a virtual environment consisting of a robotized assembly cell. The robot was given human movements, acquired through an infrared-based motion capture system. The experiment compared anthropomorphic and constant velocity profiles. The trajectories were based on human movements of the hand-arm system. The task of the participants was to predict the target position of the placing movement as accurately and quickly as possible. RESULTS: The results show that the anthropomorphic velocity profile leads to a significantly shorter prediction time (α = 0.05). Moreover, the error rate and the mental effort were significantly lower for the anthropomorphic velocity profile. Based on these findings, a speed-accuracy trade-off can be excluded. CONCLUSIONS: Participants were able to estimate and predict the target position of the presented movement significantly faster and more accurately when the robot was controlled by the human-like velocity profile.
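The abstract does not specify the mathematical form of the anthropomorphic profile, but a standard model of human point-to-point movement is the minimum-jerk profile of Flash and Hogan (1985), whose bell-shaped velocity curve is the kind of human-like profile such a study would contrast with a constant one. The sketch below shows that contrast under this assumption; the function names and example values are invented.

```python
import numpy as np

def minimum_jerk_velocity(distance: float, duration: float,
                          n: int = 6) -> np.ndarray:
    """Bell-shaped velocity of a minimum-jerk point-to-point movement:
    v(t) = (d/T) * 30 * (s^2 - 2*s^3 + s^4) with s = t/T. Velocity
    starts and ends at rest and peaks at mid-movement, unlike a
    constant profile."""
    s = np.linspace(0.0, 1.0, n)
    return (distance / duration) * 30 * (s**2 - 2 * s**3 + s**4)

def constant_velocity(distance: float, duration: float,
                      n: int = 6) -> np.ndarray:
    """Flat profile of the kind the experiment compared against."""
    return np.full(n, distance / duration)

# A 0.4 m placing movement over 1.0 s: the human-like profile's
# early acceleration pattern is what lets observers extrapolate
# the target position sooner.
print(minimum_jerk_velocity(0.4, 1.0).round(3))
print(constant_velocity(0.4, 1.0).round(3))
```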