Uncanny and Unsafe Valley of Assistance and Automation: First Sketch and Application to Vehicle Automation
Frank Flemisch, Eugen Altendorf, Yigiterkut Canpolat, Gina Weßel,
Marcel Baltzer, Daniel Lopez, Nicolas Daniel Herzberger, Gudrun
Mechthild Irmgard Voß, Maximilian Schwalm, and Paul Schutte
Abstract
Progress in sensors, computing power and connectivity makes it possible to build and operate more and more powerful assistance and automation systems, e.g. in aviation, cars and manufacturing. Besides many benefits, new problems occur, e.g. in human-machine interaction. For automation, e.g. vehicle automation, a design correlation comparable to the metaphorical uncanny valley of robotics can be postulated: an unsafe valley, e.g. between partially- and highly-automated levels, in which misperceptions can lead to a loss of safety. This contribution sketches the concept of the (uncanny and) unsafe valley of automation, summarizes early affirmative studies, gives first hints towards an explanation of the valley, and outlines the design space for securing the borders of the valley and for bridging it.
F. Flemisch (*)
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany
e-mail: frank.flemisch@fkie.fraunhofer.de
E. Altendorf • Y. Canpolat • G. Weßel
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany
M. Baltzer • D. Lopez
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany
N.D. Herzberger • G.M.I. Voß • M. Schwalm
ika Institut für Kraftfahrzeuge, RWTH Aachen, Aachen, Germany
P. Schutte
Aviation Development Directorate AMRDEC/RDECOM, US Army, Brooklyn, NY, USA
© Springer-Verlag GmbH Germany 2017
C.M. Schlick et al. (eds.), Advances in Ergonomic Design of Systems, Products and Processes, DOI 10.1007/978-3-662-53305-5_23
Keywords
Automation • Assistance • Robotics • Human-machine systems • Uncanny unsafe valley
1 Introduction: Assistance, Automation and Robotics
Enabled by technical advancements in the field of sensors, computers and connectivity, as well as motivated by cost pressure along with ever-increasing performance requirements, the complexity of information systems has steadily grown in the last decades (cf. Hollnagel 2007). A part of this complexity can be compensated with assistance systems and automation; however, unwanted side effects such as "operator/pilot out of the loop" or "mode confusion" (cf. Billings 1997) are reported in a variety of domains like aviation, nuclear power plants and the automotive domain. Rather than speaking of over-automation in an undifferentiated manner, Norman (1990) points out that the problem is not over-automation itself, but inappropriate feedback and interaction.
In robotics, which can be considered a specific form of automation, there is a concept known as "The Uncanny Valley": robots showing a high, yet imperfect, similarity to humans are perceived by humans as uncanny and disconcerting (Mori 1970; Mori et al. 2012). Conscious of the Uncanny Valley, research and development in robotics is focusing on cooperative robotics, where humans and highly automated robots work together in the same work spaces to a certain extent, instead of fully-automated robots (cf. Mayer 2012; Kuz et al. 2015).
A similar development regarding cooperative assistance and automation is currently emerging in the area of ground vehicles, ensuing from the aviation domain (e.g., Flemisch and Onken 1998; Schutte 1999; Goodrich et al. 2006). It became increasingly clear, through basic concepts such as Levels of Automation, that assistance and automation systems are related and (a) should be discussed holistically and (b) could be depicted on a scale, that is, a spectrum of assistance and automation (cf. Flemisch et al. 2003, 2008). This point of view was later applied in the standard categorization of vehicle automation (cf. BASt 2012a; SAE 2014), which differentiates between assisted, partially- and highly-automated systems.
Figure 1 shows a simplified scale of assistance and automation related to the control distribution between the human and the automation in the assistance- and automation-levels, including manual, assisted, partially-, highly- and fully-automated/autonomous.

Fig. 1 Control distribution between the human and automation represented as an assistance- and automation-scale, here with explicit automation-levels/modes (inspired by Sheridan 1980; Flemisch et al. 2003, 2008, 2012, 2014, 2015a, b; Gasser et al. 2012b; SAE 2014) (figure: a scale ranging from manual over assisted, partially automated and conditionally/highly automated to fully automated)
A possible unsafe valley of automation can be found in the right half of the scale
between partly- and highly-automated, which could be rather uncanny for the user,
and more importantly, rather unsafe, as described further down.
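As an illustration, the scale of Fig. 1 can be modeled as an ordered enumeration with a control share per level; a minimal sketch in Python, in which the numeric shares are purely illustrative assumptions and not values from this chapter:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The assistance- and automation-scale of Fig. 1 as an ordered enum."""
    MANUAL = 0
    ASSISTED = 1
    PARTIALLY_AUTOMATED = 2
    HIGHLY_AUTOMATED = 3  # "conditionally/highly automated"
    FULLY_AUTOMATED = 4

# Assumed, purely illustrative share of control held by the automation.
AUTOMATION_CONTROL_SHARE = {
    AutomationLevel.MANUAL: 0.0,
    AutomationLevel.ASSISTED: 0.25,
    AutomationLevel.PARTIALLY_AUTOMATED: 0.5,
    AutomationLevel.HIGHLY_AUTOMATED: 0.75,
    AutomationLevel.FULLY_AUTOMATED: 1.0,
}

def human_control_share(level: AutomationLevel) -> float:
    """Control is distributed between human and automation; shares sum to 1."""
    return 1.0 - AUTOMATION_CONTROL_SHARE[level]
```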
2 Early Indicators for the Existence of an Unsafe Valley
There is a good chance that the metaphor of an (uncanny and) unsafe valley can be applied to automation in all kinds of domains. Early systematic explorations within the area of partly- and highly-automated vehicle control have been conducted for air and ground vehicles since 2003 (NASA H-Mode) and, specifically for ground vehicles, since 2004 as part of the DFG H(orse)-Mode projects.
These were inspired by the H-metaphor, a design metaphor that takes the rider-horse interaction as a blueprint for vehicle automation (Flemisch et al. 2003, 2015a, b; Bengler and Flemisch 2011; Altendorf et al. 2015). This initial base research sparked a series of national and EU projects, introduced the term highly-automated driving (e.g. Flemisch et al. 2006; Hoeger et al. 2008, 2011) and inspired the development of partially-automated "piloted" driving, e.g. by Volkswagen, Audi and Mercedes, and the more chauffeur-inspired Tesla.
At the beginning of this research and development in 2000, it was debated whether one or multiple modes between the assisted and fully-automated levels would be advisable and how they should be designed, especially regarding the degree of involvement of the operators (here, the drivers), the extent of the automation's intervention, and the required safety measures, e.g. operator monitoring.
With accumulating research, it became clear that there are combinations of partially- as well as highly-automated modes that are functional, while other implementations are not. An example of a well-functioning implementation of a lower automation level in the car domain is presented by Ma and Kaber (2005), whose implementation supports only longitudinal control. They revealed that an Adaptive Cruise Control (ACC) system is able to enhance system performance in terms of lane deviations and speed control in tracking a lead vehicle, and to increase drivers' situation awareness, even when drivers are distracted by a non-driving-related task.
In the first decade of this century, research explored whether assistive (i.e., not fully autonomous) solutions beyond ACC, with coupled longitudinal and lateral control, could be successful. Starting from the H-Mode projects, a series of studies (e.g. Schieben and Flemisch 2008; Schieben et al. 2008; Petermann and Schlag 2009) has shown that designs differentiating between partly- and highly-automated systems and using well-arranged transitions can succeed. Rauch et al. (2010) demonstrated that an integrated driver-state detection may improve the safety and acceptance of automation systems.
Furthermore, Merat et al. (2012) presented a safe implementation of a highly-automated vehicle, which takes over longitudinal and lateral control and can perform gentle maneuvers itself. They conducted an experiment in which the automation had complete control of the automobile in normal situations, while drivers were warned when approaching an obstacle and then had to resume manual control. The results showed that, in their implementation of highly-automated driving, no negative effects on system performance emerged.
Besides these successful examples, there are clear hints that between well-functioning modes and their variants there exist areas which are clearly less safe: For example, simulator-based studies conducted at DLR showed that partially-automated driving designs in which drivers no longer have to apply steering torques can cause problems in compensating system failures. However, when the driver still needed to apply some of the required steering torques, these failures could be absorbed (Schieben and Flemisch 2008). Furthermore, additional studies (e.g., Damböck 2013; Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and Schwalm 2015) have shown that highly automated designs can lead to reduced take-over capabilities of drivers. This effect is evidenced, e.g., by a lack of compensatory reaction: drivers do not reduce their activity in non-driving-related tasks, as would be needed for an appropriate preparation of a take-over situation (Voß and Schwalm 2015; Schwalm et al. 2015; Schwalm and Ladwig 2015). Moreover, a real-world indicator for the existence of an uncanny/unsafe valley might be the first fatal crash of an automated ground vehicle, a partially automated Tesla, which became public in 2016, shortly after the first publication of the unsafe valley in February 2016 (Flemisch et al. 2016a).
As of 2016, everyday and user experience with highly automated ground vehicles is sparse; however, additional indicators for the existence of an unsafe valley can be derived from the aviation domain. As the applicability of the H-Mode has shown, ergonomic principles for cooperative guidance and control can be applied to both domains. Moreover, Schutte et al. (2016) argue for a similar phenomenon in the aviation area. They present examples of systemic failure resulting from highly automated flying (National Transportation Safety Board 2010; BEA 2012) and argue that these incidents were partly due to bad system design instead of pilot failure alone. They presented an alternative cockpit design which keeps the pilot in the loop; in terms of the unsafe valley, it avoids falling into the valley by staying on the left side of the abyss. During scientific discussion in the context of this publication it turned out that other researchers already had similar ideas to the valley of automation. Maurer (2016), e.g., describes a U-shaped curve for the relation between user transparency and driver assistance systems. Furthermore, Maurer (2016) and Eckstein (2016) describe the "grand canyon" of driver-assistance systems, which refers especially to the differences in effort and protection systems between partially and highly automated vehicles.
3 A First Glimpse: What Happens in an Unsafe Valley?
Are there scientific phenomena that could determine the existence and the characteristics of such an unsafe valley? Figure 2 pictures an example of a potential "crash" into an uncanny valley, here applied to the car domain. Within that example, a system failure with a subsequent take-over occurs in front of a narrow curve. In one condition (here named partially-/highly-automated, because the system was supposed to be operated as partially automated but drivers used it the way highly automated systems are used) this leads to an unsafe driving performance, i.e. a "crash" into an uncanny valley. This does not occur in the other condition (partially-automated). This suggests that there might be a correlation between the control repartition, which ranges from manual to fully automated driving, and an output quantity such as performance or driving safety. Figure 2 conceptualizes this issue: While a mode with a low degree of automation (M3.1: e.g., partly-automated) still provides sufficient safety, a higher automation level (M3.2) seems to be rather unsafe. However, past this unsafe valley, an even higher automation level (M4) could be safe again.
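The qualitative relation of Fig. 2 can be made explicit in a small sketch; the safety values below are invented solely to exhibit the dip of the valley and are not measured data:

```python
# Invented, qualitative safety values per mode of Fig. 2; the dip at M3.2
# represents the hypothesized uncanny/unsafe valley.
SAFETY_BY_MODE = {
    "M1 (manual)": 0.7,
    "M3.1 (partially automated)": 0.8,
    "M3.2 (partially-/highly automated)": 0.4,  # used like a highly automated system
    "M4 (highly automated)": 0.9,
}

def modes_in_valley(safety_by_mode: dict, threshold: float = 0.6) -> list:
    """Return the modes whose (assumed) safety falls below a tolerable threshold."""
    return [mode for mode, safety in safety_by_mode.items() if safety < threshold]

print(modes_in_valley(SAFETY_BY_MODE))  # -> ['M3.2 (partially-/highly automated)']
```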
An important element, which could lead to the safety drop in M3.2, is the connection between the automation performance and the operator's performance in case of a take-over situation: A potential correlation between the operator's involvement and his or her take-over capability could be that, if the former is too low, the take-over will not happen in time. A nominally highly reliable and capable automation that is only infrequently insufficient and incapable will induce a pure monitoring role for the operator, i.e. "supervisory control", for which humans are not well prepared (e.g., Sheridan 1976; Endsley 1995). A pertinent approach from Schwalm et al. (e.g., Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and Schwalm 2015) postulates that operators, here drivers, abandon a continuous control and regulative process in case of a (too) high automation level. Due to this, they supposedly are no longer capable of applying regulatory measures in terms of a functional situation management. If instead the drivers are fully involved in the driving task, i.e. manual driving, this assumed regulatory process
Fig. 2 An example of a "crash" in an Uncanny Valley: Intercepting a system failure prior to a curve (green: system variant partially-automated; red: system variant partially-/highly automated) (figure: level of automation vs. actual safety (perceived safety? trust?), with modes M1, M3.1, M3.2 and M4, driving direction, automation failure, and the uncanny/unsafe valley)
would allow them to analyze and anticipate the driving situation, and to adequately
distribute available cognitive resources.
In order to conceptualize this idea of a reduced driver's involvement as a risk factor for the appearance of an unsafe valley, a schema was developed for the context of driving, in which the different automation levels were combined with possible states of the driver's involvement (cf. Fig. 3; Herzberger et al. 2016). These states depict, for each level of automation, the minimal requirements regarding the driver's involvement that are necessary to guarantee a safe driving performance. These minimum requirements are reported separately for the three levels of the driving task [navigation, guidance and stabilization; see Donges (2015)] in order to provide a more detailed sketch of the driver-system interaction. In Fig. 3, the orange shaded area represents driver-sided task fulfillment, while the blue shaded area represents system-sided task fulfillment (Herzberger et al. 2016).
Fig. 3 Minimal requirements for driver states. Note: The crosshatched area depicts the unsafe valley. (Figure reconstructed as a table; per cell, the first value is the minimal requirement for driver involvement, the second the potential distribution of control to the machine.)

BASt level           SAE | Navigation | Guidance | Stabilisation
Driver Only           0  |   F / -    |   F / -  |    F / -
Assisted              1  |   F / -    |  I1 / A  |   I1 / A
Partially automated   2  |   F / -    |  I2 / L  |   I2 / L
Highly automated      3  |   F / -    |   R / L  |    R / L
Fully automated       4  |   - / F    |   - / F  |    - / F
-                     5  |   - / F    |   - / F  |    - / F

In total, five driver states were defined on the basis of the SAE Levels (SAE International 2014). In the following, these five driver states will be described regarding the required extent of involvement, presented from high to low. The state Fully (F) requires an active performance of the entire driving task. The second state, Involvement (I), is divided into Involvement 1 (I1) and Involvement 2 (I2). State I1
still requires an active performance, either longitudinal or lateral control, and a
cognitive monitoring of the entire driving task. Moreover, drivers in state I1 have to
be ready to take over the driving task at any time without a take-over request
(TOR). In contrast to I1, drivers in the state of I2 do not actively perform a driving
task. However, they are still required to perform the cognitive monitoring, and they also have to be ready to take over the driving task at any time without a TOR. In the state of Retrievable (R), a cognitive monitoring of the driving task is no longer required. Nevertheless, the driver has to be able to resume the driving task after an appropriate TOR. In the final state, Non-Retrievable (NR), the driver cannot take over control of the vehicle. This state is reached if the driver, e.g., sleeps, or if a transition from Retrievable to a higher level of involvement is not possible. These five states only define the minimal requirements for a driver's involvement for the safe management of the driving task. Requirements for a possible intervention by the driver are not yet considered.
Two different vehicle conditions were specified: Firstly, in the condition of assisted driving (A), the vehicle takes over either longitudinal or lateral control during the driving task. Secondly, the condition of lead driving (L) posits that the vehicle controls all longitudinal and lateral actions to a full extent. On top of that, the earlier introduced driver state Fully (F) can also be transferred to the vehicle: it then requires an active performance of the entire driving task by the vehicle.
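This schema can be encoded as a small lookup structure; a minimal sketch following the reconstruction of Fig. 3 above, in which the encoding and naming are illustrative and not taken from Herzberger et al. (2016):

```python
from enum import Enum

class DriverState(Enum):
    F = "Fully"             # active performance of the entire driving task
    I1 = "Involvement 1"    # active longitudinal or lateral control plus monitoring
    I2 = "Involvement 2"    # no active control, but monitoring; take-over without TOR
    R = "Retrievable"       # no monitoring; resumes after an appropriate TOR
    NR = "Non-Retrievable"  # cannot take over, e.g. asleep

# Minimal required driver state per SAE level and driving-task level
# (navigation, guidance, stabilization) following Fig. 3; None = no requirement.
MINIMAL_DRIVER_STATE = {
    0: {"navigation": DriverState.F, "guidance": DriverState.F,  "stabilization": DriverState.F},
    1: {"navigation": DriverState.F, "guidance": DriverState.I1, "stabilization": DriverState.I1},
    2: {"navigation": DriverState.F, "guidance": DriverState.I2, "stabilization": DriverState.I2},
    3: {"navigation": DriverState.F, "guidance": DriverState.R,  "stabilization": DriverState.R},
    4: {"navigation": None,          "guidance": None,           "stabilization": None},
    5: {"navigation": None,          "guidance": None,           "stabilization": None},
}
```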
Based on this concept of states of the driver's involvement in the context of automated driving, it is possible to derive an explanation for a "crash" into the uncanny/unsafe valley: For drivers, the SAE Levels 2 and 3 seem to pose the same requirements in normal automated driving, as both lateral and longitudinal guidance are carried out by the system. Nevertheless, the actual requirements that a Level 2 system poses to the driver are clearly higher, due to the missing TOR in case of a system failure. In Level 2, drivers have to constantly monitor the driving task (I2), while in Level 3 this is not required (R). In other words, drivers tend to confuse a Level 2 (I2) state with a Level 3 (R) state (cf. orange crosshatched area in Fig. 3). Only when system limits are reached do the differences between the two (required) states of involvement appear, and the driver potentially crashes into the uncanny/unsafe valley.
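Continuing the sketch above, the confusion of Level 2 with Level 3 can be expressed as a simple requirement check; the ordering of involvement (F above I1 above I2 above R above NR) is an assumption implied by the description of the states:

```python
# Assumed ordering of involvement, from highest to lowest.
INVOLVEMENT_RANK = {DriverState.F: 4, DriverState.I1: 3, DriverState.I2: 2,
                    DriverState.R: 1, DriverState.NR: 0}

def in_unsafe_valley(sae_level: int, actual_state: DriverState) -> bool:
    """True if the driver's actual involvement falls below the minimal
    requirement of the current SAE level (checked here for guidance)."""
    required = MINIMAL_DRIVER_STATE[sae_level]["guidance"]
    if required is None:  # fully automated: no driver requirement
        return False
    return INVOLVEMENT_RANK[actual_state] < INVOLVEMENT_RANK[required]

assert in_unsafe_valley(2, DriverState.R)      # Level 2 used like Level 3: unsafe
assert not in_unsafe_valley(3, DriverState.R)  # Level 3 with state R: acceptable
```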
Another advantage of the classification of driver states is the timely recognition of a decreasing driver involvement. If this risk is recognized, possible consequences can be rebalanced and/or reduced. In the following, the temporal course of a potential "crash" into an uncanny/unsafe valley is presented.
Figure 4 highlights the temporal course of a "crash" into an unsafe valley: At first, everything works properly, because the automation's capabilities are sufficient for handling the vehicle control situation and the operator is still able to take over (T1). Over time, habituation effects can emerge (T2). If a system failure then occurs, the human operator's take-over capability would be insufficient for the safe management of the required take-over maneuver (T3). Within that scenario, the unsafe valley metaphorically resembles a crevasse that seems to be crossable over a stable bridge. This bridge, however, will break down as soon as it is used and will devour the all too trustful operator. Another possible metaphor depicts a pair of scissors: One blade represents the operator's take-over capabilities, the other the automation's availability. The first blade slowly closes as the operator's take-over capability decreases with increasing trust. As soon as the automation availability decreases temporarily, the movement of the two blades cuts the operator off from the control of the process, e.g., of the vehicle.
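The scissors metaphor can also be turned into a toy simulation; all dynamics and numbers below are assumptions chosen only to illustrate the closing blades, not an empirical model:

```python
def scissors(steps: int = 100, capability: float = 1.0, decay: float = 0.01,
             needed_capability: float = 0.5, failure_at: int = 80):
    """Toy model of the 'scissors': the operator's take-over capability (first
    blade) decays through habituation, while the automation's availability
    (second blade) dips at a failure. Returns the step at which both blades
    have closed on the operator, or None if the dip is absorbed."""
    for t in range(steps):
        capability = max(0.0, capability - decay)  # habituation (T1 -> T2)
        available = (t != failure_at)              # temporary availability dip (T3)
        if not available and capability < needed_capability:
            return t                               # the operator is "cut off"
    return None

print(scissors())  # -> 80: when the automation fails, capability is already too low
```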
The deeper mechanism of the “scissor” could be located between the operator’s
confidence in the system’s performance capability and the operator’s take-over
capability, as sketched, e.g., by Manzey and Bahner (2005). Here, the relationship
between the perceived/attributed automation capacities on the part of the operator
and the actual capacity of the automation seems to be relevant. First, there is the
ideal scenario in which the expectations with regard to the technical system’s
abilities are realistic and fit the actual capacities. In that case, it is to be expected
that even after critical events, which reveal the system's boundaries, no adjustments to the attributions are necessary. The trust in the system is at an appropriate and constant level. However, there is a second scenario, in which the actual capacities do not meet the (too high) expectations. According to Lee and See (2004), this inadequate calibration might be traced back to an insufficient precision and specification of the judgment. In the literature, this phenomenon is discussed under the notions of overtrust/overreliance (e.g., Inagaki and Itoh 2013; Lee and See 2004) and automation bias/complacency (e.g., Bahner 2008; Mosier and Skitka 1996). While overtrust and automation bias might be understood as the cognitive components denoting the exaggerated system trust, overreliance and complacency depict the behavioral component. The latter two terms are often used in a complementary manner. They might be conceptualized as an insufficient or infrequent operator-sided system monitoring compared to what would actually be required with regard to the system's capacities (Bahner 2008).

Fig. 4 A possible connection between control-distribution and certainty (figure: time points T1, T2 and T3 with habituation and degradation)

Only in those critical situations that reveal the automation's limits does an adaptation of the attributions with regard to the system abilities take place. Subsequently, a strong loss of trust occurs, which, according to existing literature (Hoffman et al. 2013; Lee and See 2004), is in most cases difficult to compensate. Payre et al. (2016) affirm this role of complacency with regard to the emergence of an unsafe valley. They postulate that distinct complacency will lead to difficulties (e.g. slow reactions) in the course of take-over maneuvers between automated and non-automated driving. Here it also becomes clearer why "unsafe" is the better name for the valley, as the drivers even feel too comfortable, at least before the incident or accident.
With regard to that phenomenon, there seems to be an additional irony in the progress of automation: At the beginning of a technological development process, an automated system still has various errors, to which an operator can react by means of an increased readiness to take over. Due to the technical system's increasing availability over time, the user's first experience with a system error or failure is postponed. As such, an undue confidence builds up. Under these conditions, the same error that could be compensated before will have a higher impact on road safety and user expectations. Likewise, the loss of trust will be the stronger, the more difficult a mistake is to compensate and the more costs it provokes (Bahner 2008), as well as the bigger it is and the more difficult it is to predict (Lee and See 2004).
Choi and Ji (2015) provide a more holistic model of trust in adopting an autonomous vehicle: Within their approach, they combined different relevant concepts (e.g., trust, perceived risk and personality traits) within one model and examined it empirically. They found that the concepts of trust (defined via system transparency, technical competence and situation management) and perceived usefulness have a major influence on the behavioral intention to use a system. Other factors such as perceived risk and ease of use, as well as locus of control and sensation seeking, have only minor or no influence at all. According to these results, it might be assumed that exaggerated trust and perceived usefulness lead the naive driver into the uncanny and unsafe valley.
4 A Sketch of the Design Space: The Solution’s Dimensions
for Safeguarding an Unsafe Valley
Justified by the successful implementation of automated systems such as in Goodrich et al. (2006), Hoeger et al. (2012) or Altendorf et al. (2015), the assumption is that not automation or higher automation per se is unsafe, but that there are unsafe regions around safe automation designs, around combinations of different assistance and automation levels, and around transitions between levels or modes. How can the unsafe regions be safeguarded in order to utilize the safe regions? Decisive dimensions that form the design space of safeguards could be:
a. Abilities of the human and the automation: The human capability might depend on a selection process, e.g. in domains like aviation, or on the distribution within a general population, e.g. in the driving domain. The abilities of the automation depend on its interplay with the environment, and might be structured according to normal operation, system limits and system failure. Increasingly important will be the (meta-)ability of the automation to describe its own ability to sense and act, e.g. in case of a sensor degradation due to changes in the environment like bad weather.
b. The distribution of tasks, authority, responsibility and a minimum of autonomy for both the automation and the human, as described e.g. by Flemisch et al. (2011).
c. Combining (a) and (b), the control distribution of the corresponding automation level, which interacts with the human's involvement, and which could be organized in clear modes (Figs. 1 and 5). It is an open research and engineering question how many and which modes are needed and/or wanted at all, but there are clear hints that too many modes can lead to mode confusion, especially in time-critical situations. It is especially not clear whether partially and conditionally automated levels are needed at all, even if there are hints that a well-designed level of partial automation might improve the take-over ability and might also be fun. Another open research and engineering question is how many different modes can be differentiated and operated safely, and what potential migration paths could look like in order to ensure upwards and downwards compatibility.
Another dimension of the design space are the transitions between the modes, which can be initiated either by the human (Fig. 5, red) or by the automation (Fig. 5, blue). It is an open research and development question how the
Fig. 5 Uncanny Valleys: Modi and Transitions (figure: modes SAE 0, SAE 2 and SAE 3 with safe and unsafe regions, transitions between them, and a minimum risk maneuver/minimum risk state MRM/MRS)

transitions are balanced and secured against false and failed transitions. A safeguard, e.g., for a transition from the right side of the automation scale, crossing the valley towards the left side of the scale, could be an interlocked transition, where control is only handed over to the operator if it is really clear that the operator has taken over control. This interlocked transition was successfully implemented and tested for cars and trucks in the HAVEit project. This safeguard works best if the right rim is also secured with a transition to a minimum risk state, e.g., via a minimum risk maneuver, as described in Hoeger et al. (2011).
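A minimal sketch of the interlock principle, assuming a simple confirmation signal (hands on the wheel and eyes on the road) as the take-over criterion; the actual HAVEit implementation is more elaborate than this illustration:

```python
from dataclasses import dataclass

@dataclass
class OperatorStatus:
    hands_on_wheel: bool
    eyes_on_road: bool

def interlocked_handover(status: OperatorStatus) -> str:
    """Hand control to the operator only if the take-over is clearly confirmed;
    otherwise keep the automation in control and prepare a minimum risk
    maneuver (MRM) instead of dropping control into the valley."""
    if status.hands_on_wheel and status.eyes_on_road:
        return "MANUAL"             # interlock released: operator in control
    return "AUTOMATED_MRM_PENDING"  # stay automated; prompt operator, prepare MRM

print(interlocked_handover(OperatorStatus(hands_on_wheel=True, eyes_on_road=False)))
# -> AUTOMATED_MRM_PENDING
```

The point of the interlock is that a failed confirmation never leaves control unassigned: the automation keeps control until either the operator provably takes over or the minimum risk maneuver is executed.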
A safeguarding measure for the left rim of the unsafe valley can be, for example, to detect an insufficient involvement of the human by monitoring attention and to react accordingly, e.g. through prompts, as described e.g. in Rauch et al. (2010), Schwalm et al. (2015) and Schwalm and Ladwig (2015).
Another safeguard on the left rim of the valley is the communication of the ability of the automation, as shown by Beller et al. (2013), based on Heesen et al. (2010), as a concept to communicate an uncertainty value of the automation.
A similar direction of safeguarding against falling into an unsafe valley is provided by different authors who promote a so-called "trust management": First, systems could already be adapted within the design phase (e.g., a system's transparency could be highlighted, see Bahner 2008). Alternatively, one could try to improve users' system perception. Last, system capabilities and limits should be communicated more overtly (cf. Muir 1994). In this context, Payre et al. (2016) postulate that intensive practice is required in order to bridge the uncanny valley. According to them, outlining the system's functioning and its boundaries is indispensable in order to avoid safety-critical automation effects.
An additional concept that could help prevent people from falling into the unsafe valley is nudging, the art of promoting certain behaviors through small changes in the environment which nevertheless leave the individual free to decide for herself (e.g. Thaler and Sunstein 2014). Nudging could be used to influence humans to avoid the rims of the unsafe valley, for example by promoting more involvement (Flemisch et al. 2016b). A new concept which links nudging even more strongly to self-determination is currently being developed at the Institute for Industrial Engineering and Ergonomics at RWTH Aachen University.
Putting those safeguards together, a holistic picture or concept of human-machine resilience should be derived of how the human-machine system acts in normal operations, reacts to disturbances that try to push the system to its limits, and reacts to system failure. Figure 6 shows such transitions between normal operations, system limits and system failure. The upper part of Fig. 6 shows a degradation of the machine that might result in a situation with a control deficit, where the human might be somewhat prepared (upper part) or not sufficiently prepared (middle). The lower part of Fig. 6 shows a degradation of the human that might result in a complete dropout of the human, where the machine might only be able to handle the situation for a limited time. A minimum risk maneuver that hopefully results in a state of minimum risk or maximal possible safety might be the last resort in order to keep the human-machine system safe. From a minimum
risk state, the human might be able to take over again and return to normal operations if possible.
The benefit of thinking in a layered approach of normal operations, system limits and failure could lie in the flexibility to stabilize the human-machine system at the limits and, instead of going into a complete, unrecoverable failure, to recover or gracefully degrade from system limits and failures.
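This layered behavior can be summarized as a small state machine; the state and event names below are assumptions for illustration and do not claim to reproduce Fig. 6 exactly:

```python
# Assumed states and transitions of the layered resilience concept.
TRANSITIONS = {
    ("NORMAL_OPERATION",   "disturbance"):       "SYSTEM_LIMIT",
    ("SYSTEM_LIMIT",       "stabilized"):        "NORMAL_OPERATION",    # recovery
    ("SYSTEM_LIMIT",       "degradation"):       "SYSTEM_FAILURE",
    ("SYSTEM_FAILURE",     "mrm_started"):       "MINIMUM_RISK_STATE",  # graceful degradation
    ("MINIMUM_RISK_STATE", "operator_takeover"): "NORMAL_OPERATION",
}

def step(state: str, event: str) -> str:
    """Advance the human-machine system; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "NORMAL_OPERATION"
for event in ["disturbance", "degradation", "mrm_started", "operator_takeover"]:
    state = step(state, event)
print(state)  # -> NORMAL_OPERATION, recovered via the minimum risk state
```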
Putting those safeguards together systematically could result in a highly cooperative human-machine system that could be inspired by natural examples of cooperation in movement, for example the H(orse)-metaphor (Flemisch et al. 2003), its generalization of shared and cooperative guidance and control, complemation (Schutte 1999) or cooperative automation (e.g. Flemisch et al. 2014; Bengler et al. 2014), and its instantiations H-Mode (e.g., Goodrich et al. 2006; Altendorf et al. 2015) or Conduct-by-Wire (Winner and Hakuli 2006). The key seems to be to combine partially and highly automated levels in such a way that in the partially automated level the operator is kept involved and prevented from losing her situation and mode awareness, e.g., with continuous haptic feedback and attention monitoring, while in the highly automated level the human-machine system is secured such that it stays safe even in the extreme case that the operator cannot come back into the loop.
Fig. 6 Resilience through recovery or graceful degradation between normal operations, system limits or system failure (figure: scenarios A-F with a minimum risk maneuver MRM across standard operation, system limit and system failure)
5 Outlook: Balancing Risks and Chances of Assistance
and Automation by Securing the Unsafe Valley
Although the causative reasons for the emergence of such a valley might not be fully deduced yet, it becomes increasingly clear that there is at least one unsafe valley on the scale of assistance and automation, with clear applications at least in air and ground vehicle automation, and a good chance that the concept might also be valuable in other domains. It is undoubtedly necessary to secure the boundaries of such an uncanny valley.
There is justified hope that in most cases the unsafe valley can be well structured and therefore comparably well secured. We should nevertheless be aware that there might be cases in which the structure of the unsafe valley is more complex, more like a mountain landscape. In those cases the unsafe valley(s) should first be reasonably mapped before the boundaries are secured and viable bridges can be built over them.
In order to increase or maintain safety, and to harvest the many opportunities of automation, the risks of automation have to be controlled. The systematic mapping and development of safeguarding measures down to safe combinations will require interdisciplinary research and development, but will hopefully prevent naive and probably too trustful operators, system designers and engineers from falling into the unsafe valley, sometimes even an abyss of automation.
Acknowledgments Thanks to the initial supporters at NASA, as well as the DFG and its referees for the support in the H-Mode projects and in the research program "Kooperativ Interagierende Fahrzeuge", and the EU for the support in the projects HAVEit and InteractIVe. Thanks to the colleagues at DLR, TU Darmstadt, TU München and RWTH Aachen for the rich discussions.
References
Altendorf E, Baltzer M, Heesen M, Kienle M, Weißgerber T, Flemisch F (2015) H-Mode, a
Haptic-multimodal interaction concept for cooperative guidance and control of partially and
highly automated vehicles. In: Winner H et al (eds) Handbook of driver assistance systems.
Springer, Cham
Bahner JE (2008) Übersteigertes Vertrauen in Automation: Der Einfluss von Fehlererfahrungen auf Complacency und Automation Bias. Dissertation, TU Berlin
Beller J, Heesen M, Vollrath M (2013) Improving the driver–automation interaction an approach
using automation uncertainty. Hum Factors 55(6):1130–1141
Bengler K, Flemisch F (2011) Von H-Mode zur kooperativen Fahrzeugführung – grundlegende ergonomische Fragestellungen. In: 5. Darmstädter Kolloquium: kooperativ oder autonom? Darmstadt
Bengler K, Dietmayer K, Farber B, Maurer M, Stiller C, Winner H (2014) Three decades of driver
assistance systems: review and future perspectives. IEEE Intell Transport Syst Mag 6(4):6–22
Billings CE (1997) Aviation automation: the search for a human centered approach. Lawrence
Erlbaum Associates, Mahwah, NJ
Choi JK, Ji YG (2015) Investigating the importance of trust on adopting an autonomous vehicle.
Int J Hum Comput Interact 31:692–702
Damböck D (2013) Automationseffekte im Fahrzeug – von der Reaktion zur Übernahme. Dissertation, TU München
Donges E (2015) Fahrerverhaltensmodelle. In: Handbuch Fahrerassistenzsysteme. Springer
Fachmedien, Wiesbaden, pp 17–26
Eckstein L (2016) Personal communication
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37
(1):32–64
BEA – Bureau d’Enquêtes et d’Analyses (2012) Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro–Paris. BEA, Paris
Flemisch FO, Onken R (1998) The cognitive assistant system and its contribution to effective
man/machine interaction. NATO RTO MP-3, Monterey, CA
Flemisch FO, Adams CA, Conway SR, Goodrich KH, Palmer MT, Schutte PC (2003) The
H-metaphor as a guideline for vehicle automation and interaction (No. NASA/TM—2003-
212672). NASA, Langley Research Center, Hampton
Flemisch F, Kelsch J, Schieben A, Schindler J (2006) Stücke des Puzzles hochautomatisiertes Fahren: H-Metapher und H-Mode. In: 4. Workshop Fahrerassistenzsysteme, Löwenstein
Flemisch F, Kelsch J, Löper C, Schieben A, Schindler J (2008) Automation spectrum, inner/outer
compatibility and other potentially useful human factors concepts for assistance and automa-
tion. In: de Waard D, Flemisch FO, Lorenz B, Oberheid H, Brookhuis KA (eds) Human factors
for assistance and automation. Shaker, Maastricht, pp 1–16
Flemisch F, Schieben A, Temme G, Rauch N, Heesen M (2009) HAVEit Public Deliverable D33.2
“Preliminary concept on optimum task repartition for HAVEit systems”, Brussels
Flemisch F, Schieben A, Strauss M, Lüke S, Heyden A (2011) Design of human-machine
interfaces for highly automated vehicles in the EU-project HAVEit. In: Proceedings of 14th
international conference on human-computer interaction
Flemisch F, Heesen M, Hesse T, Kelsch J, Schieben A, Beller J (2012) Towards a dynamic balance
between humans and automation: authority, ability, responsibility and control in shared and
cooperative control situations. Int J Cognit Tech Work 14(1):3–18
Flemisch F, Bengler K, Bubb H, Winner H, Bruder R (2014) Towards cooperative guidance and
control of highly automated vehicles: H-mode and conduct-by-wire. Ergonomics Special Issue
Beyond Human-Centred Automation 57(3). Online 24.2.2014
Flemisch F, Schwalm M, Deml B (2015a) Systemergonomie kooperativ interagierende Fahrzeuge.
Projektantrag an die DFG
Flemisch F, Winner H, Bruder R, Bengler K (2015b) Cooperative guidance, control and automa-
tion. In: Winner H et al (eds) Handbook of driver assistance systems. Springer, Cham
Flemisch F, Altendorf E, Baltzer M, Rudolph C, Lopez D, Voß G, Schwalm M (2016a) Arbeiten in komplexen Mensch-Automations-Systemen: Das unheimliche und unsichere Tal (Uncanny Valley) der Automation am Beispiel der Fahrzeugautomatisierung. In: 62. GfA-Frühjahrskongress "Arbeit in komplexen Systemen – Digital, vernetzt, human?!", Aachen
Flemisch F, Altendorf E, Weßel G, Canpolat Y (2016b) Personal communication
Gasser TM, Arzt C, Ayoubi M, Bartels A, Buerkle L, Eier J, Flemisch F, Haecker D, Hesse T,
Huber W, Lotz C, Maurer M, Ruth-Schumacher S, Schwarz J, Vogt W (2012a) Rechtsfolgen
zunehmender Fahrzeugautomatisierung – Gemeinsamer Schlussbericht der Projektgruppe. Fahrzeugtechnik F 83, Bundesanstalt für Straßenwesen (BASt)
Gasser TM et al (2012b) Rechtsfolgen zunehmender Fahrzeugautomatisierung – Gemeinsamer Schlussbericht der Projektgruppe. Bundesanstalt für Straßenwesen (BASt), F 83
Goodrich K, Flemisch F, Schutte P, Williams R (2006) A design and interaction concept for
aircraft with variable autonomy: application of the H-Mode. In: Digital Avionics Systems
Conference, USA
Heesen M, Kelsch J, Löper C, Flemisch F (2010) Haptisch-multimodale Interaktion für hochautomatisierte, kooperative Fahrzeugführung bei Fahrstreifenwechsel-, Brems- und Ausweichmanövern. In: Gesamtzentrum für Verkehr Braunschweig (Hrsg.) Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel AAET, Braunschweig
Herzberger ND, Voß GMI, Schwalm M (2016) Personal communication
Hoeger R, Amditis A, Kunert M, Hoess A, Flemisch F, Krueger H-P, Bartels A, Beutner A (2008)
Highly automated vehicles for intelligent transport: Haveit approach. ITS World Congress,
New York
Hoeger R, Zeng H, Hoess A, Kranz T, Boverie S, Strauss M et al (2011) Final report, Deliverable
D61.1. Highly automated vehicles for intelligent transport (HAVEit). 7th Framework Programme
Hoeger R, Wiethof M, Rheker T (2012) Complexity measures of traffic scenarios: psychological
aspects and practical applications. In: International conference on driver behaviour and
training 2011, Paris
Hoffman RR, Johnson M, Bradshaw JM, Underbrink A (2013) Trust in automation. IEEE Intell
Syst 28(1):84–88
Hollnagel E (2007) Keynote zur 7. Berliner Werkstatt “Prospektive Gestaltung von Mensch-
Technik-Interaktion” 2007, Berlin
Inagaki T, Itoh M (2013) Human’s overtrust in and overreliance on advanced driver assistance
systems: a theoretical framework. Int J Veh Technol 2013:1–8
Kuz S, Bützler J, Schlick CM (2015) Anthropomorphic design of robotic arm trajectories in assembly cells. J Occup Ergon 12(3):73–82
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46
(1):50–80
Ma R, Kaber DB (2005) Situation awareness and workload in driving while using adaptive cruise
control and a cell phone. Int J Ind Ergon 35(10):939–953
Manzey D, Bahner JE (2005) Vertrauen in Automation als Aspekt der Verlässlichkeit von Mensch-Maschine-Systemen. In: Beiträge zur Mensch-Maschine-Systemtechnik aus Forschung und Praxis. Festschrift für Klaus-Peter Timpe, pp 93–109
Maurer M (2016) Personal communication
Mayer MP (2012) Entwicklung eines kognitionsergonomischen Konzepts und eines Simulationssystems für die robotergestützte Montage. Dissertation. Shaker, Aachen
Merat N, Jamson AH, Lai FC, Carsten O (2012) Highly automated driving, secondary task
performance, and driver state. Hum Factors 54(5):762–771
Mori M (1970) The uncanny valley. Energy 7(4):33–35 (in Japanese)
Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot
Autom Mag 19(2):98–100
Mosier KL, Skitka LJ (1996) Human decision-makers and automated decision aids: made for each
other? In: Parasuraman R, Mouloua M (eds) Automation and human performance: theory and
applications. Lawrence Erlbaum Associates, Mahwah, NJ, pp 201–220
Muir BM (1994) Trust in automation: Part I. Theoretical issues in the study of trust and human
intervention in automated systems. Ergonomics 37(11):1905–1922. doi:10.1080/00140139408964957
National Transportation Safety Board (2010) “Loss of Control on Approach, Colgan Air, Inc.
Operating as Continental Connection Flight 3407 Bombardier DHC-8-400, N200WQ,
Clarence Center, New York, February 12, 2009”. NTSB/AAR-10/01. National Transportation
Safety Board, Washington, DC
Norman DA (1990) The problem with automation. Philos Trans R Soc Lond B 327:585–593
Payre W, Cestac J, Delhomme P (2016) Fully automated driving: impact of trust and practice on
manual control recovery. Hum Factors 58(2):229–241
Petermann I, Schlag B (2009) Auswirkungen der Synthese von Assistenz und Automation auf
das Fahrer-Fahrzeug System. Paper presented at the 11. Braunschweiger Symposium
Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel (AAET) 2011, Braunschweig
Rauch N, Kaussner A, Krueger H-P, Boverie S, Flemisch F (2010) Measures and countermeasures
for impaired driver’s state within highly automated driving. Transport Research Arena,
Brussels
SAE J 3016:2014 Taxonomy and definitions for terms related to on-road motor vehicle automated
driving systems. Society of Automotive Engineers
Schieben A, Flemisch F (2008) Who is in control? Exploration of transitions of control between
driver and an eLane vehicle automation. VDI/VW Tagung Fahrer im 21. Jahrhundert 2008.
Wolfsburg
Schieben A, Damböck D, Kelsch J, Rausch H, Flemisch F (2008) Haptisches Feedback im Spektrum von Fahrerassistenz und Automation. In: 3. Tagung Aktive Sicherheit durch Fahrerassistenz, Garching
Schutte PC (1999) Complemation: an alternative to automation. J Inform Tech Impact 1
(3):113–118
Schutte P, Goodrich K, Williams R (2016) Synergistic allocation of flight expertise on the flight
deck (SAFEdeck): a design concept to combat mode confusion, complacency, and skill loss in
the flight deck. In: Stanton NA, Landry S, Di Bucchianico G, Vallicelli A (eds) Advances in
human aspects of transportation. Springer, Berlin, pp 899–911
Schwalm M, Ladwig S (2015) How do we solve demanding situations—a discussion on driver
skills and abilities. In: 57th Conference of experimental psychologists 2015, Hildesheim
Schwalm M, Voß GMI, Ladwig S (2015) Inverting traditional views on human task-processing
behavior by focusing on abilities instead of disabilities–a discussion on the functional situation
management of drivers to solve demanding situations. In: Engineering psychology and cogni-
tive ergonomics. Springer, Cham, pp 286–296
Sheridan TB (ed) (1976) Monitoring behavior and supervisory control. Springer, Berlin
Sheridan TB (1980) Computer control and human alienation. Technol Rev 83(1):65–73
Thaler RH, Sunstein CR, Balz JP (2014) Choice architecture. In: The behavioral foundations of
public policy. Princeton University Press, Princeton, NJ
Voß GMI, Schwalm M (2015) 1. Kongress der Fachgruppe Verkehrspsychologie 2015.
Braunschweig
Weßel G, Altendorf E, Flemisch F (2016) Self-induced nudging in conditionally and highly
automated driving. Working Paper, Aachen
Winner H, Hakuli S (2006) Conduct-by-wire–following a new paradigm for driving into the
future. In: Proceedings of FISITA world automotive congress, Oct 2006, vol 22, p 27