Uncanny and Unsafe Valley of Assistance and Automation: First Sketch and Application to Vehicle Automation


Abstract

Progress in sensors, computing power and increasing connectivity allows building and operating more and more powerful assistance and automation systems, e.g. in aviation, cars and manufacturing. Besides many benefits, new problems occur, e.g. in human-machine interaction. In the field of robotics the metaphor of an uncanny valley is known, where robots showing high, however not perfect, similarities to, e.g., humans are perceived by humans as uncanny and unsafe. In the field of automation, e.g. vehicle automation, a comparable, metaphorical design correlation is implied: an unsafe valley, e.g. between partially- and highly-automated automation levels, in which a loss of safety could occur due to misperceptions. This contribution sketches the concept of the (uncanny and) unsafe valley of automation, summarizes early affirmative studies, gives first hints towards an explanation of the valley, and outlines the design space for securing the borders of the valley and for bridging it.
Frank Flemisch, Eugen Altendorf, Yigiterkut Canpolat, Gina Weßel,
Marcel Baltzer, Daniel Lopez, Nicolas Daniel Herzberger, Gudrun
Mechthild Irmgard Voß, Maximilian Schwalm, and Paul Schutte
F. Flemisch (*)
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany

E. Altendorf • Y. Canpolat • G. Weßel
IAW Institut für Arbeitswissenschaft, RWTH Aachen, Aachen, Germany

M. Baltzer • D. Lopez
FKIE Fraunhofer Institut für Kommunikation, Informationsverarbeitung und Ergonomie, Fraunhoferstr. 20, 53343 Wachtberg, Germany

N.D. Herzberger • G.M.I. Voß • M. Schwalm
ika Institut für Kraftfahrzeuge, RWTH Aachen, Aachen, Germany

P. Schutte
Aviation Development Directorate AMRDEC/RDECOM, US Army, Brooklyn, NY, USA

© Springer-Verlag GmbH Germany 2017
C.M. Schlick et al. (eds.), Advances in Ergonomic Design of Systems, Products and Processes, DOI 10.1007/978-3-662-53305-5_23
Keywords Automation • Assistance • Robotics • Human-machine systems • Uncanny/unsafe valley
1 Introduction: Assistance, Automation and Robotics
Enabled by technical advancements in the field of sensors, computers and connec-
tivity, as well as motivated by cost-pressure along with ever-increasing perfor-
mance requirements, the complexity of information systems has steadily grown in
the last decades (cf. Hollnagel 2007). A part of this complexity can be compensated
with assistance systems and automation, however, unwanted side effects such as
“Operator/pilot out of the loop” or “Mode confusion” (cf. Billings 1997) are
reported in a variety of domains like aviation, nuclear power plants and automotive.
Rather than speaking about over-automation in an undifferentiated manner,
Norman (1990) points out that the problem is not over-automation itself, but
inappropriate feedback and interaction.
There is a concept in robotics, which can be considered as a specific form of
automation, known as “The Uncanny Valley”, where robots showing high,
however imperfect similarities to humans are perceived by humans as uncanny
and disconcerting (Mori 1970; Mori et al. 2012). Conscious of the Uncanny
Valley, research and development in robotics is focusing on cooperative robot-
ics, where to a certain extent humans and highly automated robots work together
in the same work spaces, instead of fully-automated robots (cf. Mayer 2012;
Kuz et al. 2015).
A similar development regarding cooperative assistance and automation is
currently emerging in the area of ground vehicles ensuing from the aviation domain
(e.g., Flemisch and Onken 1998; Schutte 1999; Goodrich et al. 2006). It became
increasingly clear, through basic concepts such as Levels of Automation, that
assistant and automated systems are related and, (a) should be discussed holistically
and (b) could be depicted on a scale—that is, a spectrum of assistance and
automation (cf. Flemisch et al. 2003, 2008). This point of view was later applied
in the standard categorization of vehicle automation (cf. BASt 2012a; SAE 2014),
which differentiates between assisted, partially- and highly-automated systems.
Figure 1 shows a simplified scale of assistance and automation related to the control distribution between the human and the automation in the assistance- and automation-levels, including manual, assisted, partially-, highly- and fully-automated.

Fig. 1 Control distribution between the human and automation represented as an assistance- and automation-scale, here with explicit automation-levels/modes (inspired by Sheridan 1980; Flemisch et al. 2003, 2008, 2012, 2014, 2015a, b; Gasser et al. 2012b; SAE 2014)
A possible unsafe valley of automation can be found in the right half of the scale
between partly- and highly-automated, which could be rather uncanny for the user,
and more importantly, rather unsafe, as described further down.
2 Early Indicators for the Existence of an Unsafe Valley
There is a good chance that the metaphor of an (uncanny and) unsafe valley can be
applied to automation in all kinds of domains. Early systematic explorations within
the area of partly- and highly-automated vehicle control have been conducted for
ground and air vehicles since 2003 (NASA-H-Mode) and since 2004 for ground
vehicles as part of DFG-H(orse)-Mode-Projects.
These were inspired by the H-metaphor, a design metaphor that takes the rider-
horse interaction as a blueprint for vehicle automation (Flemisch et al. 2003,2015a,
b; Bengler and Flemisch 2011; Altendorf et al. 2015). The initial base research
sparked a series of national and EU-projects, introduced the term highly-automated
driving (e.g. Flemisch et al. 2006; Hoeger et al. 2008,2011) and inspired the
development of partially-automated “piloted” driving, e.g. by Volkswagen, Audi
and Mercedes, and the more chauffeur-inspired Tesla.
At the beginning of this research and development in 2000, it was debated as to
whether one or multiple modes between assisted- and fully-automated automation
levels would be advisable and how they should be designed, especially regarding
the involvement-degree of the operators, here the drivers, and the extent of the
automation’s intervention and the required safety measures, e.g. by operator monitoring.
With accumulating research, it became clear that there are combinations of
partially- as well as highly-automated modes that are functional, while other
implementations are not. An example of a well-functioning implementation of a
lower automation level in the car domain is presented by Ma and Kaber (2005). In
their implementation automation only supports longitudinal control. They revealed
that an Advanced Cruise Control (ACC) system is able to enhance system perfor-
mance in terms of lane deviations and speed control in tracking a lead vehicle and
increase drivers’ situation awareness, even when drivers are distracted by a
non-driving related task.
In the early decade of this century, research explored whether assistive (i.e., not
fully autonomous) solutions beyond ACC, with coupled longitudinal and lateral
control could be successful. Starting from the H-Mode projects, a series of studies
(e.g. Schieben and Flemisch 2008; Schieben et al. 2008; Petermann and Schlag
2009), have shown that designs differentiating between partly- and highly-
automated systems and using well-arranged transitions could succeed. Rauch
et al. (2010) demonstrated that an integrated driver-state detection may improve the
safety and acceptance of the automation systems.
Furthermore, Merat et al. (2012) presented a safe implementation of a highly-
automated vehicle, which takes longitudinal and lateral control and can perform
gentle maneuvers itself. They conducted an experiment in which the automation
had complete control of the automobile in normal situations and drivers were
warned when approaching an obstacle and manual control had to be resumed.
The results showed that, in their implementation of highly-automated driving, no
negative effects on system performance emerged.
Besides these successful examples, there are clear hints that areas between well-
functioning modes and their variants exist, which are clearly less safe: For example,
simulator-based studies conducted at DLR showed that partially-automated driving
designs, where drivers did not have to apply steering torques any more, can cause
problems in compensating system failures. However, when the driver still needed to
apply some of the required steering torques, these failures could be absorbed
(Schieben and Flemisch 2008). Furthermore, additional studies (e.g., Damböck
2013; Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and Schwalm 2015)
have shown that highly automated designs could lead to reduced take-over
capabilities of drivers. This correlation is demonstrated, e.g., by a lack of compensatory
reaction in terms of reducing activity in non-driving-related tasks that would be
needed for an appropriate preparation of a take-over situation (Voß and Schwalm
2015; Schwalm et al. 2015; Schwalm and Ladwig 2015). Moreover, a real-world
indicator for the existence of an uncanny/unsafe valley might be the first fatal crash
of an automated ground vehicle, a partially automated Tesla, which became
public in 2016, shortly after the first publication of the unsafe valley in
February 2016 (Flemisch et al. 2016a).
As of 2016, everyday- and user-experience with highly automated ground
vehicles is sparse; however, additional indicators for the existence of an unsafe
valley can be derived from the aviation domain. As the applicability of the
H-Mode has shown, ergonomic principles for cooperative guidance and control
can be applied to both domains. Moreover, Schutte et al. (2016) argue for a similar
phenomenon in the aviation area. They present examples of systemic failure
resulting from highly automated flying (National Transportation Safety Board
2010; Bureau d’Enquêtes et d’Analyses 2012) and argue that these incidents were partly due to bad
system design instead of pilot failure alone. They presented an alternative cockpit
design, which keeps the pilot in the loop. In terms of the unsafe valley they avoid
falling into it by staying on the left side of the abyss. During scientific discussions in
the context of this publication, it turned out that other researchers already had
similar ideas, like a valley of automation. Maurer (2016), e.g., describes
a u-shaped curve for the relation between user transparency and driver assistance
systems. Furthermore, Maurer (2016) and Eckstein (2016) describe the “grand
canyon” of driver-assistance systems, which refers especially to the differences
in effort and protection systems between partially and highly automated vehicles.
3 A First Glimpse: What Happens in an Unsafe Valley?
Are there scientific phenomena that could determine the existence and the
characteristics of such an unsafe valley? Figure 2 pictures an example of a potential
“crash” into an uncanny valley, here applied to the car domain. Within that
example, a system failure with an adjacent take-over occurs in front of a narrow
curve. In one condition (here named partially-/highly-automated, because
it was supposed to be operated as partially automated, but drivers used it in the way
highly automated systems are used) this leads to an unsafe driving performance, i.e.
a “crash” into an uncanny valley. This does not occur in another condition
(partially-automated). This suggests that there might be a correlation between the
control repartition—which ranges from manual to fully automated driving—and an
output quantity, such as performance or driving safety. Figure 2 conceptualizes this
issue: While a mode with a low degree of automation (M3.1: e.g., partly-automated)
still provides sufficient safety, a higher automation level (M3.2) seems to be rather
unsafe. However, passing this unsafe valley, an even higher automation level
(M4) could be safe again.
An important element, which could lead to the safety drop in M3.2, is the
connection between the automation performance and the operator’s performance
in case of a take-over situation: A potential correlation between the operator’s
involvement and his take-over capabilities could be that, if the former is too low,
the take-over will not happen in time. A nominally highly reliable and capable
automation that is only infrequently insufficient and incapable will induce a pure
monitoring role for the operator, i.e. “supervisory control”, for which humans are not
well prepared (e.g., Sheridan 1976; Endsley 1995). A pertinent approach from
Schwalm et al. (e.g., Schwalm et al. 2015; Schwalm and Ladwig 2015; Voß and
Schwalm 2015) postulates that operators, here drivers, abandon a continuous
control and regulative process in case of a (too) high automation level. Due to
this, they supposedly are no longer capable of applying regulatory measures in
terms of a functional situation management. Instead, if the drivers are fully
involved in the driving task, i.e. manual driving, this assumed regulatory process
would allow them to analyze and anticipate the driving situation, and to adequately
distribute available cognitive resources.

Fig. 2 An example of a “crash” in an Uncanny Valley: intercepting a system failure prior to a curve (green: system variant partially-automated; red: system variant partially-/highly-automated). The figure plots the level of actual safety (and perceived safety/trust) over the control distribution, with modes M1, M3.1, M3.2 and M4 marked
In order to conceptualize this idea of reduced driver’s involvement as a risk
factor for the appearance of an unsafe valley, a schema was developed for the
context of driving, in which the different automation levels were combined with
possible states of driver’s involvement (cf. Fig. 3: Herzberger et al. 2016). These
states depict the minimal requirements for each level of automation regarding the
driver’s involvement which are necessary to guarantee a safe driving performance.
These minimum requirements are separately reported for the three levels of a
driving task [navigation, guidance and stabilization; see Donges (2015)] in order
to provide a more detailed sketch of the driver-system interaction. In Fig. 3, the
orange shaded area represents driver-sided task fulfillment, while the blue shaded
area represents system-sided task fulfillment (Herzberger et al. 2016).
In total, five driver states were defined on the basis of the SAE Levels (SAE
International 2014). In the following, these five driving states will be described
regarding the required extent of involvement, presented from high to low. The state
Fully (F) requires an active performance of the entire driving task.

Fig. 3 Minimal requirements for driver states, per automation level (BASt/SAE) and driving-task level (navigation, guidance, stabilisation); orange: minimal requirements for driver involvement, blue: potential distribution of control between human and machine. Note: The crosshatched area depicts the unsafe valley

The second state,
Involvement (I), is divided into Involvement 1 (I1) and Involvement 2 (I2). State I1
still requires an active performance, either longitudinal or lateral control, and a
cognitive monitoring of the entire driving task. Moreover, drivers in state I1 have to
be ready to take over the driving task at any time without a take-over request
(TOR). In contrast to I1, drivers in the state of I2 do not actively perform a driving
task. However, they are still required to perform the cognitive monitoring. Still,
drivers in state I2 have to be ready to take over the driving task at any time without a
TOR. Regarding the state of Retrievable (R), a cognitive monitoring of the driving
task is no longer required. Nevertheless, the driver has to be able to resume the
driving task after an appropriate TOR. In the final state, Non-Retrievable (NR), the
driver cannot take over control of the vehicle. This state is reached if, e.g., the driver
sleeps, or if a transition from Retrievable to a higher level of involvement is not
possible. These five states only define the minimal requirements for a driver’s
involvement for the safe management of the driving task. Requirements for a
possible intervention by the driver are not considered here yet.
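As a compact summary, the five driver states above can be sketched as a small lookup table; the boolean encoding (active control, cognitive monitoring, take-over without/after a TOR) is merely one reading of the prose, not the authors' formalization:

```python
from collections import namedtuple

# Hypothetical encoding of the five driver states described above.
Req = namedtuple("Req", "active_control monitoring takeover_without_tor takeover_after_tor")

DRIVER_STATES = {
    "F":  Req(True,  True,  True,  True),   # Fully: performs the entire driving task
    "I1": Req(True,  True,  True,  True),   # longitudinal OR lateral control + monitoring
    "I2": Req(False, True,  True,  True),   # monitoring only, take-over without TOR
    "R":  Req(False, False, False, True),   # Retrievable: resumes after an appropriate TOR
    "NR": Req(False, False, False, False),  # Non-Retrievable: cannot take over (e.g. asleep)
}

# The only difference between I2 and R is continuous monitoring and the
# readiness to take over without a take-over request:
assert DRIVER_STATES["I2"].monitoring and not DRIVER_STATES["R"].monitoring
assert DRIVER_STATES["I2"].takeover_without_tor and not DRIVER_STATES["R"].takeover_without_tor
```

Such a table makes the later argument about confusing I2 with R directly inspectable.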
Two different vehicle conditions were specified: Firstly, in the condition of
assisted driving (A) the vehicle either takes over longitudinal or lateral control
during the driving task. Secondly, in the condition of lead driving (L), the vehicle
controls both longitudinal and lateral actions to the full extent. On top of that,
the earlier introduced driver state Fully (F) can be transferred to the vehicle. It
requires an active performance of the entire driving task by the vehicle.
Based on this concept of states of driver’s involvement in the context of
automated driving, it is possible to derive an explanation for a “crash” into the
uncanny/unsafe valley: For drivers, the SAE Levels 2 and 3 seem to pose the same
requirements in normal traffic of automated driving, as both lateral and longitudinal
guidance are carried out by the system. Nevertheless, the actual requirements of a
Level 2 system to the driver are clearly higher due to the missing TOR in case of a
system failure. In Level 2 drivers have to constantly monitor the driving task (I2),
while in Level 3 it is not required (R). In other words, drivers tend to confuse a
Level 2 (I2) state with a Level 3 (R) state (cf. orange crosshatched area in Fig. 3).
Only when system limits are reached do the differences between the two (required)
states of involvement appear, and the driver potentially crashes into the uncanny/
unsafe valley.
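The misconception between Level 2 and Level 3 can be expressed as a minimal check; the SAE-level-to-state mapping and all names here are an illustrative reading of the text, not an official assignment:

```python
from enum import IntEnum

class Involvement(IntEnum):
    """Driver involvement, ordered from none to full (names from the text)."""
    NR = 0
    R = 1
    I2 = 2
    I1 = 3
    F = 4

# Hypothetical minimal involvement per SAE level, following the text:
# Level 2 still requires I2 (monitoring), Level 3 only R (retrievable).
MIN_INVOLVEMENT = {0: Involvement.F, 1: Involvement.I1, 2: Involvement.I2,
                   3: Involvement.R, 4: Involvement.NR}

def in_unsafe_valley(sae_level: int, actual: Involvement) -> bool:
    """True if the driver's actual involvement falls below the minimum
    the active automation level requires — the misconception described above."""
    return actual < MIN_INVOLVEMENT[sae_level]

# A driver treating a Level 2 system like a Level 3 system drops to R:
assert in_unsafe_valley(2, Involvement.R)       # required I2, actual R -> unsafe
assert not in_unsafe_valley(3, Involvement.R)   # R suffices for Level 3
```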
Another advantage of the classification of driver states is the timely recognition
of a decreasing driver involvement. If this risk is recognized, possible consequences
can be rebalanced and/or reduced. In the following, the temporal course of a
“crash” into an uncanny/unsafe valley is presented.
Figure 4 highlights the temporal course of a “crash” into an unsafe valley: At
first, everything is working properly, because the automation’s capabilities are
sufficient for handling the vehicle control situation and the operator is still able to
take over (T1). Over time, habituation effects can emerge (T2). If in that case a
system failure occurs, a human operator’s take-over capability would be insufficient
for the safe management of the required take-over maneuver (T3). Within that scenario,
the unsafe valley metaphorically resembles a crevasse (ice crevice) that seems to be
crossable over a stable bridge. This bridge, however, will break down as soon as it is
used and will devour the all too trustful operator. Another possible metaphor
depicts a pair of scissors: One blade represents the operator’s take-over capabilities,
the other the automation’s availability. The first blade is slowly closing when the
operator’s take-over capability decreases with increasing trust. As soon as the
automation availability decreases temporarily, the movement of the two blades is
cutting off the operator from the control of the process, e.g., of the vehicle.
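The scissors metaphor can be illustrated with a toy model: one "blade" is a take-over capability that decays with flawless exposure time, the other is the automation's availability. The function shape and all constants are purely illustrative assumptions:

```python
import math

def takeover_capability(hours_of_flawless_use: float) -> float:
    """One scissor blade: readiness to take over, decaying with habituation.
    The exponential form and the constants are illustrative assumptions."""
    return 0.2 + 0.8 * math.exp(-hours_of_flawless_use / 20.0)

def is_cut_off(hours: float, automation_available: bool, demand: float = 0.5) -> bool:
    """The blades close: availability drops while the operator's capability
    has sunk below what the take-over maneuver demands."""
    return (not automation_available) and takeover_capability(hours) < demand

assert not is_cut_off(1.0, automation_available=False)    # T1: early failure, driver still ready
assert not is_cut_off(100.0, automation_available=True)   # T2: habituated, but automation holds
assert is_cut_off(100.0, automation_available=False)      # T3: failure after habituation
```

The point of the sketch is that neither blade alone is dangerous; only their coincidence cuts the operator off.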
The deeper mechanism of the “scissor” could be located between the operator’s
confidence in the system’s performance capability and the operator’s take-over
capability, as sketched, e.g., by Manzey and Bahner (2005). Here, the relationship
between the perceived/attributed automation capacities on the part of the operator
and the actual capacity of the automation seems to be relevant. First, there is the
ideal scenario in which the expectations with regard to the technical system’s
abilities are realistic and fit the actual capacities. In that case, it is to be expected
that even after critical events, which will outline the system’s boundaries, no
adjustments to the attributions are necessary. The trust in the system is on an
appropriate and constant level. However, there is a second scenario, in which the
actual capacities do not meet the (too high) expectations. According to Lee and See
(2004) this inadequate calibration might be traced back to an insufficient precision
and specification of the judgment. In the literature, this phenomenon is discussed
under the notions of overtrust/overreliance (e.g., Inagaki and Itoh 2013; Lee and
See 2004) and automation bias/complacency (e.g., Bahner 2008; Mosier and Skitka
1996). While overtrust and automation bias might be understood as cognitive
components denoting the exaggerated system trust, overreliance and complacency
depict the behavioral component. The latter two terms are often used
complementarily. They might be conceptualized by means of an insufficient or
infrequent operator-sided system monitoring compared to what actually would be
required with regard to the system’s capacities (Bahner 2008).

Fig. 4 A possible connection between control-distribution and certainty (temporal stages T1–T3)

Only in those critical
situations that evince the automation limits, an adaptation of the attributions with
regard to the system abilities takes place. Subsequently, a strong loss of trust occurs,
which is—according to existing literature (Hoffman et al. 2013; Lee and See
2004)—in most cases difficult to compensate. Payre et al. (2016) affirm this
role of complacency with regard to the emergence of an unsafe valley. They
postulate that distinct complacency will lead to difficulties (e.g. slow reactions) in
the course of take-over maneuvers between automated and non-automated driving.
Here it also becomes clearer why “unsafe” is the better name for the valley, as the
drivers even feel too comfortable, at least before the incident or accident.
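The distinction between the cognitive and the behavioral component can be sketched as two simple predicates; the scales and thresholds are illustrative assumptions, not taken from the cited literature:

```python
def trust_calibration(perceived: float, actual: float, tol: float = 0.05) -> str:
    """Cognitive component: compare perceived vs. actual automation capability
    (both on an assumed 0..1 scale, cf. the calibration idea of Lee and See 2004)."""
    if perceived > actual + tol:
        return "overtrust"   # exaggerated expectations -> candidate for the valley
    if perceived < actual - tol:
        return "distrust"    # leads to disuse rather than to the valley
    return "calibrated"

def is_complacent(monitoring_rate: float, required_rate: float) -> bool:
    """Behavioral component: monitoring less often than the system's actual
    capacities would require (cf. the description after Bahner 2008)."""
    return monitoring_rate < required_rate

assert trust_calibration(0.95, 0.80) == "overtrust"
assert trust_calibration(0.80, 0.80) == "calibrated"
assert is_complacent(monitoring_rate=0.1, required_rate=0.5)
```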
With regard to that phenomenon, there seems to be an additional irony in the
progress of automation: At the beginning of a technological development process,
an automated system still has various errors that an operator can react to by means
of an increased readiness for take-over. Due to the technical system’s increasing
availability over time, the user’s first experience with a system error or failure will
be postponed in time. As such, an undue confidence will build up. Under these
conditions, the same error that could be compensated beforehand will have a higher
impact on road safety and user expectations. Likewise, the loss of trust will be
stronger the harder a mistake is to compensate and the more costs it provokes
(Bahner 2008), as well as the bigger it is and the harder it is to predict (Lee and
See 2004).
Choi and Ji (2015) provide a more holistic model of trust in adopting an
autonomous vehicle: Within their approach, they combined different relevant
concepts (e.g., trust, perceived risk and personality traits) within one model and
examined it empirically. They found that the concepts of trust (defined
via system transparency, technical competence and situation management) and
perceived usefulness have major influence on the behavioral intention to use a
system. Other factors such as perceived risk and ease of use as well as locus of
control and sensation seeking only have minor or no influence at all. According to
these results, it might be assumed that exaggerated trust and perceived usefulness
lead the naive driver into the uncanny and unsafe valley.
4 A Sketch of the Design Space: The Solution’s Dimensions
for Safeguarding an Unsafe Valley
Justified by the successful implementation of automated systems such as in
Goodrich et al. (2006), Hoeger et al. (2012) or Altendorf et al. (2015), the
assumption is that not automation or higher automation per se is unsafe, but that
there are unsafe regions around safe automation designs, combinations of different
assistance and automation levels, and transitions between levels or modes. How can the
unsafe regions be safeguarded in order to utilize the safe regions? Decisive
dimensions that form the design space of safeguards could be:
a. Abilities of the human and automation: The human capability might depend on a
selection process e.g. in domains like aviation, or on the distributions of a
general population e.g. in the driving domain. The abilities of the automation
depend on their interplay with the environment, and might be structured
according to normal operations, system limits and system failure. Increasingly
important will be the (meta-)ability of the automation to describe its own ability to
sense and act, e.g. in case of sensor degradation due to changes in the environment like bad weather.
b. The distribution of tasks, authority, responsibility and a minimum of autonomy
for both the automation and the human, as described e.g. by Flemisch et al.
c. Combining (a) and (b), the control-distribution of the corresponding automa-
tion-level, which interact with the human’s involvement, and which could be
organized in clear modes (Figs. 1 and 5). It is an open research and engineering
question as to how many and which modes are needed and/or wanted at all,
but there are clear hints that too many modes can lead to mode confusion,
especially in time-critical situations. It is especially not clear whether partially
and conditionally automated levels are needed at all, even if there are hints that a
well-designed level of partial automation might improve the take-over ability
and might also be fun. Another open research and engineering question is how
many different modes can be differentiated and operated safely, and what
potential migration paths could look like in order to ensure upwards and downwards
compatibility.
Another dimension of the design space are the transitions between the
modes, which can be initiated either by the human (Fig. 5, red) or by the automation
(Fig. 5, blue). It is an open research and development question as to how the
transitions are balanced and secured against false and failed transitions.

Fig. 5 Uncanny Valleys: modes and transitions

A safeguard,
e.g., for a transition from the right side of the automation scale, crossing the valley
towards the left side of the scale, could be an interlocked transition, where
control is only handed over to the operator when it is really clear that the operator has
taken over control. This interlocked transition was successfully implemented and
tested for cars and trucks in the HAVEit project. This safeguard works best if the
right rim is also secured with a transition to a minimum risk state, e.g., via a
minimum risk maneuver, as is described in Hoeger et al. (2011).
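The interlocked transition can be sketched as a small state machine: control is released only once the operator's take-over is confirmed; if confirmation does not arrive in time, the system escalates to a minimum risk maneuver. The confirmation signals and the timeout are illustrative assumptions, not the HAVEit implementation:

```python
class InterlockedHandover:
    """Hand control to the operator only when the take-over is confirmed;
    otherwise stay automated or escalate to a minimum risk maneuver."""

    def __init__(self, timeout_s: float = 10.0):
        self.timeout_s = timeout_s
        self.mode = "highly_automated"

    def update(self, hands_on_wheel: bool, eyes_on_road: bool, elapsed_s: float) -> str:
        if self.mode != "highly_automated":
            return self.mode                  # transition already resolved
        if hands_on_wheel and eyes_on_road:
            self.mode = "manual"              # interlock satisfied: hand over
        elif elapsed_s >= self.timeout_s:
            self.mode = "minimum_risk"        # secure the right rim of the valley
        return self.mode                      # else: keep automation, keep prompting

h = InterlockedHandover()
assert h.update(False, False, elapsed_s=2.0) == "highly_automated"  # still prompting
assert h.update(True, True, elapsed_s=5.0) == "manual"              # confirmed take-over
```

The design point is that the automation never simply drops control: it either receives positive confirmation or actively moves to a minimum risk state.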
A safeguarding measure for the left rim of the unsafe valley can be, for
example, to detect an insufficient involvement of the human by monitoring
attention and to react accordingly, e.g. through prompts, as described in
Rauch et al. (2010), Schwalm et al. (2015) and Schwalm and Ladwig (2015).
Another safeguard on the left rim of the valley is the communication of the
ability of the automation, as shown by Beller et al. (2013) based on Heesen et al.
(2010) as a concept to communicate an uncertainty value of the automation.
A similar direction of safeguarding against falling into an unsafe valley is
provided by different authors who promote a so-called “trust management”: First,
systems could already be adapted within the design phase (e.g., a system’s
transparency could be highlighted, see Bahner 2008). Alternatively, one could try to
improve users’ system perception. Last, system capabilities and limits should be
communicated more overtly (cf. Muir 1994). In this context, Payre et al. (2016)
postulate that intensive practice is required in order to bridge the uncanny valley.
According to them, outlining the system’s workings and its boundaries is
indispensable in order to avoid safety-critical automation effects.
An additional concept that could help prevent people from falling into the unsafe
valley could be nudging, the art of promoting certain behaviors through small
changes in the environment, which nevertheless leave the individual to decide for
herself (e.g. Thaler and Sunstein 2014). Nudging could be used for influencing
humans to avoid the rims of the unsafe valley, for example through promoting more
involvement (Flemisch et al. 2016b). A new concept which links nudging even
more strongly to self-determination is currently being developed at the Institute for
Industrial Engineering and Ergonomics at RWTH Aachen University.
Putting those safeguards together, a holistic picture or concept of human-machine
resilience should be derived, describing how the human-machine system acts in
normal operations, how it reacts to disturbances that try to push the system to its
limits, and how it reacts to system failure. Figure 6 shows such transitions between
normal operations, system limits and system failure. The upper part of Fig. 6 shows
a degradation of the machine that might result in a situation with a control deficit,
where the human might be somewhat prepared (upper part) or not sufficiently
prepared (middle). The lower part of Fig. 6 shows a degradation of the human that might
result in a complete dropout of the human, where the machine might only be able to
handle the situation for a limited time (lower part). A minimum risk maneuver that
hopefully results in a state with a minimum risk or maximal possible safety might
be the last resort in order to keep the human-machine system safe. From a minimum
risk state, the human might be able to take over again and return to normal operations if possible.
The benefit of thinking in a layered approach of normal operations, system limits
and failure could be in the flexibility to stabilize the human-machine system at the
limits and, instead of going into a complete, unrecoverable failure, recover or
gracefully degrade from system limits and failures.
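The layered idea of Fig. 6 can be sketched as a small transition table over normal operations, system limits and system failure; the state and event names are illustrative, not from the chapter:

```python
# Illustrative transition table for the layered resilience concept.
TRANSITIONS = {
    ("normal", "disturbance"):       "system_limit",
    ("system_limit", "stabilized"):  "normal",         # recovery at the limit
    ("system_limit", "degraded"):    "normal",         # graceful degradation (reduced function)
    ("system_limit", "not_handled"): "system_failure",
    ("system_failure", "mrm_done"):  "minimum_risk",   # minimum risk maneuver as last resort
    ("minimum_risk", "human_ready"): "normal",         # human takes over again
}

def step(state: str, event: str) -> str:
    """Advance the human-machine system; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

assert step("normal", "disturbance") == "system_limit"
assert step("system_limit", "stabilized") == "normal"          # recovered instead of failing
assert step("system_failure", "mrm_done") == "minimum_risk"    # last resort stays reachable
```

The benefit of making the layers explicit is that every path out of a disturbance is enumerable, so "complete, unrecoverable failure" is visibly absent from the design.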
Putting those safeguards together systematically could result in a highly cooperative human-machine system. Such a system could be inspired by natural examples of cooperation in movement, for example the H(orse)-Metaphor (Flemisch et al. 2003), its generalization to shared and cooperative guidance and control, complemation (Schutte 1999) or cooperative automation (e.g. Flemisch et al. 2014; Bengler et al. 2014), and its instantiations H-Mode (e.g., Goodrich et al. 2006; Altendorf et al. 2015) or Conduct-by-Wire (Winner and Hakuli 2006). The key seems to be to combine partially and highly automated levels in such a way that, in the partially automated level, the operator remains involved and is prevented from losing situation and mode awareness, e.g. with continuous haptic feedback and attention monitoring, while in the highly automated level the human-machine system is secured so that it remains safe even in the extreme case that the operator cannot come back into the loop again.
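As a minimal, hypothetical illustration of such a combination of safeguards, the following sketch escalates a warning in partial automation as soon as the operator disengages, and falls back to a minimum risk maneuver in high automation when a takeover can no longer be expected. All names, signals and thresholds are assumptions of this sketch, not a description of any existing system:

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    """Illustrative operator signals, as they might be estimated
    from attention monitoring and continuous haptic coupling."""
    attentive: bool
    hands_on_wheel: bool

def safeguard(level, operator, takeover_time_left_s):
    """Combine safeguards per automation level: in partial automation
    keep the operator in the loop; in high automation stay safe even
    if the operator cannot come back into the loop."""
    if level == "partial":
        if not (operator.attentive and operator.hands_on_wheel):
            # re-involve the operator, e.g. haptic/attention escalation
            return "escalate_warning"
        return "continue"
    if level == "high":
        if takeover_time_left_s <= 0 and not operator.attentive:
            # secured fallback: no takeover expected anymore
            return "minimum_risk_maneuver"
        return "continue"
    raise ValueError(f"unknown automation level: {level}")
```

For example, `safeguard("partial", OperatorState(False, True), 10.0)` would escalate a warning, while the same inattentive operator in high automation only triggers the fallback once the takeover budget is exhausted.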
Standard operaon System limit System failure
Fig. 6 Resilience through recovery or graceful degradation between normal operations, system
limits or system failure
330 F. Flemisch et al.
5 Outlook: Balancing Risks and Chances of Assistance
and Automation by Securing the Unsafe Valley
Although the causative reasons for the emergence of such a valley might not be fully understood yet, it becomes increasingly clear that there is at least one unsafe valley on the scale of assistance and automation, with clear applications at least in air and ground vehicle automation, and a good chance that the concept might also be valuable in other domains. It is undoubtedly necessary to secure the boundaries of this uncanny and unsafe valley.
There is justified hope that in most cases the unsafe valley can be well structured and therefore comparably well secured. We should nevertheless be aware that there might be cases in which the structure of the unsafe valley is more complex, more like a mountain landscape. In those cases the unsafe valley(s) should first be reasonably mapped before the boundaries are secured and viable bridges can be built over them.
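Such a mapping could start as simply as comparing perceived and actual safety along the scale of assistance and automation: levels where operators perceive more safety than the system actually delivers are candidate regions of the unsafe valley. The following sketch flags such regions; the level names and all numbers are purely illustrative assumptions, not measured data:

```python
def unsafe_valleys(levels, perceived_safety, actual_safety):
    """Flag levels on the assistance/automation scale where perceived
    safety exceeds actual safety, i.e. where overtrust could lead to
    a loss of safety (candidate regions of the unsafe valley)."""
    return [lvl
            for lvl, p, a in zip(levels, perceived_safety, actual_safety)
            if p > a]

# Toy landscape (illustrative): a dip in actual safety between
# partially and conditionally automated levels that perception misses.
levels    = ["manual", "assisted", "partial", "conditional", "high", "full"]
perceived = [0.70, 0.80, 0.85, 0.90, 0.95, 0.97]
actual    = [0.70, 0.82, 0.75, 0.72, 0.96, 0.97]
```

With these toy numbers, `unsafe_valleys(levels, perceived, actual)` flags the partially and conditionally automated levels, mirroring the valley between partial and high automation discussed in this chapter.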
In order to increase or maintain safety, and to harvest the many chances of automation, the risks of automation have to be controlled. The systematic mapping and development of safeguarding measures, down to safe combinations, will require interdisciplinary research and development, but will hopefully prevent naive and perhaps too trustful operators, system designers and engineers from falling into the unsafe valley, sometimes even an abyss, of automation.
Acknowledgments Thanks to the initial supporters at NASA, as well as the DFG and its referees for the support in the H-Mode projects and in the research program “Kooperativ Interagierende Fahrzeuge”, and the EU for the support in the projects HAVEit and InteractIVe. Thanks also to the colleagues at DLR, TU Darmstadt, TU München and RWTH Aachen for the rich discussions.

References
Altendorf E, Baltzer M, Heesen M, Kienle M, Weißgerber T, Flemisch F (2015) H-Mode, a
Haptic-multimodal interaction concept for cooperative guidance and control of partially and
highly automated vehicles. In: Winner H et al (eds) Handbook of driver assistance systems.
Springer, Cham
Bahner JE (2008) Übersteigertes Vertrauen in Automation: der Einfluss von Fehlererfahrungen auf Complacency und Automation Bias. Dissertation, TU Berlin
Beller J, Heesen M, Vollrath M (2013) Improving the driver–automation interaction: an approach using automation uncertainty. Hum Factors 55(6):1130–1141
Bengler K, Flemisch F (2011) Von H-Mode zur kooperativen Fahrzeugführung: ergonomische Fragestellungen. In: 5. Darmstädter Kolloquium: kooperativ oder autonom?
Bengler K, Dietmayer K, Farber B, Maurer M, Stiller C, Winner H (2014) Three decades of driver
assistance systems: review and future perspectives. IEEE Intell Transport Syst Mag 6(4):6–22
Billings CE (1997) Aviation automation: the search for a human centered approach. Lawrence
Erlbaum Associates, Mahwah, NJ
Choi JK, Ji YG (2015) Investigating the importance of trust on adopting an autonomous vehicle.
Int J Hum Comput Interact 31:692–702
Damböck D (2013) Automationseffekte im Fahrzeug – von der Reaktion zur Übernahme. Dissertation, TU München
Donges E (2015) Fahrerverhaltensmodelle. In: Handbuch Fahrerassistenzsysteme. Springer
Fachmedien, Wiesbaden, pp 17–26
Eckstein L (2016) Personal communication
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37
Bureau d’Enquêtes et d’Analyses (BEA) (2012) Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro–Paris. BEA, Paris
Flemisch FO, Onken R (1998) The cognitive assistant system and its contribution to effective
man/machine interaction. NATO RTO MP-3, Monterey, CA
Flemisch FO, Adams CA, Conway SR, Goodrich KH, Palmer MT, Schutte PC (2003) The
H-metaphor as a guideline for vehicle automation and interaction (No. NASA/TM—2003-
212672). NASA, Langley Research Center, Hampton
Flemisch F, Kelsch J, Schieben A, Schindler J (2006) Stücke des Puzzles hochautomatisiertes Fahren: H-Metapher und H-Mode. In: 4. Workshop Fahrerassistenzsysteme; L
Flemisch F, Kelsch J, Löper C, Schieben A, Schindler J (2008) Automation spectrum, inner/outer
compatibility and other potentially useful human factors concepts for assistance and automa-
tion. In: de Waard D, Flemisch FO, Lorenz B, Oberheid H, Brookhuis KA (eds) Human factors
for assistance and automation. Shaker, Maastricht, pp 1–16
Flemisch F, Schieben A, Temme G, Rauch N, Heesen M (2009) HAVEit Public Deliverable D33.2
“Preliminary concept on optimum task repartition for HAVEit systems”, Brussels
Flemisch F, Schieben A, Strauss M, Lüke S, Heyden A (2011) Design of human-machine
interfaces for highly automated vehicles in the EU-project HAVEit. In: Proceedings of 14th
international conference on human-computer interaction
Flemisch F, Heesen M, Hesse T, Kelsch J, Schieben A, Beller J (2012) Towards a dynamic balance
between humans and automation: authority, ability, responsibility and control in shared and
cooperative control situations. Int J Cognit Tech Work 14(1):3–18
Flemisch F, Bengler K, Bubb H, Winner H, Bruder R (2014) Towards cooperative guidance and
control of highly automated vehicles: H-mode and conduct-by-wire. Ergonomics Special Issue
Beyond Human-Centred Automation 57(3). Online 24.2.2014
Flemisch F, Schwalm M, Deml B (2015a) Systemergonomie kooperativ interagierende Fahrzeuge.
Projektantrag an die DFG
Flemisch F, Winner H, Bruder R, Bengler K (2015b) Cooperative guidance, control and automation. In: Winner H et al (eds) Handbook of driver assistance systems. Springer, Cham
Flemisch F, Altendorf E, Baltzer M, Rudolph C, Lopez D, Voß G, Schwalm M (2016a) Arbeiten in komplexen Mensch-Automations-Systemen: das Unheimliche und unsichere Tal (Uncanny Valley) der Automation am Beispiel der Fahrzeugautomatisierung. In: 62. GfA-Frühjahrskongress “Arbeit in komplexen Systemen – digital, vernetzt, human?!”, Aachen
Flemisch F, Altendorf E, Weßel G, Canpolat Y (2016b) Personal communication
Gasser TM, Arzt C, Ayoubi M, Bartels A, Buerkle L, Eier J, Flemisch F, Haecker D, Hesse T, Huber W, Lotz C, Maurer M, Ruth-Schumacher S, Schwarz J, Vogt W (2012a) Rechtsfolgen zunehmender Fahrzeugautomatisierung – gemeinsamer Schlussbericht der Projektgruppe. Fahrzeugtechnik F 83, Bundesanstalt für Straßenwesen (BASt)
Gasser TM et al (2012b) Rechtsfolgen zunehmender Fahrzeugautomatisierung – gemeinsamer Schlussbericht der Projektgruppe. Bundesanstalt für Straßenwesen (BASt), F 83
Goodrich K, Flemisch F, Schutte P, Williams R (2006) A design and interaction concept for
aircraft with variable autonomy: application of the H-Mode. In: Digital Avionics Systems
Conference, USA
Heesen M, Kelsch J, Löper C, Flemisch F (2010) Haptisch-multimodale Interaktion für hochautomatisierte, kooperative Fahrzeugführung bei Fahrstreifenwechsel-, Brems- und Ausweichmanövern. In: Gesamtzentrum für Verkehr Braunschweig (Hrsg.) Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel (AAET), Braunschweig
Herzberger ND, Voß GMI, Schwalm M (2016) Personal communication
Hoeger R, Amditis A, Kunert M, Hoess A, Flemisch F, Krueger H-P, Bartels A, Beutner A (2008)
Highly automated vehicles for intelligent transport: Haveit approach. ITS World Congress,
New York
Hoeger R, Zeng H, Hoess A, Kranz T, Boverie S, Strauss M et al (2011) Final report, Deliverable
D61. 1. Highly automated vehicles for intelligent transport (HAVEit). 7th Framework
Hoeger R, Wiethof M, Rheker T (2012) Complexity measures of traffic scenarios: psychological
aspects and practical applications. In: International conference on driver behaviour and
training 2011, Paris
Hoffman RR, Johnson M, Bradshaw JM, Underbrink A (2013) Trust in automation. IEEE Intell
Syst 28(1):84–88
Hollnagel E (2007) Keynote zur 7. Berliner Werkstatt “Prospektive Gestaltung von Mensch-
Technik-Interaktion” 2007, Berlin
Inagaki T, Itoh M (2013) Human’s overtrust in and overreliance on advanced driver assistance
systems: a theoretical framework. Int J Veh Technol 2013:1–8
Kuz S, Bützler J, Schlick CM (2015) Anthropomorphic design of robotic arm trajectories in assembly cells. J Occup Ergon 12(3):73–82
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46
Ma R, Kaber DB (2005) Situation awareness and workload in driving while using adaptive cruise
control and a cell phone. Int J Ind Ergon 35(10):939–953
Manzey D, Bahner JE (2005) Vertrauen in Automation als Aspekt der Verlässlichkeit von Mensch-Maschine-Systemen. In: Beiträge zur Mensch-Maschine-Systemtechnik aus Forschung und Praxis. Festschrift für Klaus-Peter Timpe, pp 93–109
Maurer M (2016) Personal communication
Mayer MP (2012) Entwicklung eines kognitionsergonomischen Konzepts und eines Simulationssystems für die robotergestützte Montage. Dissertation. Shaker, Aachen
Merat N, Jamson AH, Lai FC, Carsten O (2012) Highly automated driving, secondary task
performance, and driver state. Hum Factors 54(5):762–771
Mori M (1970) The uncanny valley. Energy 7(4):33–35 (in Japanese)
Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot
Autom Mag 19(2):98–100
Mosier KL, Skitka LJ (1996) Human decision-makers and automated decision aids: made for each
other? In: Parasuraman R, Mouloua M (eds) Automation and human performance: theory and
applications. Lawrence Erlbaum Associates, Mahwah, NJ, pp 201–220
Muir BM (1994) Trust in automation: Part I. Theoretical issues in the study of trust and human
intervention in automated systems. Ergonomics 37(11):1905–1922. doi:10.1080/
National Transportation Safety Board (2010) “Loss of Control on Approach, Colgan Air, Inc.
Operating as Continental Connection Flight 3407 Bombardier DHC-8-400, N200WQ,
Clarence Center, New York, February 12, 2009”. NTSB/AAR-10/01. National Transportation
Safety Board, Washington, DC
Norman DA (1990) The problem with automation. Philos Trans R Soc Lond B 327:585–593
Payre W, Cestac J, Delhomme P (2016) Fully automated driving: impact of trust and practice on
manual control recovery. Hum Factors 58(2):229–241
Petermann I, Schlag B (2009) Auswirkungen der Synthese von Assistenz und Automation auf
das Fahrer-Fahrzeug System. Paper presented at the 11. Braunschweiger Symposium
Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel (AAET)
2011. Braunschweig
Rauch N, Kaussner A, Krueger H-P, Boverie S, Flemisch F (2010) Measures and countermeasures
for impaired driver’s state within highly automated driving. Transport Research Arena,
SAE J 3016:2014 Taxonomy and definitions for terms related to on-road motor vehicle automated
driving systems. Society of Automotive Engineers
Schieben A, Flemisch F (2008) Who is in control? Exploration of transitions of control between
driver and an eLane vehicle automation. VDI/VW Tagung Fahrer im 21. Jahrhundert 2008.
Schieben A, Damböck D, Kelsch J, Rausch H, Flemisch F (2008) Haptisches Feedback im Spektrum von Fahrerassistenz und Automation. In: 3. Tagung Aktive Sicherheit durch Fahrerassistenz, Garching
Schutte PC (1999) Complemation: an alternative to automation. J Inform Tech Impact 1
Schutte P, Goodrich K, Williams R (2016) Synergistic allocation of flight expertise on the flight
deck (SAFEdeck): a design concept to combat mode confusion, complacency, and skill loss in
the flight deck. In: Stanton NA, Landry S, Di Bucchianico G, Vallicelli A (eds) Advances in
human aspects of transportation. Springer, Berlin, pp 899–911
Schwalm M, Ladwig S (2015) How do we solve demanding situations—a discussion on driver
skills and abilities. In: 57th Conference of experimental psychologists 2015, Hildesheim
Schwalm M, Voß GMI, Ladwig S (2015) Inverting traditional views on human task-processing
behavior by focusing on abilities instead of disabilities–a discussion on the functional situation
management of drivers to solve demanding situations. In: Engineering psychology and cogni-
tive ergonomics. Springer, Cham, pp 286–296
Sheridan TB (ed) (1976) Monitoring behavior and supervisory control. Springer, Berlin
Sheridan TB (1980) Computer control and human alienation. Technol Rev 83(1):65–73
Thaler RH, Sunstein CR, Balz JP (2014) Choice architecture. In: The behavioral foundations of
public policy. Princeton University Press, Princeton, NJ
Voß GMI, Schwalm M (2015) 1. Kongress der Fachgruppe Verkehrspsychologie 2015.
Weßel G, Altendorf E, Flemisch F (2016) Self-induced nudging in conditionally and highly
automated driving. Working Paper, Aachen
Winner H, Hakuli S (2006) Conduct-by-wire–following a new paradigm for driving into the
future. In: Proceedings of FISITA world automotive congress, Oct 2006, vol 22, p 27
... The presence of partially autonomous vehicles on the streets is starting to affect the traditional driver-vehicle interaction patterns. In fact, the addition of automation leads to a significant behavioral change in the way humans drive; interacting with partially automated systems disrupts the classic traffic dynamics, and it can cause unsafe interactions difficult to predict (Flemisch et al., 2017). Hence, the research community must place at the top of its agenda the issue of cognitive interaction between the driver and the automated system. ...
... transition from Levels 2-3 to Levels 4-5 is proceeding slowly, forcing human drivers to interact with partially automated systems-often without being aware that other vehicles are controlled by artificial agents. These interactions disrupt the classic traffic dynamics and can produce unsafe scenarios (e.g., disengagements) that are difficult to predict (Flemisch et al., 2017). The "distributed cognition" account of vehicle intelligence approaches the problem of driver-vehicle interaction patterns differently. ...
Full-text available
This paper focuses on the collaboration between human drivers and intelligent vehicles. We propose a collaboration mechanism grounded on the concept of distributed cognition. With distributed cognition, intelligence does not lie just in the single entity but also in the interaction with the other cognitive components in a system. We apply this idea to vehicle intelligence, proposing a system distributed into two cognitive entities—the human and the autonomous agent—that together contribute to drive the vehicle. This account of vehicle intelligence differs from the mainstream research effort on highly autonomous cars. The proposed mechanism follows one of the paradigm derived from distributed cognition, the rider-horse metaphor: just like the rider communicates their intention to the horse through the reins, the human influences the agent using the pedals and the steering wheel. We use a driving simulator to demonstrate the collaboration in action, showing how the human can communicate and interact with the agent in various ways with safe outcomes.
... But this, as we argue, cannot be granted if ADS are not under some relevant form of human control and there are gaps in responsibility. When this happens, the relevant human actors may not be sufficiently able, motivated and willing to prevent undesired outcomes (Elish, 2019;Flemisch et al., 2017). In addition, we consider human responsibility also to be important in order to prevent legitimate discontent among victims of accidents and distrust towards technology more generally (Danaher, 2016). ...
... Mackworth, 1948). This is where the so-called "unsafe valley of automation" begins (Flemisch et al., 2017). At a higher level of automation an operator controlling the vehicle remotely might ultimately have control over the vehicle instead of the driver, which might alleviate the driver from some untaught tasks, but could possibly also introduce increasingly complex novel tasks. ...
Full-text available
The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” lead by a team of engineers, philosophers, and psychologists at Delft University of the Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not hardware and software and their algorithms, should remain ultimately—though not necessarily directly—in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose an Automated Driving System to be under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking), and that any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases in combination with the definition of core components to expose deficiencies in traceability, thereby avoiding so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots and institutional embedding
... On the other hand, few studies have analysed the distinction between the different anatomic parts of robots and their implication for cognitive ergonomics and robot design [13]. Flemisch et al. [14] investigated the effect of the Uncanny Valley Theory and, in particular, of the high level of anthropomorphism, in the design phase of industrial devices. Their study highlighted the consequent potential risks for the user experience, such as feelings of estrangement and detachment from the robot. ...
... The results could also be linked to the first ergonomic law of Design for Robots proposed by Sosa et al. [14]. The law states that robotic devices and environments shall never present an inconvenience, threat, or annoyance to human users, particularly in critical contexts for the safety of the user such as the industrial one. ...
Conference Paper
This study aims to provide an original investigation of the morphological features and the anthropomorphic characteristics of industrial robots. In the introduction, we summarise some empirical findings on the topic, drawing to the Uncanny Valley hypothesis and other theoretical frameworks. Subsequently, we conduct an argumentative literature review to elicit the connection between industrial use and morphological features of robots, particularly in the European and Italian robotic context. We hypothesise that non-industrial robots are distinguishable from the other types of robots basing on their degree of Human Likeness and that facial features are crucial in determining such difference, whilst hands and fingers would report a higher level of HL in industrial robots. We tested our hypothesis using the open-source ABOT database, which aggregates descriptions of robots for industrial and non-industrial use. We found support for our hypothesis (p=.04, F=2.88). Ultimately, we offer some considerations about the physical features associated with the use of robots in the industrial context and their functionality.
... Autonomous vehicles extend previous AI-based driving assistance technologies to transport solutions with greater autonomy, endowed with the important additional capabilities of making and enacting decisions without substantial human involvement (Hulse Lynn et al., 2018). However, even partial autonomy in AVs can elicit negative emotional responses from customers (Mintel, 2019), while fully autonomous vehicles may trigger emotional phenomena akin to the uncanny valley effect observed in human-robot interactions (Flemisch et al. 2017). Nonetheless, there has been limited focus on emotional aspects relating to AV services in prior ...
Full-text available
Advances in artificial intelligence (AI) are increasingly enabling firms to develop services that utilize autonomous vehicles (AVs). Yet, there are significant psychological barriers to adoption, and insights from extant literature are insufficient to understand customer emotions regarding AV services. To allow for a holistic exploration of customer perspectives, we synthesize multidisciplinary literature to develop the Customer Responses to Unmanned Intelligent-transport Services based on Emotions (CRUISE) framework, which lays the foundation for improved strategizing, targeting, and positioning of AV services. We subsequently provide empirical support for several propositions underpinning the CRUISE framework using representative multinational panel data ( N = 27,565) and an implicit association test ( N = 300). We discover four distinct customer segments based on their preferred degree of service autonomy and service risk. The segments also differ in terms of the valence and intensity of emotional responses to fully autonomous vehicle services. Additionally, exposure to positive information about AV services negatively correlates with the likelihood of membership in the two most resistant segments. Our contribution to service research is chiefly twofold; we provide: 1) a formal treatise of AV services, emphasizing their uniqueness and breadth of application, and 2) empirically validated managerial directions for effective strategizing based on the CRUISE framework.
... Leaving aside the issue of driver's responsibility, which will be discussed in "Responsibility" section below, it is interesting to note that the riskiness of these handover situations was well-known, and this kind of accidents largely predicted by scientific studies. As reported in the discussion of Recommendation 2, handover scenarios require that the driver is given sufficient time to regain situational awareness (Flemisch et al., 2017), and in one simulator study none of the participants was able to regain control of the car within the 2 seconds they were given to react to a sudden failure in proximity of a curve (Flemisch et al., 2008). ...
Full-text available
The paper has two goals. The first is presenting the main results of the recent report Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility written by the Horizon 2020 European Commission Expert Group to advise on specific ethical issues raised by driverless mobility, of which the author of this paper has been member and rapporteur. The second is presenting some broader ethical and philosophical implications of these recommendations, and using these to contribute to the establishment of Ethics of Transportation as an independent branch of applied ethics. The recent debate on the ethics of Connected and Automated Vehicles (CAVs) presents a paradox and an opportunity. The paradox is the presence of a flourishing debate on the ethics of one very specific transportation technology without ethics of transportation being in itself a well-established academic discipline. The opportunity is that now that a spotlight has been switched on the ethical dimensions of CAVs it may be easier to establish a broader debate on ethics of transportation. While the 20 recommendations of the EU report are grouped in three macro-areas: road safety, data ethics, and responsibility, in this paper they will be grouped according to eight philosophical themes: Responsible Innovation, road justice, road safety, freedom, human control, privacy, data fairness, responsibility. These are proposed as the first topics for a new ethics of transportation.
... Systems that parallel human-like characteristics and personas (i.e., humanoid robots, intelligent agents such as Alexa) tend to have more trust than systems designed with the same capacities and purpose, but with non-anthropomorphized characteristics (Hancock et al., 2011;de Visser et al., 2017;Calhoun et al., 2019). However, there is a point where extreme similarity between a technology and human can result in a significant drop in trust levels, often referred to as the uncanny valley (Flemisch et al., 2017). Because of this interaction between trust and human-like characteristics in technology, there are observable differences in trusting behavior, founded on emotional connection to the system rather than system capability (Jensen et al., 2020). ...
Full-text available
Investigations into physiological or neurological correlates of trust has increased in popularity due to the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, measurements of trustworthiness or pain points of technology, or for human-in-the-loop cyber intrusion detection. Understanding the limitations and generalizability of the physiological responses between technology domains is important as the usefulness and relevance of results is impacted by fundamental characteristics of the technology domains, corresponding use cases, and socially acceptable behaviors of the technologies. While investigations into the neural correlates of trust in automation has grown in popularity, there is limited understanding of the neural correlates of trust, where the vast majority of current investigations are in cyber or decision aid technologies. Thus, the relevance of these correlates as a deployable measure for other domains and the robustness of the measures to varying use cases is unknown. As such, this manuscript discusses the current-state-of-knowledge in trust perceptions, factors that influence trust, and corresponding neural correlates of trust as generalizable between domains.
Durch die bedeutenden technischen Fortschritte der letzten Jahre bei der Entwicklung von immer leistungsfähigeren Assistenzsystemen im Fahrkontext, die inzwischen sogar zu der ersten Zulassung eines SAE Level 3 Systems geführt haben, ist die (Teil-) Automation mittlerweile im allgemeinen Straßenverkehr angekommen. Diese Systeme, die noch vor wenigen Jahren als Science-Fiction gegolten hätten, sind aktuell jedoch noch weit davon entfernt den Menschen bei der Fahrzeugführung überflüssig zu machen. So ist die Fahrperson bei teilautomatisierten Fahrsystemen (ab SAE Level 1) stets für die Fahraufgabe verantwortlich und muss diese auch kontinuierlich überwachen. Aber auch automatisierte Fahrsysteme (ab SAE Level 3) benötigen die Fahrperson zunächst noch als Rückfallebene und werden auch Stand heute in naher Zukunft noch nicht alle Anwendungsfälle selbständig ausführen können. Gerade jedoch, wenn der Mensch als Rückfallebene dienen soll, ist es unerlässlich zu wissen, ob die Fahrperson auch in der Lage ist, die Fahraufgabe wieder sicher zu übernehmen. Dieser Herausforderung widmet sich die vorliegende Dissertation.
As the first automated driving functions are now finding their way into serial production vehicles, the focus of research and development has shifted from purely automated capabilities to cooperative systems, i.e. cooperation between vehicles, and vehicle automation with drivers. Especially in partially and highly automated cooperative driving the driver should be able to take over the driving task or adapt the driving behavior. This paper presents the pattern approach to cooperation as a method to recognize and solve reoccurring problems. As an example, the pattern approach is applied to the use case of a takeover request on a highway. The concept of Confidence Horizons, which balance the capabilities of the driver and the automation based on cooperative interaction patterns, is introduced. To estimate the human capabilities for this Confidence Horizon, a Diagnostic Takeover Request is used, in which the automation analyzes the driver’s orientation reaction to a takeover request. This allows the early detection of potentially unsafe takeovers reducing possible transitions to a Minimum Risk Maneuver (MRM).
Human cyber-physical production systems (HCPS) - as an extension of cyber-physical production systems (CPPS) - focus on the human being in the system and the development of socio-technical systems. Humans and therefore anthropogenic behavior have to be integrated into the manufacturing system and its processes. Due to the shift towards customized products, CPPS require high reconfigurability, flexibility and individual manufacturing processes. As a result, the role of and requirements for humans are fundamentally changing, creating the necessity for new interfaces to interact with machines within reconfigurable manufacturing process. Scalable human machine interfaces (HMI) are needed that incorporate emerging technologies as well as allowing mobility for the operator while interacting with different machines. Therefore, new approaches for HMI in the context of CPPS and HCPS are needed, which are simultaneously sufficiently mobile, scalable, and modular as well as human centered. High mobility of HMIs can be ensured by using the 5G communication standard that enables wireless migration of computational resources to an edge server with high reliability, low latency, and high data rates. This paper develops, implements, and evaluates a 5G-based, framework for highly mobile, scalable HMI in CPPS by utilizing the new 5G communication technology. The capabilities of the framework and of the new communication standard are demonstrated and evaluated for a use case, where a brain-computer interface (BCI) is used to control a robot arm. For better accuracy, the BCI is supported by an eye tracker and visual feedback is received via an augmented reality environment, and all devices are embedded via 5G communication. In particular, the influence of 5G communication on the system performance is examined, evaluated, and discussed. For this purpose, experiments are designed and conducted with different network configurations.
Full-text available
Advances in AI are increasingly enabling firms to develop services that utilize autonomous vehicles (AVs). Yet, there are significant psychological barriers to adoption, and insights from extant literature are insufficient to understand customer emotions regarding AV services. To allow for a holistic exploration of customer perspectives, we synthesize multidisciplinary literature to develop the Customer Responses to Unmanned Intelligent-transport Services based on Emotions (CRUISE) framework, which lays the foundation for improved strategizing, targeting, and positioning of AV services. We subsequently provide empirical support for several propositions underpinning the CRUISE framework using representative multinational panel data (N = 27,565) and an implicit association test (N = 300). We discover four distinct customer segments based on their preferred degree of service autonomy and service risk. The segments also differ in terms of the valence and intensity of emotional responses to fully autonomous vehicle services. Additionally, exposure to positive information about AV services negatively correlates with the likelihood of membership in the two most resistant segments. Our contribution to service research is chiefly twofold; we provide: 1) a formal treatise of AV services, emphasizing their uniqueness and breadth of application, and 2) empirically validated managerial directions for effective strategizing based on the CRUISE framework.
Abstract: Digitalization and connectivity enable increasingly powerful assistance and automation systems, e.g. in aviation, in manufacturing, or for automobiles, which, however, can also bring disadvantages, e.g. in the interplay with humans. In robotics, a so-called uncanny valley is known, which describes that robots with a high, but not perfect, similarity and capability compared e.g. to humans are perceived as uncanny and unsafe. For vehicle automation, a comparable metaphorical design correlation is emerging, in which there is a zone between partially and highly automated levels of automation where misperceptions and losses of safety can occur. This contribution sketches this correlation, summarizes first confirmatory studies, and outlines the design space.
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation. Copyright © 2004, Human Factors and Ergonomics Society. All rights reserved.
In this contribution, assistance, active safety, and automation in motor vehicles are understood as parts of an overall development toward highly automated driving. As one possible piece of this puzzle, an intuitively understandable description of highly automated vehicles in the form of a design metaphor (H-metaphor), based on a natural role model, is sketched. As a further piece of the puzzle, the activities toward a consistent, haptic-multimodal interaction language for highly automated vehicles (H-Mode) are described. Further pieces of the overall puzzle are distributed across various manufacturers, research institutions, and authorities, but can only come to a meaningful result when integrated, and that result will not necessarily be the driverless vehicle.
With increasing technical possibilities in the area of assistance and automation, diverse challenges, risks, and chances arise in the design of assisted, partially and fully automated driving. One of the greatest challenges consists of integrating and offering a multitude of complex technical functions in such a way that the human driver intuitively understands them as a cohesive, cooperative system. A solution to this problem can be found in the H-Mode. It is inspired by the role model of horse and rider and offers an integrated haptic-multimodal user interface for all kinds of movement control. The H-Mode, as presented in this chapter, has been designed for ground vehicles and includes several comfort and security systems on three assistance and automation levels, which can be interchanged fluidly. © Springer International Publishing Switzerland 2016. All rights reserved.
The increasing reliability of automated systems has reduced many potential sources of error in human-machine interaction in recent years. At the same time, however, new risks have emerged: particularly with highly reliable automated systems, there is a danger of excessive trust in the system, with complacency and automation bias as possible consequences at the behavioral level. Complacency originates from the context of classical monitoring tasks. It denotes insufficient monitoring or verification of the automation, which can lead to critical system states being overlooked. In contrast, the concept of automation bias stems from the context of the use of decision-support systems. It comprises two different error types: while commission errors consist of an operator following an erroneous recommendation of an assistance system, omission errors manifest themselves in critical system states being overlooked if they are not indicated by the assistance system. Despite the close conceptual proximity of complacency and automation bias, the two phenomena have so far only been investigated empirically in isolation from each other. The central aim of the present work was to overcome methodological weaknesses of previous studies and to contribute to an empirically grounded clarification of the concepts of complacency and automation bias and of their relationship to each other. In addition, possible countermeasures in the form of automation failures during training were investigated. Two experimental studies (each N = 24) were conducted using a microworld as the experimental environment, in which participants were supported by an assistance system in detecting, diagnosing, and resolving malfunctions of a process control task.
The first study examined the influence of faulty diagnoses during training, while the second experiment focused on the influence of system failures. The results of these two studies permit the following conclusions: 1) Complacency, in the sense of insufficient monitoring, is a cause of omission errors and, in the sense of insufficient verification, a contributing factor in the emergence of commission errors. 2) Experiencing automation failures during training reduces the occurrence of complacency but does not prevent it entirely. 3) Automation failures act specifically: while faulty diagnoses during training lead to improved verification of the diagnosis function of the assistance system, monitoring of the alarm function remains unaffected. Conversely, failures of the assistance system during training later lead to increased monitoring of the alarm function, but not to improved verification of diagnoses. This must be taken into account when designing training measures.
This paper presents a new design and function allocation philosophy between pilots and automation that seeks to support the human in mitigating innate weaknesses (e.g., memory, vigilance) while enhancing their strengths (e.g., adaptability, resourcefulness). In this new allocation strategy, called Synergistic Allocation of Flight Expertise in the Flight Deck (SAFEdeck), the automation and the human provide complementary support and backup for each other. Automation is designed to be compliant with the practices of Crew Resource Management. The human takes a more active role in the normal operation of the aircraft without adversely increasing workload over the current automation paradigm. This designed involvement encourages the pilot to be engaged and ready to respond to unexpected situations. As such, the human may be less prone to error than the current automation paradigm.
The technological feasibility of more and more assistant systems and automation in vehicles leads to the necessity of a better integration and cooperation with the driver and with other traffic participants. This chapter describes an integrated cooperative guidance of vehicles including assisted, partially automated, and highly automated modes. Starting with the basic concepts and philosophy, the design space, parallel and serial aspects, the connections between abilities, authority, autonomy, control, and responsibility, vertical versus horizontal and centralized versus decentralized cooperation are discussed, before two follow-on chapters of H-Mode and Conduct-by-Wire describe instantiations of cooperative guidance and control.
BACKGROUND: Anthropomorphism is the attribution of human form or behavior to non-human agents. Its application in a robot increases occupational safety and user acceptance and reduces the mental effort needed to anticipate robot behavior. OBJECTIVE: The research question focuses on how the anthropomorphic trajectory and velocity profile of a virtual gantry robot affect the predictability of its behavior in a placement task. METHODS: To investigate the research question, we developed a virtual environment consisting of a robotized assembly cell. The robot was given human movements, acquired through the use of an infrared-based motion capture system. The experiment compared anthropomorphic and constant velocity profiles. The trajectories were based on human movements of the hand-arm system. The task of the participants was to predict the target position of the placing movement as accurately and quickly as possible. RESULTS: Results show that the anthropomorphic velocity profile leads to a significantly shorter prediction time (α = 0.05). Moreover, the error rate and the mental effort were significantly lower for the anthropomorphic velocity profile. Based on these findings, a speed-accuracy trade-off can be excluded. CONCLUSIONS: Participants were able to estimate and predict the target position of the presented movement significantly faster and more accurately when the robot was controlled by the human-like velocity profile.