A Literature Review of the Research on the Uncanny Valley
Jie Zhang1,2, Shuo Li3, Jing-Yu Zhang1,2, Feng Du1,2, Yue Qi1,2*, Xun Liu1,2
1 CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing,
China
2 Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
3 Department of Psychology, Nankai University, Tianjin, China
qiy@psych.ac.cn
Abstract. With the development of science and technology, the demands on robots are no longer
limited to their functions; users also pay attention to the emotional experience the products bring.
However, as a robot's appearance approaches human-likeness, it makes people uncomfortable, a
phenomenon called the Uncanny Valley (UV). In this paper, we systematically review the hypotheses
and internal mechanisms of the UV. We then focus on the methodological limitations of previous
studies, including terms, assessments, and materials. Finally, we summarize applications in
interaction design that avoid the uncanny valley and propose future directions.
Keywords: Uncanny Valley, Humanoid Robots, Human-Computer Interaction, Affective Design,
Human-likeness
1 Introduction
With the boom of computer technology and the development of related hardware, robots are used
ever more widely in human society and provide many conveniences to people's lives [1]. In the past
20 years, social robots have developed rapidly and have been used to interact with humans in many
settings, such as homes, hospitals, and shopping malls [1]. To improve human-robot interaction,
engineers have designed robots that closely resemble humans [2]. There is a positive relationship
between the human-likeness of robots and feelings of comfort with them. However, comfort dips
steeply, and eeriness is felt, when robots look almost but not entirely human, which is called the
"uncanny valley" [3].
The concept of the "uncanny valley" was first proposed by Mori in 1970 [4]. In his paper, he
envisioned people's reactions to robots that looked and acted almost human and offered some
examples to support his idea. He proposed that affinity for a robot increases as its appearance
becomes more humanlike, until people suddenly perceive the faces as eerie. However, as the robot's
human-likeness keeps increasing, the eeriness gives way to likeability again. This concept is useful
for designing robots and serves as a guide to improving human-robot interaction.
This paper systematically reviews the explanations and internal mechanisms of the uncanny valley,
the problems and deficiencies in existing research, and its practical applications in interaction design.
The paper has the following structure. In Section 2, we describe different explanations of the uncanny
valley. In Section 3, we present the defects of existing research, covering terms, assessments, and
materials. In Section 4, we summarize how the phenomenon is applied in robot design to avoid the
uncanny valley.
2 Explanations of the Uncanny Valley
Researchers have proposed a variety of explanations to account for the uncanny valley phenomenon
[2]. These hypotheses can be mainly divided into two categories. One category explains the phenome-
non from an evolutionary psychology perspective that the uncanny feeling comes from facial features
themselves, including the Threat Avoidance hypothesis [2, 3, 5, 6] and the Evolutionary Aesthetics
hypothesis [2, 5, 6]. The other category interprets the phenomenon based on cognitive conflicts, includ-
ing the Mind Perception hypothesis [1, 7], the Violation of Expectation hypothesis [1-3, 5], and the
Categorical Uncertainty hypothesis [1, 2, 8]. Most related empirical studies focus on the latter
category because cognitive responses are easy to quantify and manipulate; the evolutionary-
psychology hypotheses, by contrast, have received little empirical testing.
2.1 Explanations based on evolutionary psychology
Threat Avoidance hypothesis. Mori [3] first pointed out that the UV phenomenon “may be important
to our self-preservation". During evolution, disease and death have been the two main threats to
human beings. Thus, two explanations for the uncanny valley stem from threat avoidance. The first,
called pathogen avoidance, holds that when people perceive
the imperfections of humanoid robots, they will associate the defects with diseases [2]. Moreover, be-
cause of the high human-likeness, people may consider that humanoid robots are genetically close to
humans and are likely to transmit diseases to humans [2, 5, 6]. However, this hypothesis is only an
inference based on Rozin's theory of disgust and has not been tested directly [2, 5]. Another
explanation, named mortality salience, was proposed based on terror management theory. Hanson [9] indicated
that the flaws of humanoid robots, combined with a humanlike appearance, could remind us of
mortality. From this perspective, the uncanny feeling is anxiety about mortality and the fear of
death triggered by humanoid robots. People may be reminded of death and consider humanoid robots
as dead individuals who have come alive [2, 5]. However, only one study has tested the hypothesis
directly, finding that sensitivity to the vulnerability and impermanence of the physical body was
significantly correlated with eerie ratings of androids [10].
Evolutionary Aesthetics hypothesis. The hypothesis pays attention to the attractiveness of physical
features and regards the uncanny feeling as an aversion to unattractive individuals. By morphing the
images of abstract robots with realistic robots or real humans, Hanson [9] found that highly
attractive images were consistently rated low in eeriness. Attractiveness is judged based on specific
external characteristics that humans are sensitive to, such as bilateral symmetry, facial proportions, and
skin quality [6]. These traits are associated with health, fertility, and other aspects related to
reproduction, and we inherit the preference for them from ancestors who reproduced successfully
under selection pressure [2, 5, 6]. In short, aesthetic preferences are shaped by natural selection and
potentially determine how humanoid robots make us feel.
These hypotheses explain the uncanny valley from the perspective of evolutionary psychology.
Although they invoke different mechanisms, their essence is the same: self-preservation and
successful reproduction, the core concerns of evolutionary psychology. However, the empirical
studies supporting these hypotheses remain insufficient [2].
2.2 Explanations based on cognitive conflicts
Mind Perception hypothesis. Gray & Wegner [7] proposed that humanoid robots are uncanny because
they are so realistic that people may ascribe to them the capacity to feel and sense. However, this
capacity is considered a uniquely human characteristic, which is not expected to emerge in robots
[2, 7]. People are happy to have robots do work like humans, but not to have feelings like humans.
Violation of Expectation hypothesis. This hypothesis extends the mind perception hypothesis and
holds that humanoid robots whose appearance resembles that of humans elicit specific expectations.
For example, humanoid robots are expected to move or speak as smoothly as humans. However,
robots often violate these expectations: their movements may be mechanical, and their voices
synthetic [2, 5]. The mismatch between expectation and reality results in negative emotional
appraisal and avoidance behaviors, and leads to feelings of eeriness and coldness [1, 11].
Categorical Uncertainty hypothesis. This hypothesis emphasizes that the feeling of eeriness is caused
by an ambiguous category boundary [2, 5, 6]. There are many empirical studies on this hypothesis,
but the results are quite controversial. Some studies support Mori's uncanny valley, finding that the
most humanlike robots are still perceived as robots; this perception blurs the category boundary
between humans and machines to the greatest extent [12]. However, Ferrey, Burleigh, & Fenske [13]
employed human-robot and human-animal morphing images and found that the negative peak is not
always close to the human end (Line 1 in Fig. 1); perceptual ambiguity was maximal at the midpoint
of each continuum (Boundary 1 in Fig. 1). Furthermore, recent research found that the location of
the category boundary did not coincide with the classic uncanny valley either (Boundary 2 in Fig. 1),
and the negative peak was near the machine end (Line 2 in Fig. 1) [14].
These hypotheses interpret the uncanny valley based on cognitive conflicts. The conflict may exist
between deduction and stereotype, between expectation and reality, or between different categories.
Although there are many related empirical studies, since cognitive responses are easy to quantify
and manipulate, the explanation of the uncanny valley remains controversial.
3 Defects of Existing Research
At present, research on the uncanny valley involves computer science, psychology, materials
science, and other fields. Researchers have studied feelings of eeriness among various groups of
users [15, 16] and explored methods to improve the design of androids and computer-animated
characters [14, 17, 18]. However, there are problems in the existing studies that may lead to
inconsistent findings.
[Fig. 1 omitted: likability/affinity plotted against human likeness, showing the classic uncanny
valley curve, Lines 1 and 2, and category Boundaries 1 and 2.]
Fig. 1. The uncanny valley in different studies. The classic curve is Mori's proposal; Line 1 is from
Ferrey, Burleigh, & Fenske (2015) and Line 2 from Mathur, Reichling, & Lunardini, et al. (2020).
Boundaries 1 and 2 mark the category boundaries found in Ferrey et al.'s and Mathur et al.'s
studies, respectively.
3.1 Terms
Firstly, the absence of a clear definition of uncanny feelings may be a major cause of the
controversial findings [19-21], especially the inconsistency of translation [1]. Mori [4] used
shinwakan and "bukimi" to represent the feelings people have when facing different human replicas
(e.g., androids or artifacts), feelings that change with human-likeness [22]. The original Japanese
term "bukimi" was translated straightforwardly as eeriness. However, "shinwakan" was first
translated as familiarity, which was not equivalent and proved hard to define, partly because of its
two meanings in English: a sense of closeness, or a lack of novelty [22-25]. Thus, it is no surprise
that Mori's original terms have been stretched into various interpretations across numerous studies.
Realizing this, Mori et al. [3] revised the translation from familiarity to affinity, avoiding the
confusion with novelty or strangeness. Unfortunately, according to a recent literature review,
although affinity has been used in some research, it is still not adopted consistently (Table 1).
Moreover, the same term carries different connotations in different studies. For instance,
"likability" is interpreted as friendly and enjoyable [14, 26], or as an aesthetic or pleasant
appearance of the character [21]. Distinct instructions result in divergent comprehension.
One more reason for the dilemma may be that a single concept cannot cover the uncanny feeling.
Ho et al. [27] verified that the uncanny feeling includes several kinds of emotions, such as fear,
disgust, nervousness, dislike, and shock. Future research is encouraged to adopt a universal
definition of the original term "shinwakan", such as affinity [28], and to confirm its boundaries
and content.
Table 1. List of items used in different studies

Original Item | Item | Author & Year
Positive: Shinwakan | Acceptability | Hanson, Olney, & Prilliman et al., 2005
 | Affinity | Mori, MacDorman, & Kageki, 2012; Zibrek, Kokkinara, & McDonnell, 2018; Kätsyri, Gelder, & Takala, 2019, Study 2 & 3
 | Appeal | Hanson, 2005
 | Attractiveness | Ho & MacDorman, 2010; Burleigh, Schoenherr, & Lacroix, 2013, Study 1; Destephe, Zecca, & Hashimoto et al., 2014; Ho & MacDorman, 2017
 | Familiarity | Hanson, 2005; MacDorman & Ishiguro, 2006; MacDorman, 2006; Bartneck, Kanda, & Ishiguro et al., 2009; Cheetham, Wu, & Pauli et al., 2015, Study 2; Chattopadhyay & MacDorman, 2016; MacDorman & Chattopadhyay, 2017; Schwind, Floerke, & Ju et al., 2018; Pütten, Krämer, & Maderwald et al., 2019
 | Pleasantness/Pleasure | Seyama & Nagayama, 2007; Ho & MacDorman, 2010; Burleigh, Schoenherr, & Lacroix, 2013, Study 2
 | Likability | Bartneck, Kanda, & Ishiguro et al., 2007; Ferrey, Burleigh, & Fenske et al., 2015; Zlotowski, Sumioka, & Nishio et al., 2015; Mathur & Reichling, 2016; Kätsyri, Mäkäräinen, & Takala, 2017; Pütten, Krämer, & Maderwald et al., 2019; Mathur, Reichling, & Lunardini et al., 2020
 | Valence and Arousal | Cheetham, Suter, & Jäncke, 2011; Cheetham & Jancke, 2013; Cheetham, Wu, & Pauli et al., 2015, Study 1
 | Warmth | Ho & MacDorman, 2010; MacDorman & Entezari, 2015; Chattopadhyay & MacDorman, 2016; MacDorman & Chattopadhyay, 2016; Ho & MacDorman, 2017
Negative: Bukimi | Eeriness | Hanson, 2005; MacDorman, 2006; MacDorman & Ishiguro, 2006; Bartneck, Kanda, & Ishiguro et al., 2009; Ho & MacDorman, 2010; Burleigh, Schoenherr, & Lacroix, 2013; Destephe, Zecca, & Hashimoto et al., 2014; Strait & Scheutz, 2014; Zlotowski, Sumioka, & Nishio et al., 2015; Chattopadhyay & MacDorman, 2016; Koschate, Potter, & Bremner et al., 2016; MacDorman & Chattopadhyay, 2016; Kätsyri, Mäkäräinen, & Takala, 2017; MacDorman & Chattopadhyay, 2017; Strait, Floerke, & Ju et al., 2017; Buckingham, Parr, & Wood et al., 2019; Kätsyri, Gelder, & Takala, 2019, Study 1; Appel, Izydorczyk, & Weber et al., 2020
3.2 Assessments
Self-report questionnaires are widely used in previous studies. Gray & Wegner [7] used Likert
scales to collect participants' feelings of being uneasy, unnerved, and creeped out. Other scales have
also been employed, such as the visual analog scale [14, 26], the single-target IAT [29], and the
semantic differential scale [30, 31]. However, there are several potential limitations. Firstly, the
construct validity of these questionnaires and scales is still questioned; for example, some
dimensions include only one item, and some dimensions are highly correlated [2, 26, 32, 33].
Secondly, there are few suitable external calibrations to test whether the items measure the putative
inner constructs (emotions); the assessment of uncanny feelings is subjective and lacks objective
indexes [2]. Thirdly, psychometric noise undermines the effectiveness of subjective ratings [2], and
subjects may give socially desirable responses [33].
Recently, objective indicators with high sensitivity, such as reaction times, pupillary responses,
facial electromyography (EMG), and brain activity (ERPs and fMRI), have gradually been adopted
in this area [34-39]. For example, an fMRI study found that the ventromedial prefrontal cortex
(VMPFC) integrates likability and human-likeness into an explicit UV reaction [39]. fMRI was first
used to explore uncanny feelings as early as 2011 [40], and eye-tracking data were first collected to
study monkeys' uncanny responses in 2009 [41]. Such objective indexes and measurements are
expected to determine the occurrence and operating mechanisms of uncanny feelings.
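To make the questionnaire issue concrete, the sketch below shows how a multi-item semantic-differential eeriness score is typically aggregated; the item count, anchors, and reverse-coded index here are hypothetical illustrations, not taken from any instrument cited above.

```python
def score_eeriness(responses, reverse_items=frozenset({1}), scale_max=7):
    """Average 7-point semantic-differential responses into one eeriness
    score. Items whose indices appear in reverse_items are anchored in the
    opposite direction and are reverse-coded (r -> scale_max + 1 - r)
    before averaging. All item choices here are hypothetical."""
    coded = [scale_max + 1 - r if i in reverse_items else r
             for i, r in enumerate(responses)]
    return sum(coded) / len(coded)

# Three items on a 1-7 scale; the second item is reverse-anchored,
# so a raw response of 2 is recoded to 6 before averaging.
print(score_eeriness([6, 2, 5]))  # (6 + 6 + 5) / 3
```

Note that the single-item "dimensions" criticized above correspond to the degenerate case of a one-element response list, where no internal-consistency check is possible.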
3.3 Materials
The selection criteria for experimental materials are inconsistent [10, 26]. Like uncanny feelings,
human-likeness is a complex variable without a unified definition [24]. Therefore, the various
stimuli used in previous research introduce irrelevant variables that may lead to confounded results.
Table 2 shows the stimuli used in experiments aimed at verifying the UV effect in the past five years.
Participants have been asked to make evaluations based on different forms of stimuli, such as
videos, pictures, descriptions, words, or even interactions [14, 17, 21, 24, 29, 42, 43]. However, few
studies have directly compared the uncanny feelings evoked by these various media. Moreover, it is
difficult to infer whether people have similar feelings when they see only part of a robot (e.g., face,
head, or body), even when all stimuli are displayed as static images [14, 26, 39, 44]. Furthermore, a
small number of discontinuous stimuli cannot correctly reflect the continuous axis of human-likeness.
Bartneck et al. [24] obtained a result contrary to Mori's prediction, but the authors pointed out that
using one human and his robotic copy as the stimuli could neither confirm nor disconfirm Mori's
hypothesis. If stimuli are selected arbitrarily or subjectively, researchers cannot obtain reliable
conclusions about the UV effect [25].
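The sampling problem can be sketched in a few lines. Assuming a purely illustrative affinity curve with a narrow dip near the human end (the function below is invented for the example, not fitted to any data), a handful of widely spaced stimuli can miss the valley that a dense, continuous series detects:

```python
import math

def affinity(h):
    """Illustrative (invented) affinity curve over human-likeness
    h in [0, 1]: affinity rises with h but dips sharply near h = 0.85."""
    return h - 1.5 * math.exp(-((h - 0.85) ** 2) / (2 * 0.03 ** 2))

def lowest_rated_level(levels):
    """Return the sampled human-likeness level with minimal affinity."""
    return min(levels, key=affinity)

sparse = [i / 4 for i in range(5)]      # 5 stimuli: 0.0, 0.25, ..., 1.0
dense = [i / 100 for i in range(101)]   # 101 stimuli along the continuum

print(lowest_rated_level(sparse))  # 0.0  -- the dip is missed entirely
print(lowest_rated_level(dense))   # 0.85 -- the valley is located
```

With only five stimuli, the rated minimum lies at the machine end simply because the curve is otherwise increasing; only the dense series reveals the dip, mirroring the argument that discontinuous stimuli cannot confirm or disconfirm the hypothesis.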
Additionally, morphing has become one of the common methods of manipulating stimuli [20].
Following the guideline that endpoint images should be similar to each other to reduce morphing
artifacts [45], researchers use similar source images of humans and robots, which restricts the
generated range of human-likeness [31]. Even if morphing artifacts are controlled perfectly, it
remains questionable whether objectively manipulated human-likeness percentages equal perceived
human-likeness [31, 45].
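For illustration, the simplest form of such morphing is a pixel-wise cross-dissolve between two aligned source images. The toy sketch below (real morphing tools also warp facial geometry, which this omits) makes the last point visible: the blend weight alpha is an objective mixing percentage, not a measurement of perceived human-likeness.

```python
def cross_dissolve(robot_img, human_img, alpha):
    """Pixel-wise linear blend of two equal-sized grayscale images,
    represented here as flat lists of 0-255 intensities (a toy stand-in
    for real image arrays). alpha = 0.0 returns the robot image,
    alpha = 1.0 the human one; intermediate values form the continuum."""
    if len(robot_img) != len(human_img):
        raise ValueError("source images must have identical dimensions")
    return [round((1 - alpha) * r + alpha * h)
            for r, h in zip(robot_img, human_img)]

robot = [10, 200, 30]    # toy 3-pixel "robot" image
human = [110, 100, 230]  # toy 3-pixel "human" image
print(cross_dissolve(robot, human, 0.5))  # [60, 150, 130]
```

Generating, say, ten evenly spaced alpha values yields objectively equal steps, but perceived human-likeness along that series need not be linear in alpha [31, 45].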
4 Practical Applications
The research findings on the uncanny valley, which concern users' attitudes and concepts toward
humanoid robots, play a significant role in human-computer interaction, especially in interaction
design. Development and innovation in humanoid robot design try to reduce the negative impact of
the uncanny valley. From the human perspective, the question is whether individual differences
among users can predict sensitivity to the uncanny valley and acceptance of humanoid robots
[10, 46]. From the robot's perspective, the question is what kind of design is more acceptable to the
majority of users [9, 46]. Accordingly, there are two directions for improving robot design to avoid
the uncanny valley.
One way is to pursue a deliberately nonhuman design so that robots lie at the first peak of
affinity: find a moderate degree of human likeness with a considerable sense of affinity, rather than
risk increasing human likeness continually [3]. There are two suggestions:
Table 2. Stimuli used in the past five years

Each study was coded on five dimensions: Medium (video, graph, vignette, or others), Display
(face, head, or others), Serialization (independent/scattered vs. series), Artificiality (natural, both
natural & artificial, or artificial), and Response setting.

Author & Year | Response | Amount of Stimuli
Mathur, Reichling, & Lunardini, et al., 2020 | On-line | 80 face pictures
Appel, Izydorczyk, & Weber et al., 2020, Study 1 | On-line | 3 short descriptions of robots
Villacampa, Ingram, & Rosa, 2019 | Laboratory | 5 faces
Pütten, Krämer, & Maderwald et al., 2019 | Laboratory | 36 pictures of 6 stimulus categories
Kätsyri, Gelder, & Takala, 2019, Study 1 | On-line | 60 face pictures of 6 actors
Reuten, Dam, & Naber, 2018 | Laboratory | 8 face pictures
MacDorman & Chattopadhyay, 2017 | On-line | 7 face pictures
Strait, Floerke, & Ju et al., 2017 | Laboratory | 60 half-body pictures
Ho & MacDorman, 2017, Study 4 | On-line | 12 video clips of 12 characters
Kätsyri, Mäkäräinen, & Takala, 2017 | Laboratory | 60 video clips of 15 movies
Wang & Rocha, 2017, Study 1 | Laboratory | 89 face pictures
Mathur & Reichling, 2016, Study 1 | On-line | 80 face pictures

Note: "Medium" means the form of stimulus presentation. "Display" means which part of the
stimulus could be observed. "Serialization" means whether the stimuli form a series of many
components, such as a series of morphing images. "Artificiality" means whether the stimuli were
morphed; for example, a photo of a robot taken from Google or filmed directly is natural.
(1) Keep a balance between humanness and machine-likeness. The presence of a nose,
eyelids, and a mouth can increase perceived humanness. Several design suggestions have been
proposed, for example: four or more features on the head, a wide head with wide eyes, detailed
eyes, or complex curves in the forehead [47].
(2) Design robots for their target users. For example, children rate robots mixing human
and machine features most positively, and they prefer cartoon-like and mechanical features, such as
exaggerated facial features and wheels [48-50]. Elderly users have their own preferences as
well [46].
The other way is to reach the second peak, increasing the level of human-likeness enough to
step over the uncanny valley. The main idea is to narrow the gap between robots and humans in
various respects:
(1) Make robots seem alive. Hanson [9] indicated that people feel unease because robots
seem partly dead; for example, robots shut down instead of going to sleep as humans do. It is
better to remove these flaws to make robots seem alive, friendly, and attractive.
(2) Express emotions. Adding emotion displays (e.g., emotional expressions, gait, voice, or
gestures) can successfully decrease the sense of uncanniness [18, 51]. These displays narrow the
gap between the expectations a design raises about human nature and the perception of it,
achieving harmonious interaction.
5 Conclusions and Future Directions
Robots are becoming increasingly prevalent in everyday life, and humanoid robots are expected
to be friendlier to use and more comfortable to experience. Therefore, how to define and design the
best appearance for humanoid robots is a critical open question. In summary, decades of research
have developed two main families of explanations for the uncanny valley effect, from the
perspectives of evolutionary psychology and cognitive conflict. The inconsistency of previous
studies may be due to the absence of a unified definition, robust measures, and representative
materials. Practically, pursuing a deliberately nonhuman design and pushing human-likeness as
high as possible are both ways to avoid uncanny feelings. Future research is encouraged to reach a
consensus on how to define uncanny feelings, whether as a single construct or as a set of complex
emotions. Moreover, creating a sizeable and diverse database of images (or videos) covering a
continuous series of human-likeness, as created by Mathur et al. [14], could avoid manipulation
defects such as heterogeneous or discontinuous stimuli. Finally, considering that most previous
studies focus on young adults, future research should test the uncanny valley in more diverse user
groups.
Acknowledgments
Jie Zhang and Shuo Li made equal contributions to this manuscript. This research is supported by
the fund for building world-class universities (disciplines) of Renmin University of China
(Project No. 2018), the Beijing Natural Science Foundation (5184035), and the CAS Key
Laboratory of Behavioral Science, Institute of Psychology.
References
1. Broadbent, E.: Interactions With Robots: The Truths We Reveal About Ourselves. Annual
Review of Psychology. 68, 627652 (2017). doi: 10.1146/annurev-psych-010416-043958.
2. Wang, S., S.O. Lilienfeld, P. Rochat: The Uncanny Valley: Existence and Explanations.
Review of General Psychology. 19(4), 393407 (2015). doi: 10.1037/gpr0000056.
3. Mori, M., K. MacDorman, N. Kageki: The Uncanny Valley [From the Field]. IEEE Robot-
ics & Automation Magazine. 19(2), 98100 (2012). doi: 10.1109/mra.2012.2192811.
4. Mori, M.: The Uncanny Valley. Energy. 7(4), 3335 (1970)
5. MacDorman, K.F., H. Ishiguro: The uncanny advantage of using androids in cognitive and
social science research. Interaction Studies. 7(3), 297337 (2006). doi:
10.1075/is.7.3.03mac.
6. MacDorman, K.F., R.D. Green, C.C. Ho, C.T. Koch: Too real for comfort? Uncanny re-
sponses to computer generated faces. Computers in Human Behavior. 25(3), 695710
(2009). doi: 10.1016/j.chb.2008.12.026.
7. Gray, K., D.M. Wegner: Feeling robots and human zombies: mind perception and the un-
canny valley. Cognition. 125(1), 125130 (2012). doi: 10.1016/j.cognition.2012.06.007.
8. Yamada, Y., T. Kawabe, K. Ihaya: Categorization difficulty is associated with negative
evaluation in the “uncanny valley” phenomenon. Japanese Psychological Research. 55(1),
2032 (2013). doi: 10.1111/j.1468-5884.2012.00538.x.
9. Hanson, D.: Expanding the aesthetic possibilities for humanoid robots. In: IEEE-RAS in-
ternational conference on humanoid robots (2005)
10. MacDorman, K.F., S.O. Entezari: Individual differences predict sensitivity to the uncanny
valley. Interaction Studies. 16(2), 141172 (2015). doi: 10.1075/is.16.2.01mac.
11. MacDorman, K.F., D. Chattopadhyay: Reducing consistency in human realism increases
the uncanny valley effect; increasing category uncertainty does not. Cognition. 146, 190
205 (2016). doi: 10.1016/j.cognition.2015.09.019.
12. Ferrari, F., M.P. Paladino, J. Jetten: Blurring HumanMachine Distinctions: Anthropo-
morphic Appearance in Social Robots as a Threat to Human Distinctiveness. International
Journal of Social Robotics. 8(2), 287302 (2016). doi: 10.1007/s12369-016-0338-y.
13. Ferrey, A.E., T.J. Burleigh, M.J. Fenske: Stimulus-category competition, inhibition, and
affective devaluation: a novel account of the uncanny valley. Frontiers in Psychology.
6(249) (2015). doi: 10.3389/fpsyg.2015.00249.
14. Mathur, M.B., D.B. Reichling, F. Lunardini, et al.: Uncanny but not confusing: Multisite
study of perceptual category confusion in the Uncanny Valley. Computers in Human Be-
havior. 103, 2130 (2020). doi: 10.1016/j.chb.2019.08.029.
15. Buckingham, G., J. Parr, G. Wood, et al.: Upper- and lower-limb amputees show reduced
levels of eeriness for images of prosthetic hands. Psychonomic Bulletin & Review. 26(4),
12951302 (2019). doi: 10.3758/s13423-019-01612-x.
16. Destephe, M., M. Zecca, K. Hashimoto, A. Takanishi: Uncanny valley, robot and autism:
perception of the uncanniness in an emotional gait. In: Proceedings of the 2014 IEEE In-
ternational Conference on Robotics and Biomimetics, pp. 11521157. IEEE, Bali (2014).
doi: 10.1109/robio.2014.7090488.
17. Ho, C.-C., K.F. MacDorman: Measuring the Uncanny Valley Effect. International Journal
of Social Robotics. 9(1), 129139 (2017). doi: 10.1007/s12369-016-0380-9.
18. Koschate, M., R. Potter, P. Bremner, M. Levine: Overcoming the uncanny valley: Displays
of emotions reduce the uncanniness of humanlike robots. In: 11th ACM/IEEE Internation-
al Conference on Human-Robot Interaction (HRI), pp. 359366. IEEE, Christchurch
(2016). doi: 10.1109/HRI.2016.7451773.
19. Olivera-La Rosa, A.: Wrong outside, wrong inside: A social functionalist approach to the
uncanny feeling. New Ideas in Psychology. 50, 3847 (2018). doi:
10.1016/j.newideapsych.2018.03.004.
20. Katsyri, J., K. Forger, M. Makarainen, T. Takala: A review of empirical evidence on dif-
ferent uncanny valley hypotheses: support for perceptual mismatch as one road to the val-
ley of eeriness. Frontiers in Psychology. 6(390) (2015). doi: 10.3389/fpsyg.2015.00390.
21. Katsyri, J., M. Makarainen, T. Takala: Testing the 'uncanny valley' hypothesis in semireal-
istic computer-animated film characters: An empirical evaluation of natural film stimuli.
International Journal of Human-Computer Studies. 97, 149161 (2017). doi:
10.1016/j.ijhcs.2016.09.010.
22. Ho, C.-C., K.F. MacDorman: Revisiting the uncanny valley theory: Developing and vali-
dating an alternative to the Godspeed indices. Computers in Human Behavior. 26(6),
15081518 (2010). doi: 10.1016/j.chb.2010.05.015.
23. Bartneck, C., T. Kanda, H. Ishiguro, N. Hagita: Is The Uncanny Valley An Uncanny Cliff?
In: Proceedings of the 16th IEEE international conference on robot & human interactive
communication., pp. 368373. IEEE, Jeju (2007)
24. Bartneck, C., T. Kanda, H. Ishiguro, N. Hagita: My Robotic Doppelgänger A Critical
Look at the Uncanny Valley. In: 18th IEEE International Symposium on Robot and Hu-
man Interactive Communication, pp. 269276. IEEE, Toyama (2009)
25. Lay, S., N. Brace, G. Pike: Circling Around the Uncanny Valley: Design Principles for
Research Into the Relation Between Human Likeness and Eeriness. i-Perception, 111
(2016). doi: 10.1177/2041669516681309
26. Mathur, M.B., D.B. Reichling: Navigating a social world with robot partners: A quantita-
tive cartography of the Uncanny Valley. Cognition. 146, 2232 (2016). doi:
10.1016/j.cognition.2015.09.008.
27. Ho, C.-C., K.F. MacDorman, Z.A.D. Pramono: Human Emotion and the Uncanny Valley:
A GLM, MDS, and Isomap Analysis of Robot Video Ratings. In: 3rd ACM/IEEE Interna-
tional Conference on Human-Robot Interaction (HRI), pp. 169176. IEEE, Amsterdam
(2008)
28. Wang, S., P. Rochat: Human Perception of Animacy in Light of the Uncanny Valley Phe-
nomenon. Perception. 46(12), 13861411 (2017). doi: 10.1177/0301006617722742
29. Villacampa, J., G.P.D. Ingram, G. Corradi, A.O.-L. Rosa: Applying an implicit approach
to research on the uncanny feeling. Journal of Articles in Support of the Null Hypothesis.
16(1), 1122 (2019)
30. MacDorman, K.F., D. Chattopadhyay: Categorization-based stranger avoidance does not
explain the uncanny valley effect. Cognition. 161, 132135 (2017). doi:
10.1016/j.cognition.2017.01.009.
31. Katsyri, J., B. de Gelder, T. Takala: Virtual Faces Evoke Only a Weak Uncanny Valley Ef-
fect: An Empirical Investigation With Controlled Virtual Face Images. Perception. 48(10),
968991 (2019). doi: 10.1177/0301006619869134.
32. MacDorman, K.F.: Subjective Ratings of Robot Video Clips for Human Likeness, Famili-
arity, and Eeriness: An Exploration of the Uncanny Valley. Proceedings of the
ICCS/CogSci-2006 Long Symposium ‘Toward Social Mechanisms of Android Science’,
2629 (2006)
33. Bartneck, C., D. Kuli´c, E. Croft, S. Zoghbi: Measurement Instruments for the Anthropo-
morphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots.
International Journal of Social Robotics. 1, 7181 (2009).doi:10.1007/s12369-008-0001-3
34. Saygin, A.P., T. Chaminade, H. Ishiguro, J. Driver, C. Frith: The thing that should not be:
predictive coding and the uncanny valley in perceiving human and humanoid robot ac-
tions. SCAN. 7, 413422 (2012). doi: 1 0. 1 093/scan/nsr025
35. Strait, M., M. Scheutz: Measuring Users’ Responses to Humans, Robots, and Human-like
Robots with Functional Near Infrared Spectroscopy. In: 23rd IEEE International Symposi-
um on Robot and Human Interactive Communication, pp. 11281133. IEEE, Edinburgh
(2014)
36. Cheetham, M., Wu, L., Pauli, P., & Jancke, L.: Arousal, valence, and the uncanny valley:
psychophysiological and self-report findings. Frontiers in Psychology. 6(981) (2015). doi:
10.3389/fpsyg.2015.00981
37. Strait, M., L. Vujovic, V. Floerke, M. Scheutz, H. Urry: Too Much Humanness for Hu-
man-Robot Interaction. In: Proceedings of the 33rd Annual ACM Conference on Human
Factors in Computing Systems - CHI '15, pp. 35933602, Seoul (2015). doi:
10.1145/2702123.2702415.
38. Reuten, A., M. van Dam, M. Naber: Pupillary Responses to Robotic and Human Emotions:
The Uncanny Valley and Media Equation Confirmed. Frontiers in Psychology. 9(774)
(2018). doi: 10.3389/fpsyg.2018.00774.
39. Rosenthal-von der Puetten, A.M., N.C. Kraemer, S. Maderwald, M. Brand, F. Gra-
benhorst: Neural Mechanisms for Accepting and Rejecting Artificial Social Partners in the
Uncanny Valley. Journal of Neuroscience. 39(33), 6555–6570 (2019). doi:
10.1523/jneurosci.2956-18.2019.
40. Cheetham, M., P. Suter, L. Jäncke: The Human Likeness Dimension of the “Uncanny Val-
ley Hypothesis”: Behavioral and Functional MRI Findings. Frontiers in Human Neurosci-
ence. 5(126) (2011). doi: 10.3389/fnhum.2011.00126.
41. Steckenfinger, S.A., A.A. Ghazanfar: Monkey visual behavior falls into the uncanny val-
ley. Proceedings of the National Academy of Sciences of the United States of America.
106(43), 18362–18366 (2009). doi: 10.1073/pnas.0910063106.
42. Ramey, C.H.: An Inventory of Reported Characteristics for Home Computers, Robots, and
Human Beings: Applications for Android Science and the Uncanny Valley. In: Proceed-
ings of the ICCS/CogSci-2006 Long Symposium ‘Toward Social Mechanisms of Android
Science’ (2006)
43. Appel, M., D. Izydorczyk, S. Weber, M. Mara, T. Lischetzke: The uncanny of mind in a
machine: Humanoid robots as tools, agents, and experiencers. Computers in Human Be-
havior. 102, 274–286 (2020). doi: 10.1016/j.chb.2019.07.031.
44. Strait, M.K., V.A. Floerke, W. Ju, et al.: Understanding the Uncanny: Both Atypical Fea-
tures and Category Ambiguity Provoke Aversion toward Humanlike Robots. Frontiers in
Psychology. 8(1366) (2017). doi: 10.3389/fpsyg.2017.01366.
45. Cheetham, M., L. Jäncke: Perceptual and Category Processing of the Uncanny Valley
Hypothesis' Dimension of Human Likeness: Some Methodological Issues. Journal of
Visualized Experiments. (76) (2013). doi: 10.3791/4375.
46. Prakash, A., W.A. Rogers: Why Some Humanoid Faces Are Perceived More Positively
Than Others: Effects of Human-Likeness and Task. International Journal of Social Robot-
ics. 7(2), 309–331 (2015). doi: 10.1007/s12369-014-0269-4.
47. DiSalvo, C.F., F. Gemperle, J. Forlizzi, S. Kiesler: All robots are not created equal: the de-
sign and perception of humanoid robot heads. In: Proceedings of the 4th conference on
Designing interactive systems: processes, practices, methods, and techniques, pp. 321–326.
ACM Press, London (2002). doi: 10.1145/778712.778756.
48. Woods, S.: Exploring the design space of robots: Children's perspectives. Interacting with
Computers. 18(6), 1390–1418 (2006). doi: 10.1016/j.intcom.2006.05.001.
49. Woods, S., K. Dautenhahn, J. Schulz: The design space of robots: investigating children's
views. In: 13th IEEE International Workshop on Robot and Human Interactive Communi-
cation, pp. 47–52. IEEE, Kurashiki (2004). doi: 10.1109/roman.2004.1374728.
50. Lin, W., H.-P. Yueh, H.-Y. Wu, L.-C. Fu: Developing a Service Robot for a Children's Li-
brary: A Design-Based Research Approach. Journal of the Association for Information
Science and Technology. 65(2), 290–301 (2014). doi: 10.1002/asi.22975.
51. Jizheng, Y., W. Zhiliang, Y. Yan: Humanoid Robot Head Design Based on Uncanny Val-
ley and FACS. Journal of Robotics. (2014). doi: 10.1155/2014/208924.
52. Hanson, D., Olney, A., Prilliman, S., Mathews, E., Zielke, M., Hammons, D., Fernandez,
R., Stephanou, H.E.: Upending the Uncanny Valley. In: The Twentieth National Conference
on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial
Intelligence Conference, Pittsburgh, Pennsylvania (2005)
53. Zibrek, K., Kokkinara, E., McDonnell, R.: The Effect of Realistic Appearance of Virtual
Characters in Immersive Environments - Does the Character's Personality Play a Role?
IEEE Transactions on Visualization and Computer Graphics. 24(4), 1681–1690 (2018). doi:
10.1109/tvcg.2018.2794638.
54. Burleigh, T.J., Schoenherr, J.R., Lacroix, G.L.: Does the uncanny valley exist? An em-
pirical test of the relationship between eeriness and the human likeness of digitally created
faces. Computers in Human Behavior. 29(3), 759–771 (2013). doi:
10.1016/j.chb.2012.11.021.
55. Chattopadhyay, D., K.F. MacDorman: Familiar faces rendered strange: Why inconsistent
realism drives characters into the uncanny valley. Journal of Vision. 16(11), 7 (2016).
doi: 10.1167/16.11.7.
56. Schwind, V., Leicht, K., Jaeger, S., Wolf, K., Henze, N.: Is there an uncanny valley of
virtual animals? A quantitative and qualitative investigation. International Journal of
Human-Computer Studies. 111, 49–61 (2018). doi: 10.1016/j.ijhcs.2017.11.003.
57. Seyama, J., R.S. Nagayama: The Uncanny Valley: Effect of realism on the impression
of artificial human faces. Presence: Teleoperators and Virtual Environments. 16(4),
337–351 (2007). doi: 10.1162/pres.16.4.337.
58. Zlotowski, J., Sumioka, H., Nishio, S., Glas, D.F., Bartneck, C., Ishiguro, H.: Persistence
of the uncanny valley: the influence of repeated interactions and a robot’s attitude on its
perception. Frontiers in Psychology. (2015). doi: 10.3389/fpsyg.2015.00883.