Running head: SCIENCE FICTION AND ANDROID ROBOTS
Science fiction reduces the eeriness of android robots: A field experiment
Martina Mara
Ars Electronica Futurelab, Linz, Austria
and
Markus Appel
Department of Psychology, University of Koblenz-Landau, Germany
Article accepted for publication in Computers in Human Behavior
Many thanks to the team of the Ars Electronica Center—especially Nicole Grueneis who
operated the robot—and to Hiroshi Ishiguro and Kohei Ogawa for providing the Telenoid and
helpful comments. This research was supported by the Austrian Science Fund (FWF-I-996-G22).
Please address correspondence to martina.mara@aec.at or appelm@uni-landau.de.
Abstract
As suggested by the uncanny valley hypothesis, robots that resemble humans likely elicit
feelings of eeriness. Based on the social-psychological model of meaning maintenance, we
expected that the uncanny valley experience could be mitigated through a fictional story, due to
the meaning-generating function of narratives. A field experiment was conducted, in which 75
participants interacted with the humanlike robot Telenoid. Prior to the interaction, they either
read a short story, a non-narrative leaflet about the robot, or they received no preliminary
information. Eeriness ratings were significantly lower in the science fiction condition than in
both other conditions. This effect was mediated by higher perceived human-likeness of the robot.
Our findings suggest that science fiction may provide meaning for otherwise unsettling future
technologies.
Keywords: Human-robot interaction, Uncanny valley, Narrative, Meaning maintenance model,
Field experiment
Science fiction reduces the eeriness of android robots: A field experiment
“As long as there’s a story, it’s all right.” – Graham Swift, Waterland (Swift, 1983)
1. Introduction
The robots are coming—a headline that has recently become popular in mass media around the
world (e.g., Kelly, 2014; Miller, 2013; Newenham, 2014; Wakefield, 2014) and that mirrors an
ongoing acceleration of technological progress: from self-driving vehicles and autonomous
delivery drones to spaceman-lookalike robot assistants able to serve your tea, the industry
announces that its current prototypes will become commonplace technologies within the
next two decades. Among the various forms of such autonomous systems, social service robots
of more or less humanlike appearance represent one of the key fields. Plans for their increasing
use in geriatric care, as household helpers, or as communication companions already exist in
many countries. However, many people are still skeptical about the idea of being assisted by or
teaming up with a robotic creature in their day-to-day life (e.g., European Commission, 2012;
Nomura, Kanda, Suzuki, & Kato, 2008; Nomura, Suzuki, Kanda, & Kato, 2006). In particular,
android robots that mimic the visual appearance, motion, sound, or behavior of human beings
are reported to provoke aversive reactions among human interaction partners (e.g., Ho &
MacDorman, 2010; Mori, 1970), especially at the first encounter (e.g., Koay, Syrdal, Walters, &
Dautenhahn, 2007).
We argue that negative responses towards android robots (often captured by the term
uncanny valley, see below) are reduced when people become accustomed to the robot by means
of a fictional story. Science fiction can provide a meaning framework that increases the
likelihood of positive interactions with future technologies—such as android robots. After an
introduction of our main theoretical underpinnings, we will present the results of a field
experiment that involved real-life human-android interactions.
1.1 The Uncanny Valley
In 1970, Masahiro Mori, a roboticist at the Tokyo Institute of Technology, introduced the
hypothetical concept of “bukimi no tani” (Mori, 1970), which was later translated as “uncanny
valley” (Reichardt, 1978). According to Mori, the closer robots resemble humans, the more
positive responses such as familiarity, acceptance, and empathy are triggered in human observers
and interaction partners. However, at some point of high similarity, but not perfect resemblance,
the acceptance will drop sharply and the robot will be rated as unfamiliar, eerie, or uncanny
(Mori, MacDorman, & Kageki, 2012). With even greater human-likeness, positive perceptions
are supposed to increase again (see Figure 1).
- Figure 1 around here -
Psychological theory and research on the uncanny and related experiences dates back to the
beginning of the twentieth century. In his 1906 article “Zur Psychologie des Unheimlichen” (On
the psychology of the uncanny) Viennese psychologist Ernst Jentsch mentioned the “doubt as to
whether an apparently living being really is animate and, conversely, doubt as to whether a
lifeless object may not in fact be animate” as a powerful cause of uncanny feelings (Jentsch,
1906/1997, p. 197). Jentsch’s thoughts on the eeriness of automata were picked up by Sigmund
Freud. In his popular essay “Das Unheimliche” (The uncanny) Freud further suggested that the
(encounter with the) unfamiliar in the familiar (situation) might be an important driver of
uncanny feelings (Freud, 1919).
Fuelled by the continuing efforts to develop humanlike robots by research labs and the
industry, Mori’s uncanny valley phenomenon has become a popular concept in the field of
robotics. In contrast to the widespread use of the term, only a few empirical investigations have
dealt with the uncanny valley and the factors by which it is driven. Of the comparatively small
body of available empirical literature, to date almost all studies have focused on (manipulating)
the visual appearance of a robot along a human-likeness continuum as their independent variable.
Participants’ reactions were then typically assessed for different versions of the creature’s
appearance (cf. MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Seyama
& Nagayama, 2007); for example, recipients reported the level of eeriness they experienced
while viewing a picture of a human, a robot, or a hybrid figure. Research from this tradition
suggested that the uncanny valley experience might be associated with categorization difficulties
induced by anthropomorphic artifacts that not only have attributes typical of human beings
but at the same time show characteristics of other ontological categories (e.g., machine, puppet,
goat, cf. Burleigh, Schoenherr, & Lacroix, 2013; Yamada, Kawabe, & Ihaya, 2013, see also
Guitton, 2013, for human-animal hybrids in fantasy settings).
Based on the Meaning Maintenance Model (Heine, Proulx, & Vohs, 2006) and in line with
the latter mentioned research, we explain the uncanny valley phenomenon as a threat to the
user’s meaning framework. Meaning is “what links people, places, objects, and ideas to one
another in expected and predictable ways” (Heine et al., 2006, p. 89). Individuals organize their
experiences into meaning frameworks, which are mental representations of expected
associations. Based on Freud’s discussions of the uncanny (Freud, 1919), Proulx, Heine, and
Vohs (2010) suggested that feelings of uncanniness arise from a mixture of familiarity and
unfamiliarity that does not fit an individual’s meaning frameworks. We suppose that for most
persons, their meaning framework for robots still includes a rather broad and flexible range of
expectations (e.g. regarding variations in size, color, symmetry, movement, or audio signals). In
contrast, their meaning framework for other human beings is very likely to be more restricted
(e.g., variations in shape, symmetry, and size have to stay in a very limited range to be clearly
identifiable as human). The more robots resemble humans, the more likely the meaning
framework for humans may be activated. Unless the android matches an individual’s
expectancies about the ontological category of human beings in a clearly identifiable and
(nearly) perfect manner, however, the meaning framework is expected to be violated, resulting in
an uncanny experience.
We further assume that in a given human-android encounter, the violation of meaning
frameworks does not depend solely on the visual appearance or other features of the robot; rather, we
believe that the uncanny experience varies substantially with the human being that interacts with
the robot. The experience may vary with rather stable characteristics of the interaction partner
such as her cultural background or personality. Importantly, we further assume that the uncanny
valley experience can be changed by prior situational or contextual factors such as the
information provided to users prior to interacting with the robot. Any information that
successfully provides a valid meaning framework for the robot may decrease the uncanniness
elicited by an android robot.
1.2 Stories as Meaning Makers
Our focus here is on the meaning-generating function of fiction. We assume that science
fiction can provide a meaning framework for android robots (and possibly other new
technologies) that makes these innovations appear less scary and uncanny. Throughout the ages,
humankind has told stories in order to overcome potential threats to established meaning
frameworks (e.g., in the context of mortality salience, terror management, and grief, cf. Gilbert,
2002), to share moral values and purpose with others, and to explain to children why things are
the way they are (cf. Gottschall, 2012). As such, narratives have always been a primal form of
communicating meaning, from early myths and religious writings to contemporary multimedia
manifestations or science fiction movies. Humans are meaning makers (Heine et al., 2006) and
storytelling animals (Gottschall, 2012) at the same time. If our storytelling minds – “allergic to
uncertainty” and “addicted to meaning” – are not able to find meaningful patterns in the real
world, they will have to impose them (Gottschall, 2012, p. 103). Possibly, our need for meaning
is one of the main factors underlying the popularity of fictional stories (Oliver & Raney, 2011;
see also Bartsch & Mares, 2014; Oliver, Hartmann, & Woolley, 2012).
Our thoughts about the meaning-generating function of fictional stories are backed by
theorists from different disciplinary backgrounds. Several scholars have argued that stories are
the instrument through which people make meaning of experience and identity (Bruner, 1990;
Kerby, 1991; Polkinghorne, 1988; Rossiter, 1999; Sarbin, 1986). Narrative form—including
elements of plot, character, setting, scene, and theme—helps us to organize single experiences of
our life and relate them to each other (McAdams, 1985; Polkinghorne, 1988; Rossiter, 1999). In
classical narratives, events are linked to each other as cause to effect (Chatman, 1980). Thus, the
narrative structure transforms separate incidents into interrelated sequences of causality in such a
way that they can be understood within an integrated context or meaningful principle (Kerby,
1991). By their inherent causal sequences, stories also provide order. When we seek certainty,
classical narratives, due to their structural composition, are familiar, predictable, and comforting
to us (Appel, 2008).
Absurdist literature, in contrast, violates these expectations on purpose. Thus, examples of
this genre should challenge rather than strengthen or establish meaning frameworks. In order to test
this prediction, Proulx and colleagues (2010) compared the effects of reading a conventional
parable by Aesop with an absurdist parable by Franz Kafka. Based on the meaning maintenance
model, the authors expected that a violation of meaning frameworks should increase the
importance of one’s cultural identity (as a means to compensate for the meaning violation). As
expected, after reading Kafka, cultural identity importance was higher than after reading Aesop.
In contrast to absurdist literature, most fictional stories encountered in popular mass media
follow an expected structure that arranges information in causal sequences. Our understanding of
the “lifelike” events in the story (Bruner, 1986) thereby goes beyond the cognitive processing of
logical arguments. As a consequence of the meaning-generating function of stories, science fiction
is expected to draw relations between a new, possibly frightening technology and familiar places,
objects, routines, or rules. It allows readers to experience a remote narrative world and to identify
with otherwise foreign characters (Gerrig, 1993). Story worlds can be safe places that allow the
development of meaning frameworks for future technologies that otherwise appear foreign,
unpleasant or eerie—as described in works on the uncanny valley phenomenon.
1.3 Study Overview and Predictions
People have a natural need for meaning and organize their perceptions of the world through
the lens of their mental representations of expected relations, that is, their meaning
frameworks (e.g., Heine et al., 2006). Based on that, a confrontation with an unfamiliar, very
humanlike robot may evoke feelings of ambiguity and uncanniness, because it violates our
meaning framework for human beings. As a function of the meaning-generating power of stories,
we assume that a science fiction narrative can compensate for this lack of meaning and provide a
fitting context for the otherwise unsettling machine.
We conducted a field experiment in a technology museum. Participants interacted with the
android robot Telenoid. We expected that a science fiction story with this robot as the
protagonist, presented prior to an actual interaction with it, would increase the perceived human-
likeness of the robot—as compared to non-narrative pre-information of comparable length and
content or a no-text condition. Moreover, participants in the narrative condition should
experience less eeriness, the key indicator of the uncanny valley phenomenon. The story-effect
on eeriness should at least in part be mediated through greater ascribed human-likeness. We
further expected that the science fiction story would yield higher ratings of the robot’s attractiveness
and fewer harmful cognitions toward the robot.
2. Method
2.1 Participants
Seventy-five visitors to the Austrian technology museum Ars Electronica Center took part
in our field experiment. They were recruited either directly in the entrance area of the museum or
through an introductory university course (results for both subgroups did not differ). Regarding
the purpose of the study, participants were told that several new exhibits in the museum needed
to be evaluated by visitors and that they would get assigned to just one of them. Data from three
participants were excluded from statistical analyses because they did not finish the material.
Thus, our final sample consisted of 72 participants (41 women) between the ages of 17 and 68
years (M = 31.24, SD = 11.56). The educational level of the participants was rather high: 27
(38%) had a university degree, 39 (54%) had a university entrance qualification (Matura), and the
remaining six participants (8%) had obtained a basic school education. All participants were native
German speakers.
2.2 The robot
We chose Hiroshi Ishiguro’s Telenoid for the experimental human-robot interaction. It is a
puppet-sized tele-presence robot of humanlike appearance that is able to physically transmit a
human operator’s voice, facial expressions, and gestures (Ogawa et al., 2011; also see Figure 2
and http://www.youtube.com/watch?v=N9JyDQlHo1A). Anecdotal evidence suggests that the
Telenoid evokes strong uncanny reactions at the first encounter (user comments on YouTube
range from “The scariest robot I’ve ever seen” to “Kill it with fire”). The robot was installed in
the Ars Electronica Center’s public robot lab, which is part of one of the museum’s main
exhibition halls. No participant had been in contact with the involved robot before.
- Figure 2 around here -
2.3 Stimulus texts
One out of two texts (or no text) was presented before the participants interacted with the
robot: a narrative science fiction story or a non-narrative fact-based text. In view of the experiment’s
validity, it was crucial to manipulate the stimulus texts systematically: they were to differ in
their narrative form but otherwise to remain comparable. Both texts, therefore, were
designed to be of the same length (one page each) and to describe key features of the Telenoid. This
included its appearance (puppet-sized, humanlike looks, light skin), its functionality (e.g., transmits
an operator’s voice and facial expressions in real-time), and its suggested purpose (e.g., helps the
elderly to stay in contact with family members). Whereas the non-narrative text was an adaptation
of the Telenoid leaflet given to ordinary museum visitors, the narrative was created especially for
this experiment. We crafted a science fiction short story that provided a fitting contextual frame for
the robot protagonist. The pre-defined content regarding Telenoid’s appearance, functionality, and
purpose was embedded in a story plot that introduced the Telenoid as an extraterrestrial traveler who
comes to Earth and helps a grandmother contact her granddaughter who lives thousands of
kilometers away. In a scene set in the living room of the elderly lady, the robot transmits
messages and emotions over the long physical distance between the two of them, as illustrated
in the following short excerpt: “(…) With its tele-presence function, the little robot managed to
create a feeling of closeness between Johanna and her granddaughter Andrea (…) Not only did
Telenoid convey the young woman’s voice, but with his artificial muscles he was also imitating her
movements while Johanna was holding the Telenoid in her arms on the living room couch (…)”.
The informational leaflet, in comparison, described the robot’s function and purpose in a purely
factual way, as in this example: “When two people are connected via Telenoid, (…) a feeling should
evolve that the other person is actually present. This impression of ‘tele-presence’ is mainly based
on the fact that the robot mirrors the gestures of the other person through its body (…)”.
2.4 Dependent measures
Our dependent measures were self-report ratings of the participants’ experience while
interacting with the robot, including attributed human-likeness, eeriness, attractiveness, and
harmful cognitions towards the Telenoid.¹
Human-likeness was measured with the help of five items (7-point semantic differential
scale, e.g., 1 = very machinelike, 7 = very humanlike; 1 = very artificial, 7 = very lifelike). This
index yielded good reliability, as indicated by Cronbach’s α = .82. Eeriness was assessed with
five items (e.g., 1 = very reassuring, 7 = very eerie; 1 = very bland, 7 = very uncanny). A reliable
index could be formed, as indicated by a Cronbach’s α = .86. We measured the attractiveness of
the robot again with five items (e.g., 1 = very unattractive, 7 = very attractive; 1 = very ugly, 7 =
very beautiful) which yielded good reliability, Cronbach’s α = .82. Whereas our measures for the
attribution of human-likeness, eeriness, and attractiveness were based on the uncanny valley
indices by Ho and MacDorman (2010), we further examined participants’ harmful cognitions
with a five-item scale that was developed for the purpose of this study (e.g., The Telenoid should
get destroyed after the end of this exhibition; I can imagine that I would harm the Telenoid if I
got an opportunity to do so). The items were rated on a seven-point scale (1 = not at all; 7 = very
much). The reliability was satisfactory, Cronbach’s α = .74.
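As an illustration of how such a reliability index is computed (the original analyses were presumably run in standard statistical software; the ratings below are hypothetical and do not reproduce the study data), Cronbach’s α for a five-item index can be obtained from an item-by-participant rating matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of shape (n_participants, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings of five eeriness items (7-point scale) by six participants
ratings = np.array([
    [4, 5, 4, 5, 4],
    [6, 6, 7, 6, 5],
    [3, 2, 3, 3, 2],
    [5, 5, 4, 6, 5],
    [2, 3, 2, 2, 3],
    [7, 6, 6, 7, 7],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```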
Because our field experiment was conducted in Austria, all items stemming from pre-existing
scales had to be translated into German. To allow for greater validity, the authors first developed
two German versions independently of each other with the help of English native speakers. In a
second step, inconsistent translations were discussed with a professional translator before the
final items were added to the German survey.

¹ Other variables measured but not reported here were perceived usefulness and the participants’
intention to purchase the Telenoid for themselves. These variables followed the same pattern of
results as the variables reported here. The results for these two variables were presented as a
two-page poster at the 8th ACM/IEEE International Conference on Human-Robot Interaction,
Tokyo, 2013.
2.5 Procedure
Each experimental session involved one participant. After arriving at the museum, the
participant was randomly assigned to one of three conditions: reading the narrative short story
about the Telenoid, reading the non-narrative information leaflet about it, or receiving no
preliminary information.
After having read their text (or immediately in the no-text condition), participants were taken to
the museum’s RoboLab, where the Telenoid was installed. During their interaction with the
Telenoid, participants sat on a sofa and held the robot on their lap. Because the Telenoid is a tele-
presence robot that needs to be remotely controlled by a human being, a female research assistant
operated the robot from a hidden room. This operator assisted in all experimental sessions and
was blind to the experimental conditions. She conversed with the participant on the basis of a
predefined script (including questions about how participants had come to the museum that day
and what the weather was like). After five minutes of interaction, the participant was guided to
another place and worked on a questionnaire that included the dependent variables and
demographics.
The experiment followed a one-way between-subjects design. The statistical analyses
included ANOVAs to identify main effects of the experimental factor as well as statistical
procedures to test for mediation based on the classic stepwise approach (Baron & Kenny, 1986)
and the bootstrapping approach (e.g., Preacher & Hayes, 2008).
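As an illustration of this analysis plan, the following minimal sketch shows how a one-way between-subjects ANOVA with partial eta squared could be computed for a dependent variable such as eeriness. The group scores are hypothetical and do not reproduce the study data; the analyses reported below were conducted with standard statistical software (including the SPSS macro by Preacher & Hayes, 2008).

```python
import numpy as np
from scipy import stats

# Hypothetical eeriness scores for the three conditions (not the original data)
story   = np.array([3.4, 3.8, 3.6, 4.0, 3.5, 3.9])
leaflet = np.array([4.6, 4.9, 4.4, 5.0, 4.7, 4.8])
control = np.array([4.8, 5.1, 4.5, 4.9, 4.6, 5.0])
groups = [story, leaflet, control]

# Omnibus F test for the between-subjects factor
f_val, p_val = stats.f_oneway(*groups)

# Partial eta squared for a one-way design: SS_between / (SS_between + SS_within)
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
eta_p2 = ss_between / (ss_between + ss_within)

df1 = len(groups) - 1
df2 = sum(len(g) for g in groups) - len(groups)
print(f"F({df1}, {df2}) = {f_val:.2f}, p = {p_val:.3f}, partial eta^2 = {eta_p2:.2f}")
```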
3. Results
Zero-order correlations indicated a positive association between human-likeness and
attractiveness (r = .66, p < .001) and a positive association between eeriness and harmful
cognitions (r = .59, p < .001). Higher scores of human-likeness were related to lower feelings of
eeriness (r = -.50, p < .001) and harmful cognitions (r = -.61, p < .001). Eeriness and harmful
cognitions were both negatively related to attractiveness (r = -.67, p < .001; r = -.72, p < .001).
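For readers who wish to compute such a zero-order correlation matrix with their own data, a minimal sketch is given below; the per-participant scale means are hypothetical and do not reproduce the study data.

```python
import numpy as np

# Hypothetical per-participant scale means (not the study data)
human_likeness = np.array([3.6, 2.9, 4.1, 2.5, 3.8, 3.1, 2.7, 4.4])
eeriness       = np.array([3.9, 4.8, 3.5, 5.1, 3.7, 4.6, 5.0, 3.2])
attractiveness = np.array([4.3, 3.4, 4.6, 3.1, 4.5, 3.6, 3.3, 4.8])
harm_cognition = np.array([1.8, 2.9, 1.6, 3.2, 1.9, 2.7, 3.0, 1.5])

# Rows/columns follow the order of the variables listed above
r = np.corrcoef([human_likeness, eeriness, attractiveness, harm_cognition])
print(np.round(r, 2))
```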
In line with our main assumptions, participants who read the science fiction story perceived
the robot to be more humanlike (M = 3.63, SD = 1.07) than those in the informational leaflet
condition (M = 2.88, SD = 1.04) or those in the control condition (M = 2.95, SD = 1.15), F(2, 69)
= 4.71, p = .012, ηp2 = .12 (Figure 3a). Human-likeness was significantly higher in the narrative
condition than in the non-narrative condition (p = .007) or in the no-text condition (p = .013).
Substantial differences were also obtained for eeriness: Participants in the science fiction
condition experienced lower eeriness when interacting with the robot (M = 3.73, SD = 0.93) than
participants who read the informational leaflet (M = 4.73, SD = 0.90) or control group members
(M = 4.78, SD = 1.18), F(2, 69) = 7.95, p = .001, ηp2 = .19 (Figure 3b). The perceived eeriness
was significantly lower in the narrative condition than in the non-narrative condition (p = .001)
or the no-text condition (p = .001).
- Figure 3 around here -
We assumed that the influence of the science fiction story on the eeriness experience was
mediated by the perceived human-likeness of the robot. Our mediation analysis involved the
classic procedure by Baron and Kenny (1986), as well as the more recent bootstrapping
approach. Following the classic mediation test procedure introduced by Baron and Kenny, we
first conducted a set of stepwise regression analyses (see Figure 4). As indicated earlier, reading
a science fiction story in contrast to a non-narrative leaflet prior to the human-robot interaction
significantly predicted the level of eeriness that our participants experienced, B = -.99, SEB = .27,
p = .001 (path c in Figure 4). The human-likeness that the participants attributed to the
android was also predicted by the text treatment, B = .86, SEB = .29, p = .005 (path a). We subsequently
regressed perceived eeriness on both the text treatment and the attributed human-likeness. In this
analysis, human-likeness predicted eeriness, B = -.41, SEB = .12, p = .002 (path b). At the same
time, the regression weight for the direct effect on path c diminished to B = -.64, SEB = .26,
p = .02 (path c′). To formally test for mediation we ran a bootstrapping analysis based on the
recommended procedure and SPSS macro by Preacher and Hayes (2008) for indirect effect
analyses. We entered eeriness as the criterion, our experimental conditions as the predictor
variable, and human-likeness as the proposed mediator variable. As a result, the 95% confidence
interval for the indirect effect using 5,000 samples did not include zero (lower limit = -0.7625,
upper limit = -0.1234), which indicates a significant mediational role of human-likeness. In sum,
both the bootstrapping and the multiple regression analyses suggest that introducing an android
through science fiction induced higher human-likeness ratings, which in turn led to lower
experience of eeriness while interacting with the robot.
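The indirect-effect test reported above was conducted with the SPSS macro by Preacher and Hayes (2008). Purely to illustrate the logic of a percentile bootstrap for a single-mediator model, the following sketch uses simulated data; the variable names and values are hypothetical and are not the study data.

```python
import numpy as np

rng = np.random.default_rng(42)

def indirect_effect(x, m, y):
    """a*b for a single-mediator model, estimated with two OLS regressions."""
    # Path a: mediator regressed on the treatment dummy
    a = np.polyfit(x, m, 1)[0]
    # Paths b and c': outcome regressed on intercept, treatment, and mediator
    X = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = coefs[2]
    return a * b

# Hypothetical data: x = condition dummy (1 = story, 0 = leaflet),
# m = attributed human-likeness, y = eeriness
n = 48
x = rng.integers(0, 2, n).astype(float)
m = 3.0 + 0.8 * x + rng.normal(0, 1, n)
y = 4.7 - 0.4 * m - 0.6 * x + rng.normal(0, 0.8, n)

point = indirect_effect(x, m, y)
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)          # resample participants with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

If zero lies outside the bootstrap confidence interval, the indirect effect is considered significant, as in the analysis reported above.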
- Figure 4 around here -
Further analyses focused on the additional dependent variables and showed that the science
fiction story elicited higher attractiveness ratings (M = 4.32, SD = 0.92), as compared to the
informational leaflet (M = 3.54, SD = 1.01) or the no-information condition (M = 3.66, SD =
1.17), F(2, 69) = 4.16, p = .026, ηp2 = .10 (Figure 3c). The difference between the narrative
condition and the non-narrative condition was significant (p = .012) as was the difference
between the narrative condition and the no-text control condition (p = .030). Moreover, after
reading the science fiction story, participants harbored fewer harmful cognitions (M = 1.75, SD =
0.70) than participants in the non-narrative leaflet condition (M = 2.91, SD = 1.03) or
participants in the control condition (M = 2.86, SD = 1.05), F(2, 69) = 11.38, p < .001, ηp2 = .25
(Figure 3d). The difference between the narrative group and either of the other groups was
significant (ps < .001). For all four dependent variables, no significant difference was found
between scores in the informational leaflet condition and the no-text control condition (all ps > .30).
4. Discussion
The uncanny valley hypothesis (Mori, 1970; Mori, MacDorman, & Kageki, 2012) suggests
that robots whose appearance is modeled after that of humans likely yield a low level of
acceptance. Especially when they come to resemble human beings to a very great extent but not
completely, the robot is regarded as uncanny (eerie, creepy). In line with the social-psychological
meaning maintenance model (Heine et al., 2006), we explained the uncanny valley experience as
a violation of meaning frameworks. Based on the meaning-generating function of stories, we
assumed that a fictional text can engender a context of meaning for an android and thus reduce
negative experiences in a human-robot interaction. A field experiment provided support for our
hypotheses: A fictional story that was presented prior to a real-life human-robot interaction
significantly reduced the experience of eeriness while interacting with the robot, as compared to
a non-narrative informational text and a no-text condition. Thus, only the story was able to
bridge the uncanny valley for our participants. This result supports our idea that readers can
extend their existing meaning frameworks when they are transported into the fictional world of a
story—and thereby prepare for otherwise potentially unsettling encounters with challenging
technological innovations in robotics and beyond.
We assume that the meaning-generating function of science fiction regarding new
technologies is not limited to robotics: Let’s imagine that a multinational company has just
revealed its latest market launch, a teleportation machine named “the transporter”. This
machine is able to dematerialize a person standing on a special transporter platform and
to send the information to any GPS target, where the person’s body is then reconverted into
matter, a process called “beaming”. Sounds somewhat frightening? Maybe not so much for
viewers of the TV series Star Trek (Roddenberry, 1966) who are familiar with the concept of
beaming as an element of this series’ narrative world. In line with our theory and findings, we
suggest that by watching the fictional Star Trek stories, people around the world might have
developed a mental representation of beaming that makes this future technology appear less
frightening. Likewise, anecdotal media reports about audience reactions to the humanoid service
robot VGC-60L—one half of the starring duo in the movie Robot & Frank (Schreier, 2012)—
point to a similar mechanism: Although the robot has been described as creepy when presented
outside the film context, viewers still wanted to get one for themselves after having followed its
story on the screen (Connelly, 2012; Watercutter, 2012).
Beyond its contribution to the field of robotics and to research on responses towards new
technologies, the present study adds to the growing body of empirical literature
that highlights the real-world implications of fictional stories. In previous experimental research,
fictional stories changed recipients’ knowledge (e.g., Dahlstrom, 2012; Marsh, Butler, &
Umanath, 2012; Marsh, Meade, & Roediger, 2003), their attitudes, beliefs, and behavioral
intentions (e.g., Appel & Mara, 2013; Appel & Richter, 2007, 2010; Green & Brock, 2000) as well
as their self-concept (Richter, Appel, & Calio, 2014; Sestir & Green, 2010), and their theory of
mind (e.g., Kidd & Castano, 2013; Mar & Oatley, 2008). We believe that the construction of
meaning frameworks is an additional avenue for future research on narratives.
Limitations of the present research need to be noted. First, our method was a field
experiment. We attempted to guarantee both a high external and internal validity of the
experiment. Experimenting in the field, however, makes it difficult to control all stimuli that
reach participants. Thus, participants might have been distracted by other museum exhibits
or visitors while interacting with the Telenoid. Moreover, the majority of participants were
recruited among persons arriving as real visitors at the Ars Electronica Museum. This group of
people might have shared a higher-than-average interest in science and technology, even though this
museum typically addresses a very broad target group.
Second, regarding our two stimulus texts, we manipulated narrativity whereas
other aspects of both texts were kept similar (e.g., in terms of text length or which information
about the robot’s technical features was embedded). However, the recipients’ affective reactions
to both texts might have differed. For example, the fictional story might have elicited more
positive mood, or more positive emotions such as elevation (Oliver et al., 2012; Oliver &
Bartsch, 2010). Although this does not contradict our theory and interpretation (as meaning
making is likely accompanied by positive affect), future research on the development of meaning
frameworks through stories can profit from incorporating measures of positive affective states.
The uncanniness measurement is a third limitation of our research. We relied on self-
reported ratings mostly based on items from the uncanny valley scales introduced by Ho and
MacDorman (2010). Self-reports are common in the field; however, the adoption of non-
obtrusive objective measures to assess the uncanny valley experience seems desirable in future
research (e.g., psychophysiological measures).
Fourth, prior exposure to science fiction or different genre preferences of our participants
were not taken into account in our field experiment. We believe that such individual
differences did not constitute a confounding factor due to our randomized between-subjects
design. However, individual familiarity with science fiction may influence the effect of an
android robot story on user variables. Future research is encouraged to examine individual media
preferences as a moderating variable of the meaning-generating effects of fictional stories.
Taken together, we believe that this piece of research adds to the literature on narrative
experiences and effects as well as to the literature on user perceptions of humanoid and android
robots. Robotics is a key field of technological progress, and service robots of more or less
humanlike appearance are predicted to appear in more and more places in our everyday life. To
date, however, personal experiences with robots are still limited. A recent Eurobarometer study
(European Commission, 2012), for example, revealed that 87% of EU citizens report that they
have never used a robot before. Given this lack of real-life encounters with robots, narratives and
fiction might instead serve as practical sources for creating meaning frameworks for an otherwise
unknown and possibly uncanny technology whose prevalence is expected to increase
tremendously within the next decades.
References
Appel, M. (2008). Fictional narratives cultivate just-world beliefs. Journal of Communication,
58, 62–83. doi: 10.1111/j.1460-2466.2007.00374.x
Appel, M., & Mara, M. (2013). The persuasive influence of a fictional character's
trustworthiness. Journal of Communication, 63, 912–932. doi: 10.1111/jcom.12053
Appel, M., & Richter, T. (2010). Transportation and need for affect in narrative persuasion: A
mediated moderation model. Media Psychology, 13, 101–135. doi:
10.1080/15213261003799847
Appel, M., & Richter, T. (2007). Persuasive effects of fictional narratives increase over time.
Media Psychology, 10, 113–134. doi: 10.1080/15213260701301194
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of
Personality and Social Psychology, 51, 1173–1182. doi: 10.1037/0022-3514.51.6.1173
Bartsch, A., & Mares, M.-L. (2014). Making sense of violence: Perceived meaningfulness as a
predictor of audience interest in violent media content. Journal of Communication, 64,
956–976. doi: 10.1111/jcom.12112
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.
Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An
empirical test of the relationship between eeriness and the human likeness of digitally
created faces. Computers in Human Behavior, 29, 759–771. doi: 10.1016/j.chb.2012.11.021
Chatman, S. (1980). Story and discourse: Narrative structure in fiction and film. Ithaca, NY:
Cornell University Press.
Connelly, B. (2012, August 8). Another Movie Is Using Possibly Creepy Robot Commercials For
Viral Marketing. Bleeding Cool. Retrieved from http://www.bleedingcool.com
Dahlstrom, M. F. (2012). The persuasive influence of narrative causality: Psychological
mechanism, strength in overcoming resistance, and persistence over time. Media
Psychology, 15, 303–326. doi: 10.1080/15213269.2012.702604
European Commission (2012, September). Special Eurobarometer 382: Public Attitudes
Towards Robots. Retrieved from http://ec.europa.eu/public_opinion/archives/ebs/ebs_382_en.pdf
Freud, S. (1919). Das Unheimliche. Imago. Zeitschrift für Anwendung der Psychoanalyse auf die
Geisteswissenschaften, V, 297–324. [English version: Freud, S. (2004). The Uncanny.
Fantastic Literature: A Critical Reader, 74–101.]
Gerrig, R. J. (1993). Experiencing narrative worlds. New Haven: Yale University Press.
Gilbert, K. R. (2002). A narrative approach to grief research: Finding meaning in stories. Death
Studies, 26, 223–239. doi: 10.1080/07481180211274
Gottschall, J. (2012). The storytelling animal: How stories make us human. New York, NY:
Houghton Mifflin Harcourt.
Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public
narratives. Journal of Personality and Social Psychology, 79, 701–721. doi:
10.1037/0022-3514.79.5.701
Guitton, M. J. (2013). Morphological Conservation in Human-Animal Hybrids in Science
Fiction and Fantasy Settings: Is Our Imagination as Free as We Think It Is? Advances in
Anthropology, 3, 157–163.
Heine, S. J., Proulx, T., & Vohs, K. D. (2006). The Meaning Maintenance Model: On the
coherence of social motivations. Personality and Social Psychology Review, 10, 88–110.
doi: 10.1207/s15327957pspr1002_1
Ho, C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and
validating an alternative to the Godspeed indices. Computers in Human Behavior, 26,
1508–1518. doi: 10.1016/j.chb.2010.05.015
Jentsch, E. (1906). Zur Psychologie des Unheimlichen. Psychiatrisch-neurologische
Wochenschrift, 8, 195–198, 203–205. [English version: Jentsch, E. (1997). On the
psychology of the uncanny. Angelaki: Journal of the Theoretical Humanities, 2, 7–16.]
doi: 10.1080/09697259708571910
Kelly, G. (2014, January 4). The robots are coming. Will they bring wealth or a divided society?
The Guardian. Retrieved from http://www.theguardian.com
Kerby, A. P. (1991). Narrative and self. Bloomington, IN: Indiana University Press.
Kidd, D. C., & Castano, E. (2013). Reading literary fiction improves theory of mind. Science,
342, 377–380. doi: 10.1126/science.1239918
Koay, K. L., Syrdal, D. S., Walters, M. L., & Dautenhahn, K. (2007, August). Living with robots:
Investigating the habituation effect in participants' preferences during a longitudinal
human-robot interaction study. In Proceedings of the 16th IEEE International Symposium on
Robot and Human Interactive Communication (RO-MAN 2007) (pp. 564–569). IEEE.
MacDorman, K. F., Green, R. D., Ho, C., & Koch, C. T. (2009). Too real for comfort? Uncanny
responses to computer generated faces. Computers in Human Behavior, 25, 695–710. doi:
10.1016/j.chb.2008.12.026
MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive
and social science research. Interaction Studies, 7, 297–337. doi: 10.1075/is.7.3.03mac
Mar, R. A., & Oatley, K. (2008). The function of fiction is the abstraction and simulation of
social experience. Perspectives on Psychological Science, 3, 173–192. doi: 10.1111/j.1745-6924.2008.00073.x
Marsh, E. J., Butler, A. C., & Umanath, S. (2012). Using fictional sources in the classroom:
Applications from cognitive psychology. Educational Psychology Review, 24, 449–469.
doi: 10.1007/s10648-012-9204-0
Marsh, E. J., Meade, M. L., & Roediger, H. L. (2003). Learning facts from fiction. Journal of
Memory and Language, 49, 519–536. doi: 10.1016/S0749-596X(03)00092-5
McAdams, D. P. (1985). Power, intimacy and the life story. Homewood, IL: Dorsey.
Miller, M. (2013, January 9). The robots are coming. The Washington Post. Retrieved from
http://www.washingtonpost.com
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33–35.
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field].
IEEE Robotics & Automation Magazine, 19, 98–100.
Newenham, P. (2014, January 6). The robots are coming – and their advance may prove just
irresistible. The Irish Times. Retrieved from http://www.irishtimes.com/
Nomura, T., Kanda, T., Suzuki, T., & Kato, K. (2008). Prediction of Human Behavior in
Human-Robot Interaction Using Psychological Scales for Anxiety and Negative
Attitudes Toward Robots. IEEE Transactions on Robotics, 24, 442–451. doi: 10.1109/TRO.2007.914004
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006, September). Measurement of anxiety
toward robots. In Proceedings of the 15th IEEE International Symposium on Robot and Human
Interactive Communication (RO-MAN 2006) (pp. 372–377). IEEE.
Ogawa, K., Nishio, S., Koda, K., Taura, K., Minato, T., Ishii, C. T., & Ishiguro, H. (2011,
August). Telenoid: tele-presence android for communication. In ACM SIGGRAPH 2011
Emerging Technologies (p. 15). ACM. doi: 10.1145/2048259.2048274
Oliver, M. B., & Raney, A. A. (2011). Entertainment as pleasurable and meaningful: Identifying
hedonic and eudaimonic motivations for entertainment consumption. Journal of
Communication, 61, 984–1004. doi: 10.1111/j.1460-2466.2011.01585.x
Oliver, M. B., Hartmann, T., & Woolley, J. K. (2012). Elevation in Response to Entertainment
Portrayals of Moral Virtue. Human Communication Research, 38, 360–378. doi: 10.1111/j.1468-2958.2012.01427.x
Oliver, M. B., & Bartsch, A. (2010). Appreciation as audience response: Exploring entertainment
gratifications beyond hedonism. Human Communication Research, 36, 53–81. doi:
10.1111/j.1468-2958.2009.01368.x
Polkinghorne, D. E. (1988). Narrative knowing and the human sciences. Albany, NY: State
University of New York Press.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and
comparing indirect effects in multiple mediator models. Behavior Research Methods, 40,
879–891. doi: 10.3758/BRM.40.3.879
Proulx, T., Heine, S. J., & Vohs, K. D. (2010). When is the unfamiliar the uncanny? Meaning
affirmation after exposure to absurdist literature, humor, and art. Personality and Social
Psychology Bulletin, 36, 817–829. doi: 10.1177/0146167210369896
Reichardt, J. (1978). Robots: fact, fiction and prediction. London, UK: Thames & Hudson.
Richter, T., Appel, M., & Calio, F. (2014). Stories can influence the self-concept. Social
Influence, 9, 172-188. doi: 10.1080/15534510.2013.799099
Roddenberry, G. (Creator & Producer) (1966). Star Trek: The Original Series [TV Series].
Desilu/Paramount.
Rossiter, M. (1999). A narrative approach to development: Implications for adult education.
Adult Education Quarterly, 50, 56–71. doi: 10.1177/07417139922086911
Sarbin, T. R. (1986). Narrative psychology: The storied nature of human conduct. New York,
NY: Praeger/Greenwood Publishing Group.
Schreier, J. (Director) (2012). Robot & Frank [Film]. Dog Run Pictures.
Sestir, M., & Green, M. C. (2010). You are who you watch: Identification and transportation
effects on temporary self-concept. Social Influence, 5, 272–288. doi:
10.1080/15534510.2010.490672
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression
of artificial human faces. Presence: Teleoperators and Virtual Environments, 16, 337–
351. doi: 10.1162/pres.16.4.337
Swift, G. (1983). Waterland. New York, NY: Poseidon Press.
Wakefield, J. (2014, January 13). Singularity: The robots are coming to steal our jobs. BBC
Online. Retrieved from http://www.bbc.co.uk/
Watercutter, A. (2012, August 8). Robot & Frank’s vaguely creepy ads want to sell you an
android. Wired. Retrieved from http://www.wired.com
Yamada, Y., Kawabe, T., & Ihaya, K. (2013). Categorization difficulty is associated with
negative evaluation in the “uncanny valley” phenomenon. Japanese Psychological
Research, 55, 20–32. doi: 10.1111/j.1468-5884.2012.00538.x
Figures
Figure 1. The uncanny valley as proposed by Mori (1970): the hypothesized relationship
between the human-likeness of artificial figures and the emotional responses of human
interaction partners (also see Mori, MacDorman, & Kageki, 2012).
Figure 2. In our field experiment, participants interacted with the android robot Telenoid (left),
which was tele-operated by a museum employee (middle). During the interaction, participants
sat on a sofa in one of the museum’s exhibition halls (right).
Figure 3. Means and standard errors of the mean for human-likeness (a), eeriness (b),
attractiveness (c), and harmful cognitions (d) under the three experimental conditions.
Figure 4. The relationships between the text treatment (narrative versus non-narrative), attributed
human-likeness, and eeriness of the android robot Telenoid. Regression weights for the total
effect (upper part) and for the model with human-likeness included as a mediator (lower part).