Science fiction reduces the eeriness of android robots: A field experiment
Martina Mara
Ars Electronica Futurelab, Linz, Austria
Markus Appel
Department of Psychology, University of Koblenz-Landau, Germany
Article accepted for publication in Computers in Human Behavior
Many thanks to the team of the Ars Electronica Center—especially Nicole Grueneis who
operated the robot—and to Hiroshi Ishiguro and Kohei Ogawa for providing the Telenoid and
helpful comments. This research was supported by the Austrian Science Fund (FWF-I-996-G22).
Please address correspondence to the authors.
As suggested by the uncanny valley hypothesis, robots that resemble humans likely elicit
feelings of eeriness. Based on the social-psychological model of meaning maintenance, we
expected that the uncanny valley experience could be mitigated through a fictional story, due to
the meaning-generating function of narratives. A field experiment was conducted, in which 75
participants interacted with the humanlike robot Telenoid. Prior to the interaction, they either
read a short story, a non-narrative leaflet about the robot, or they received no preliminary
information. Eeriness ratings were significantly lower in the science fiction condition than in
both other conditions. This effect was mediated by higher perceived human-likeness of the robot.
Our findings suggest that science fiction may provide meaning for otherwise unsettling future technologies.
Keywords: Human-robot interaction, Uncanny valley, Narrative, Meaning maintenance model,
Field experiment
Science fiction reduces the eeriness of android robots: A field experiment
“As long as there’s a story, it’s all right.” – Graham Swift, Waterland (Swift, 1983)
1. Introduction
“The robots are coming”: this headline, recently popular in mass media around the world (e.g., Kelly, 2014; Miller, 2013; Newenham, 2014; Wakefield, 2014), mirrors an ongoing acceleration of technological progress. From self-driving vehicles and autonomous delivery drones to a spaceman-lookalike robo-assistant able to serve your future tea, the industry announces that its current prototypes will become commonplace technologies within the next two decades. Among the various forms of such autonomous systems, social service robots
of more or less humanlike appearance represent one of the key fields. Plans for their increasing
use in geriatric care, as household helpers, or as communication companions already exist in
many countries. However, many people are still skeptical about the idea of being assisted by or
teaming up with a robotic creature in their day-to-day life (e.g., European Commission, 2012;
Nomura, Kanda, Suzuki, & Kato, 2008; Nomura, Suzuki, Kanda, & Kato, 2006). In particular,
android robots that mimic the visual appearance, motion, sound, or behavior of human beings,
are reported to likely provoke aversive reactions among human interaction partners (e.g., Ho &
MacDorman, 2010; Mori, 1970), especially at the first encounter (e.g., Koay, Syrdal, Walters, &
Dautenhahn, 2007).
We argue that negative responses towards android robots (often captured by the term
uncanny valley, see below) are reduced when people become accustomed to the robot by means
of a fictional story. Science fiction can provide a meaning framework that increases the
likelihood of positive interactions with future technologies—such as android robots. After an
introduction of our main theoretical underpinnings, we will present the results of a field
experiment that involved real-life human-android interactions.
1.1 The Uncanny Valley
In 1970, Masahiro Mori, a roboticist at the Tokyo Institute of Technology, introduced the
hypothetical concept of “bukimi no tani” (Mori, 1970), which was later translated as “uncanny
valley” (Reichardt, 1978). According to Mori, the closer robots resemble humans, the more
positive responses such as familiarity, acceptance, and empathy are triggered in human observers
and interaction partners. However, at some point of high similarity, but not perfect resemblance,
the acceptance will drop sharply and the robot will be rated as unfamiliar, eerie, or uncanny
(Mori, MacDorman, & Kageki, 2012). With even greater human-likeness, positive perceptions
are supposed to increase again (see Figure 1).
- Figure 1 around here -
Psychological theory and research on the uncanny and related experiences dates back to the
beginning of the twentieth century. In his 1906 article “Zur Psychologie des Unheimlichen” (On
the psychology of the uncanny) Viennese psychologist Ernst Jentsch mentioned the “doubt as to
whether an apparently living being really is animate and, conversely, doubt as to whether a
lifeless object may not in fact be animate” as a powerful cause of uncanny feelings (Jentsch,
1906/1997, p. 197). Jentsch’s thoughts on the eeriness of automata were picked up by Sigmund
Freud. In his popular essay “Das Unheimliche” (The uncanny) Freud further suggested that the
(encounter with the) unfamiliar in the familiar (situation) might be an important driver of
uncanny feelings (Freud, 1919).
Fuelled by the continuing efforts to develop humanlike robots by research labs and the
industry, Mori’s uncanny valley phenomenon has become a popular concept in the field of
robotics. Despite the widespread use of the term, only a few empirical investigations have dealt with the uncanny valley and the factors that drive it. Of the comparably small
body of available empirical literature, to date almost all studies have focused on (manipulating)
the visual appearance of a robot along a human-likeness continuum as their independent variable.
Participants’ reactions then were typically assessed for different conditions of the creature’s
appearance (cf., MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Seyama
& Nagayama, 2007), for example, recipients reported the level of eeriness they experienced
while watching a picture of a human, a robot, or a hybrid figure. Research from this tradition
suggested that the uncanny valley experience might be associated with categorization difficulties
induced by anthropomorphic artifacts that do not only have attributes typical for human nature,
but at the same time show characteristics of other ontological categories (e.g. machine, puppet,
goat, cf. Burleigh, Schoenherr, & Lacroix, 2013; Yamada, Kawabe, & Ihaya, 2013, see also
Guitton, 2013, for human-animal hybrids in fantasy settings).
Based on the Meaning Maintenance Model (Heine, Proulx, & Vohs, 2006) and in line with the research reviewed above, we explain the uncanny valley phenomenon as a threat to the
user’s meaning framework. Meaning is “what links people, places, objects, and ideas to one
another in expected and predictable ways” (Heine et al., 2006, p. 89). Individuals organize their
experiences into meaning frameworks, which are mental representations of expected
associations. Based on Freud’s discussions of the uncanny (Freud, 1919), Proulx, Heine, and
Vohs (2010) suggested that feelings of uncanniness arise from a mixture of familiarity and
unfamiliarity that does not fit an individual’s meaning frameworks. We suppose that for most
persons, their meaning framework for robots still includes a rather broad and flexible range of
expectations (e.g. regarding variations in size, color, symmetry, movement, or audio signals). In
contrast, their meaning framework for other human beings is very likely to be more restricted
(e.g., variations in shape, symmetry, and size have to stay in a very limited range to be clearly
identifiable as human). The more robots resemble humans, the more likely the meaning
framework for humans may be activated. Unless the android matches an individual’s
expectancies about the ontological category of human beings in a clearly identifiable and
(nearly) perfect manner, however, the meaning framework is expected to be violated, resulting in
an uncanny experience.
We further assume that in a given human-android encounter, the violation of meaning
frameworks is not only subject to the visual appearance or other features of the robot; rather, we
believe that the uncanny experience varies substantially with the human being that interacts with
the robot. The experience may vary with rather stable characteristics of the interaction partner
such as her cultural background or personality. Importantly, we further assume that the uncanny
valley experience can be changed by prior situational or contextual factors such as the
information provided to users prior to interacting with the robot. Any information that
successfully provides a valid meaning framework for the robot may decrease the uncanniness
elicited by an android robot.
1.2 Stories as Meaning Makers
Our focus here is on the meaning-generating function of fiction. We assume that science
fiction can provide a meaning framework for android robots (and possibly other new
technologies) that makes these innovations appear less scary and uncanny. Throughout the ages,
humankind tended to tell stories in order to overcome potential threats to established meaning
frameworks (e.g., in the context of mortality salience, terror management, and grief, cf. Gilbert,
2002), to share moral values and purpose with others, and to explain to children why things are
the way they are (cf. Gottschall, 2012). As such, narratives have always been a primal form of
communicating meaning, from early myths and religious writings to contemporary multimedia
manifestations or science fiction movies. Humans are meaning makers (Heine et al., 2006) and
storytelling animals (Gottschall, 2012) at the same time. If our storytelling minds – “allergic to
uncertainty” and “addicted to meaning” – are not able to find meaningful patterns in the real
world, they will have to impose them (Gottschall, 2012, p. 103). Possibly, our need for meaning
is one of the main factors underlying the popularity of fictional stories (Oliver & Raney, 2011;
see also Bartsch & Mares, 2014; Oliver, Hartmann, & Woolley, 2012).
Our thoughts about the meaning-generating function of fictional stories are backed by
theorists from different disciplinary backgrounds. Several scholars have argued that stories are
the instrument through which people create meaning of experience and identity (Bruner, 1990;
Kerby, 1991; Polkinghorne, 1988; Rossiter, 1999; Sarbin, 1986). Narrative form—including
elements of plot, character, setting, scene, and theme—helps us to organize single experiences of
our life and relate them to each other (McAdams, 1985; Polkinghorne, 1988; Rossiter, 1999). In
classical narratives, events are linked to each other as cause to effect (Chatman, 1980). Thus, the
narrative structure transforms separate incidents into interrelated sequences of causality in such a
way that they can be understood within an integrated context or meaningful principle (Kerby,
1991). By their inherent causal sequences, stories also provide order. When we seek certainty,
classical narratives, due to their structural composition, are familiar, expectable, and comforting
to us (Appel, 2008).
Absurdist literature, in contrast, violates these expectations on purpose. Thus, examples of
this genre should affront rather than strengthen or establish meaning frameworks. In order to test
this prediction, Proulx and colleagues (2010) compared the effects of reading a conventional
parable by Aesop with an absurdist parable by Franz Kafka. Based on the meaning maintenance
model, the authors expected that a violation of meaning frameworks should increase the
importance of one’s cultural identity (as a means to compensate for the meaning violation). As
expected, after reading Kafka, cultural identity importance was higher than after reading Aesop.
In contrast to absurdist literature, most fictional stories encountered in popular mass media
follow an expected structure that arranges information in causal sequences. Our understanding of
the “lifelike” events in the story (Bruner, 1986) thereby goes beyond the cognitive processing of
logic arguments. As a consequence of the meaning-generating function of stories, science fiction
is expected to draw relations between a new, possibly frightening technology and familiar places,
objects, routines, or rules. It allows readers to experience a remote narrative world and to identify
with otherwise foreign characters (Gerrig, 1993). Story worlds can be safe places that allow the
development of meaning frameworks for future technologies that otherwise appear foreign,
unpleasant or eerie—as described in works on the uncanny valley phenomenon.
1.3 Study Overview and Predictions
People have a natural need for meaning and organize their perceptions of the world through
glasses shaped by their mental representations of expected relations, that is, their meaning
frameworks (e.g., Heine et al., 2006). Based on that, a confrontation with an unfamiliar, very
humanlike robot may evoke feelings of ambiguity and uncanniness, because it violates our
meaning framework for human beings. As a function of the meaning-generating power of stories,
we assume that a science fiction narrative can compensate for this lack of meaning and provide a
fitting context for the otherwise unsettling machine.
We conducted a field experiment in a technology museum. Participants interacted with the
android robot Telenoid. We expected that a science fiction story with this robot as the
protagonist, presented prior to an actual interaction with it, would increase the perceived human-
likeness of the robot—as compared to non-narrative pre-information of comparable length and
content or a no-text condition. Moreover, participants in the narrative condition should
experience less eeriness, the key indicator of the uncanny valley phenomenon. The story-effect
on eeriness should at least in part be mediated through greater ascribed human-likeness. We
further expected that the science fiction story would yield higher ratings of the robot’s attractiveness and fewer harmful cognitions toward the robot.
2. Method
2.1 Participants
Seventy-five visitors of the Austrian technology museum Ars Electronica Center took part
in our field experiment. They were recruited either directly in the entrance area of the museum or
through an introductory university course (results for both subgroups did not differ). Regarding
the purpose of the study, participants were told that several new exhibits in the museum needed
to be evaluated by visitors and that they would get assigned to just one of them. Data from three
participants were excluded from statistical analyses because they did not finish the material.
Thus, our final sample consisted of 72 participants (41 women) between the ages of 17 and 68
years (M = 31.24, SD = 11.56). The educational level of the participants was rather high: 27 (38%) had a university degree, 39 (54%) had a university entrance qualification (Matura), and the remaining six participants (8%) had obtained basic school education. All participants were native
German speakers.
2.2 The robot
We chose Hiroshi Ishiguro’s Telenoid for the experimental human-robot interaction. It is a
puppet-sized tele-presence robot of humanlike appearance and able to physically transmit a
human operator’s voice, facial expressions, and gestures (Ogawa et al., 2011; also see Figure 2). Anecdotal evidence suggests that the
Telenoid evokes strong uncanny reactions at the first encounter (user comments on YouTube
range from “The scariest robot I’ve ever seen” to “Kill it with fire”). The robot was installed in the Ars Electronica Center’s public robot lab, which is part of one of the museum’s main
exhibition halls. No participant had been in contact with the involved robot before.
- Figure 2 around here -
2.3 Stimulus texts
One out of two texts (or no text) was presented before the participants interacted with the
robot: a narrative science fiction story or a non-narrative fact-based text. In view of the experiment’s
validity, it was crucial to manipulate the stimulus texts systematically. They should differ in terms of
their narrative form, but apart from this allow for comparability. Both texts, therefore, were
designed to be of the same length (one page each) and to describe key features of the Telenoid. This
included its appearance (puppet-sized, humanlike looks, light skin), its functionality (e.g., transmits
an operator’s voice and facial expressions in real-time), and its suggested purpose (e.g., helps the
elderly to stay in contact with family members). Whereas the non-narrative text was an adaptation
of the Telenoid leaflet given to ordinary museum visitors, the narrative was created especially for
this experiment. We chose a science fiction short story that provided a fitting contextual frame for
the robot protagonist. The pre-defined content regarding Telenoid’s appearance, functionality, and
purpose was embedded in a story plot that introduced the Telenoid as an extraterrestrial traveler who
comes to Earth and helps a grandmother contact her granddaughter who lives thousands of
kilometers away. In a scene that plays in the living room of the elderly lady, the robot transmits
messages and emotions over the long physical distance between the two of them, as illustrated in the following short excerpt: “(…) With its tele-presence function, the little robot managed to
create a feeling of closeness between Johanna and her granddaughter Andrea (…) Not only did
Telenoid convey the young woman’s voice, but with his artificial muscles he was also imitating her
movements while Johanna was holding the Telenoid in her arms on the living room couch (…)”.
The informational leaflet, in comparison, described the robot’s function and purpose in a purely
factual way, as in this example: “When two people are connected via Telenoid, (…) a feeling should
evolve that the other person is actually present. This impression of ‘tele-presence’ is mainly based
on the fact that the robot mirrors the gestures of the other person through its body (…)”.
2.4 Dependent measures
Our dependent measures were self-report ratings of the participants’ experience while
interacting with the robot, including attributed human-likeness, eeriness, attractiveness, and
harmful cognitions towards the Telenoid.
Human-likeness was measured with the help of five items (7-point semantic differential
scale, e.g., 1 = very machinelike, 7 = very humanlike; 1 = very artificial, 7 = very lifelike). This
index yielded good reliability, as indicated by Cronbach’s α = .82. Eeriness was assessed with
five items (e.g., 1 = very reassuring, 7 = very eerie; 1 = very bland, 7 = very uncanny). A reliable
index could be formed, as indicated by a Cronbach’s α = .86. We measured the attractiveness of
the robot again with five items (e.g., 1 = very unattractive, 7 = very attractive; 1 = very ugly, 7 =
very beautiful) which yielded good reliability, Cronbach’s α = .82. Whereas our measures for the
attribution of human-likeness, eeriness, and attractiveness were based on the uncanny valley
indices by Ho and MacDorman (2010), we further examined participants’ harmful cognitions
with a five-item scale that was developed for the purpose of this study (e.g., The Telenoid should
get destroyed after the end of this exhibition; I can imagine that I would harm the Telenoid if I
got an opportunity to do so). The items went with a seven-point scale (1 = not at all; 7 = very
much). The reliability was satisfactory, Cronbach’s α = .74.
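The reliability coefficients reported above follow Cronbach’s standard formula, α = k/(k − 1) · (1 − Σσ²_item / σ²_total), where k is the number of items. As a minimal illustrative sketch (not the authors’ own analysis code, which is not published), the computation can be expressed in Python:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of k lists, each holding one item's scores
           across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance, as in the standard formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(variance(col) for col in items)
    # total scale score per respondent = sum across the k items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))
```

Two perfectly correlated items yield α = 1; weakly related items pull α toward zero, which is why values above roughly .70 (as for all four scales here) are read as acceptable reliability.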
Because our field experiment was conducted in Austria, all items stemming from pre-existing scales had to be translated into German. To ensure validity, the authors first developed two German versions independently of each other with the help of English native speakers. In a second step, inconsistent translations were discussed with a professional translator before the final items were added to the German survey.

Footnote: Other variables measured but not reported here were perceived usefulness and the participants’ intention to purchase the Telenoid for themselves. These variables followed the same pattern of results as the variables reported here. The results on these two variables were presented at the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, 2013, in a two-page poster report.
2.5 Procedure
Each experimental session involved one participant. After arriving at the museum, the
participant was randomly assigned to either read the narrative short story about the Telenoid, to
read the non-narrative information leaflet about it, or to receive no preliminary information.
After having read their text (or immediately in the no-text condition), participants were taken to
the museum’s RoboLab, where the Telenoid was installed. During their interaction with the
Telenoid, participants sat on a sofa and held the robot on their lap. Because the Telenoid is a tele-
presence robot that needs to be remotely controlled by a human being, a female research assistant
operated the robot from a hidden room. This operator assisted in all experimental sessions and
was blind to the experimental conditions. She conversed with the participant on the basis of a predefined script (including questions on how the participant had come to the museum that day
and what the weather was like). After five minutes of interaction, the participant was guided to
another place and worked on a questionnaire that included the dependent variables and demographic questions.
The experiment followed a one-way between-subjects design. The statistical analyses
included ANOVAs to identify main effects of the experimental factor as well as statistical
procedures to test for mediation based on the classic stepwise approach (Baron & Kenny, 1986)
and the bootstrapping approach (e.g., Preacher & Hayes, 2008).
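For readers unfamiliar with the procedure: a one-way between-subjects ANOVA compares the variance between condition means to the variance within conditions. The following is a minimal sketch in plain Python (illustrative only; the reported analyses were presumably run in a standard statistics package):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way between-subjects design.

    groups: list of lists, one list of scores per experimental condition.
    Returns (F, df_between, df_within, partial_eta_squared).
    """
    k = len(groups)                        # number of conditions
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # variability of the condition means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # variability of individual scores around their own condition mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    # in a one-way design, partial eta squared = SS_between / (SS_between + SS_within)
    eta_p2 = ss_between / (ss_between + ss_within)
    return f_stat, df_between, df_within, eta_p2
```

The effect sizes reported below (e.g., ηp² = .12 for human-likeness) follow the same relation between the F statistic and its degrees of freedom.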
3. Results
Zero-order correlations indicated a positive association between human-likeness and
attractiveness (r = .66, p < .001) and a positive association between eeriness and harmful
cognitions (r = .59, p < .001). Higher scores of human-likeness were related to lower feelings of
eeriness (r = -.50, p < .001) and harmful cognitions (r = -.61, p < .001). Eeriness and harmful
cognitions were both negatively related to attractiveness (r = -.67, p < .001; r = -.72, p < .001).
In line with our main assumptions, participants who read the science fiction story perceived
the robot to be more humanlike (M = 3.63, SD = 1.07) than those in the informational leaflet
condition (M = 2.88, SD = 1.04) or those in the control condition (M = 2.95, SD = 1.15), F(2, 69)
= 4.71, p = .012, ηp2 = .12 (Figure 3a). Human-likeness was significantly higher in the narrative
condition than in the non-narrative condition (p = .007) or in the no-text condition (p = .013).
Substantial differences were also obtained for eeriness: Participants in the science fiction
condition experienced lower eeriness when interacting with the robot (M = 3.73, SD = 0.93) than
participants who read the informational leaflet (M = 4.73, SD = 0.90) or control group members
(M = 4.78, SD = 1.18), F(2, 69) = 7.95, p = .001, ηp2 = .19 (Figure 3b). The perceived eeriness
was significantly lower in the narrative condition than in the non-narrative condition (p = .001)
or the no-text condition (p = .001).
- Figure 3 around here -
We assumed that the influence of the science fiction story on the eeriness experience was
mediated by the perceived human-likeness of the robot. Our mediation analysis involved the
classic procedure by Baron and Kenny (1986), as well as the more recent bootstrapping
approach. Following the classic mediation test procedure introduced by Baron and Kenny, we
first conducted a set of stepwise regression analyses (see Figure 4). As indicated earlier, reading
a science fiction story in contrast to a non-narrative leaflet prior to the human-robot interaction
significantly predicted the level of eeriness that our participants experienced, B = -.99, SEB = .27,
p = .001 (path c in Figure 4). The human-likeness that the participants attributed to the android was also predicted by the text treatment, B = .86, SEB = .29, p = .005 (path a). We subsequently
regressed perceived eeriness on both the text treatment and the attributed human-likeness. In this
analysis, human-likeness predicted eeriness, B = -.41, SEB = .12, p = .002 (path b). At the same
time, the regression weight for the direct effect on path c diminished to B = -.64, SEB = .26, p = .02 (path c′). To formally test for mediation we ran a bootstrapping analysis based on the
recommended procedure and SPSS macro by Preacher and Hayes (2008) for indirect effect
analyses. We entered eeriness as the criterion, our experimental conditions as the predictor
variable, and human-likeness as the proposed mediator variable. As a result, the 95% confidence
interval for the indirect effect using 5,000 samples did not include zero (lower limit = -0.7625,
upper limit = -0.1234), which indicates a significant mediational role of human-likeness. In sum,
both the bootstrapping and the multiple regression analyses suggest that introducing an android
through science fiction induced higher human-likeness ratings, which in turn led to lower
experience of eeriness while interacting with the robot.
- Figure 4 around here -
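The bootstrapping logic behind the Preacher and Hayes (2008) macro can be sketched as follows: resample cases with replacement, re-estimate the a path (treatment → mediator) and b path (mediator → outcome, controlling for treatment) in each resample, and read the confidence interval off the percentiles of the resulting a·b distribution. This is a simplified illustration with hypothetical variable names, not the SPSS macro the authors used:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: a = slope of mediator on treatment; b = partial slope
    of outcome on mediator, controlling for treatment."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

def bootstrap_indirect_ci(x, m, y, n_boot=5000, ci=95, seed=1):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(estimates, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return lo, hi  # mediation is supported if 0 lies outside [lo, hi]
```

If the resulting interval excludes zero, as it did in the present study (lower limit = -0.7625, upper limit = -0.1234), the indirect effect is considered significant.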
Further analyses focused on the additional dependent variables and showed that the science
fiction story elicited higher attractiveness ratings (M = 4.32, SD = 0.92), as compared to the
informational leaflet (M = 3.54, SD = 1.01) or the no-information condition (M = 3.66, SD =
1.17), F(2, 69) = 4.16, p = .026, ηp2 = .10 (Figure 3c). The difference between the narrative
condition and the non-narrative condition was significant (p = .012) as was the difference
between the narrative condition and the no-text control condition (p = .030). Moreover, after
reading the science fiction story, participants harbored fewer harmful cognitions (M = 1.75, SD =
0.70), than participants in the non-narrative leaflet condition (M = 2.91, SD = 1.03) or
participants in the control condition (M = 2.86, SD = 1.05), F(2, 69) = 11.38, p < .001, ηp2 = .25
(Figure 3d). The difference between the narrative group and either of the other groups was
significant (ps < .001). For all four dependent variables, no significant difference was found between scores in the informational leaflet condition and the no-text control condition.
4. Discussion
The uncanny valley hypothesis (Mori, 1970; Mori, MacDorman, & Kageki, 2012) suggests
that robots whose appearance is modeled after that of humans likely yield a low level of
acceptance. Especially when they come to resemble human beings to a very great extent but not
completely, the robot is regarded as uncanny (eerie, creepy). In line with the social-psychological
meaning maintenance model (Heine et al., 2006), we explained the uncanny valley experience as
a violation of meaning frameworks. Based on the meaning-generating function of stories, we
assumed that a fictional text can engender a context of meaning for an android and thus reduce
negative experiences in a human-robot interaction. A field experiment provided support for our
hypotheses: A fictional story that was presented prior to a real-life human-robot interaction
significantly reduced the experience of eeriness while interacting with the robot, as compared to
a non-narrative informational text and a no-text condition. Thus, only the story was able to
bridge the uncanny valley for our participants. This result supports our idea that readers can
extend their existing meaning frameworks when they are transported into the fictional world of a
story—and thereby prepare for otherwise potentially unsettling encounters with challenging
technological innovations in robotics and beyond.
We assume that the meaning-generating function of science fiction regarding new
technologies is not limited to robotics: Imagine that a multinational company has just revealed its latest product, a teleportation machine named “the transporter”. This machine is able to dematerialize a person who steps onto a special transporter platform and to send the information to any GPS target, where the person’s body is then reconverted into matter, a process they call “beaming”. Sounds somewhat frightening? Maybe not so much for
viewers of the TV series Star Trek (Roddenberry, 1966) who are familiar with the concept of
beaming as an element of this series’ narrative world. In line with our theory and findings, we
suggest that by watching the fictional Star Trek stories, people around the world might have
developed a mental representation of beaming which makes this future technology appear less
frightening. Likewise, anecdotal media reports about audience reactions to the humanoid service
robot VGC-60L—one half of the starring duo in the movie Robot & Frank (Schreier, 2012)—
point to a similar mechanism: Although the robot has been described as creepy when presented
outside the film context, viewers still wanted to get one for themselves after having followed its
story on the screen (Connelly, 2012; Watercutter, 2012).
In addition to the contribution of the present research to the field of robotics and the
responses towards new technologies, it contributes to the growing body of empirical literature
that highlights the real-world implications of fictional stories. In previous experimental research,
fictional stories changed recipients’ knowledge (e.g., Dahlstrom, 2012; Marsh, Butler, &
Umanath, 2012; Marsh, Meade, & Roediger, 2003), their attitudes, beliefs, and behavioral
intentions (e.g., Appel & Mara, 2013; Appel & Richter, 2007, 2010; Green & Brock, 2000) as well
as their self-concept (Richter, Appel, & Calio, 2014; Sestir & Green, 2010), and their theory of
mind (e.g., Kidd & Castano, 2013; Mar & Oatley, 2008). We believe that the construction of
meaning frameworks is an additional avenue for future research on narratives.
Limitations of the present research need to be noted. First, our method was a field
experiment. We attempted to guarantee both a high external and internal validity of the
experiment. Experimenting in the field, however, yields difficulties in controlling all incoming
stimuli for participants. Thus, participants might have been distracted by other museum exhibits
or visitors while interacting with the Telenoid. Moreover, the majority of participants were
recruited among persons arriving as real visitors at the Ars Electronica Museum. This group of
people might have shared a higher than average interest in science and technology, even if this
museum typically addresses a very broad target group.
Second, regarding our two stimulus texts, we tried to manipulate the narrativity whereas
other aspects of both texts were kept similar (e.g., in terms of text length or which information
about the robot’s technical features was embedded). However, the recipients’ affective reactions
to both texts might have differed. For example, the fictional story might have elicited more
positive mood, or more positive emotions such as elevation (Oliver et al., 2012; Oliver &
Bartsch, 2010). Although this does not contradict our theory and interpretation (as meaning
making is likely accompanied by positive affect), future research on the development of meaning
frameworks through stories can profit from incorporating measures of positive affective states.
A third limitation concerns the measurement of uncanniness. We relied on self-reported
ratings, mostly based on items from the uncanny valley scales introduced by Ho and
MacDorman (2010). Self-reports are common in the field; however, future research would
benefit from adopting unobtrusive, objective measures of the uncanny valley experience
(e.g., psychophysiological measures).
Fourth, our field experiment did not take into account participants' prior exposure to
science fiction or their genre preferences. We believe that such individual
differences did not constitute a confounding factor due to our randomized between-subjects
design. However, individual familiarity with science fiction may influence the effect of an
android robot story on user variables. Future research is encouraged to examine individual media
preferences as a moderating variable of the meaning-generating effects of fictional stories.
Taken together, we believe that this piece of research adds to the literature on narrative
experiences and effects as well as to the literature on user perceptions of humanoid and android
robots. Robotics is a key field of technological progress, and service robots of more or less
humanlike appearance are predicted to appear in ever more areas of everyday life. To
date, however, personal experience with robots is still limited. A recent Eurobarometer study
(European Commission, 2012), for example, revealed that 87% of EU citizens report never
having used a robot. Given this lack of real-life encounters with robots, narratives and
fiction instead might serve as practical sources to create meaningful frameworks for an otherwise
unknown and possibly uncanny technology whose prevalence is expected to increase
tremendously within the next decades.
References

Appel, M. (2008). Fictional narratives cultivate just-world beliefs. Journal of Communication,
58, 62–83. doi: 10.1111/j.1460-2466.2007.00374.x
Appel, M., & Mara, M. (2013). The persuasive influence of a fictional character's
trustworthiness. Journal of Communication, 63, 912–932. doi: 10.1111/jcom.12053
Appel, M., & Richter, T. (2010). Transportation and need for affect in narrative persuasion: A
mediated moderation model. Media Psychology, 13, 101–135. doi:
Appel, M., & Richter, T. (2007). Persuasive effects of fictional narratives increase over time.
Media Psychology, 10, 113–134. doi: 10.1080/15213260701301194
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of
Personality and Social Psychology, 51, 1173–1182. doi: 10.1037/0022-3514.51.6.1173
Bartsch, A., & Mares, M.-L. (2014). Making sense of violence: Perceived meaningfulness as a
predictor of audience interest in violent media content. Journal of Communication, 64,
956–976. doi: 10.1111/jcom.12112
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.
Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An
empirical test of the relationship between eeriness and the human likeness of digitally
created faces. Computers in Human Behavior, 29, 759–771. doi: 10.1016/j.chb.
Chatman, S. (1980). Story and discourse: Narrative structure in fiction and film. Ithaca, NY:
Cornell University Press.
Connelly, B. (2012, August 8). Another Movie Is Using Possibly Creepy Robot Commercials For
Viral Marketing. Bleeding Cool. Retrieved from
Dahlstrom, M. F. (2012). The persuasive influence of narrative causality: Psychological
mechanism, strength in overcoming resistance, and persistence over time. Media
Psychology, 15, 303–326. doi: 10.1080/15213269.2012.702604
European Commission (2012, September). Special Eurobarometer 382: Public Attitudes
Towards Robots. Retrieved from
Freud, S. (1919). Das Unheimliche. Imago. Zeitschrift für Anwendung der Psychoanalyse auf die
Geisteswissenschaften, V, 297–324. [English version: Freud, S. (2004). The Uncanny.
Fantastic Literature: A Critical Reader, 74–101.]
Gerrig, R. J. (1993). Experiencing narrative worlds. New Haven: Yale University Press.
Gilbert, K. R. (2002). A narrative approach to grief research: Finding meaning in stories. Death
Studies, 26, 223–239. doi: 10.1080/07481180211274
Gottschall, J. (2012). The storytelling animal: How stories make us human. New York, NY:
Houghton Mifflin Harcourt.
Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public
narratives. Journal of Personality and Social Psychology, 79, 701–721. doi:
Guitton, M. J. (2013). Morphological Conservation in Human-Animal Hybrids in Science
Fiction and Fantasy Settings: Is Our Imagination as Free as We Think It Is?. Advances in
Anthropology, 3, 157–163.
Heine, S. J., Proulx, T., & Vohs, K. D. (2006). The Meaning Maintenance Model: On the
coherence of social motivations. Personality and Social Psychology Review, 10, 88–110.
doi: 10.1207/s15327957pspr1002_1
Ho, C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and
validating an alternative to the Godspeed indices. Computers in Human Behavior, 26,
1508–1518. doi: 10.1016/j.chb.2010.05.015
Jentsch, E. (1906). Zur Psychologie des Unheimlichen. Psychiatrisch-neurologische
Wochenschrift, 8, 195–198, 203–205. [English version: Jentsch, E. (1997). On the
psychology of the uncanny. Angelaki: Journal of the Theoretical Humanities, 2, 7–16.]
doi: 10.1080/09697259708571910
Kelly, G. (2014, January 4). The robots are coming. Will they bring wealth or a divided society?
The Guardian. Retrieved from
Kerby, A. P. (1991). Narrative and self. Bloomington, IN: Indiana University Press.
Kidd, D. C., & Castano, E. (2013). Reading literary fiction improves theory of mind. Science,
342, 377–380. doi: 10.1126/science.1239918
Koay, K. L., Syrdal, D. S., Walters, M. L., & Dautenhahn, K. (2007, August). Living with robots:
Investigating the habituation effect in participants' preferences during a longitudinal
human-robot interaction study. In Robot and Human Interactive Communication, 2007.
RO-MAN 2007. The 16th IEEE International Symposium on (pp. 564-569). IEEE.
MacDorman, K. F., Green, R. D., Ho, C., & Koch, C. T. (2009). Too real for comfort? Uncanny
responses to computer generated faces. Computers in Human Behavior, 25, 695–710. doi:
MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive
and social science research. Interaction Studies, 7, 297–337. doi: 10.1075/is.7.3.03mac
Mar, R. A., & Oatley, K. (2008). The function of fiction is the abstraction and simulation of
social experience. Perspectives on Psychological Science, 3, 173–192. doi: 10.1111/j.
Marsh, E. J., Butler, A. C., & Umanath, S. (2012). Using fictional sources in the classroom:
Applications from cognitive psychology. Educational Psychology Review, 24, 449–469.
doi: 10.1007/s10648-012-9204-0
Marsh, E. J., Meade, M. L., & Roediger, H. L. (2003). Learning facts from fiction. Journal of
Memory and Language, 49, 519–536. doi: 10.1016/S0749-596X(03)00092-5
McAdams, D. P. (1985). Power, intimacy and the life story. Homewood, IL: Dorsey.
Miller, M. (2013, January 9). The robots are coming. The Washington Post. Retrieved from
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33–35.
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field].
Robotics & Automation Magazine, IEEE, 19, 98-100.
Newenham, P. (2014, January 6). The robots are coming – and their advance may prove just
irresistible. The Irish Times. Retrieved from
Nomura, T., Kanda, T., Suzuki, T., & Kato, K. (2008). Prediction of Human Behavior in
Human--Robot Interaction Using Psychological Scales for Anxiety and Negative
Attitudes Toward Robots. IEEE Transactions on Robotics, 24, 442–451. doi: 10.1109/
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006, September). Measurement of anxiety
toward robots. In Robot and Human Interactive Communication, 2006. ROMAN 2006.
The 15th IEEE International Symposium on (pp. 372–377). IEEE.
Ogawa, K., Nishio, S., Koda, K., Taura, K., Minato, T., Ishii, C. T., & Ishiguro, H. (2011,
August). Telenoid: tele-presence android for communication. In ACM SIGGRAPH 2011
Emerging Technologies (p. 15). ACM. doi: 10.1145/2048259.2048274
Oliver, M. B., & Raney, A. A. (2011). Entertainment as pleasurable and meaningful: Identifying
hedonic and eudaimonic motivations for entertainment consumption. Journal of
Communication, 61, 984–1004. doi: 10.1111/j.1460-2466.2011.01585.x
Oliver, M. B., Hartmann, T., & Woolley, J. K. (2012). Elevation in Response to Entertainment
Portrayals of Moral Virtue. Human Communication Research, 38, 360–378. doi: 10.1111/
Oliver, M. B., & Bartsch, A. (2010). Appreciation as audience response: Exploring entertainment
gratifications beyond hedonism. Human Communication Research, 36, 53–81. doi:
Polkinghorne, D. E. (1988). Narrative knowing and the human sciences. Albany, NY: State
University of New York Press.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and
comparing indirect effects in multiple mediator models. Behavior Research Methods, 40,
879–891. doi: 10.3758/BRM.40.3.879
Proulx, T., Heine, S. J., & Vohs, K. D. (2010). When is the unfamiliar the uncanny? Meaning
affirmation after exposure to absurdist literature, humor, and art. Personality and Social
Psychology Bulletin, 36, 817–829. doi: 10.1177/0146167210369896
Reichardt, J. (1978). Robots: fact, fiction and prediction. London, UK: Thames & Hudson.
Richter, T., Appel, M., & Calio, F. (2014). Stories can influence the self-concept. Social
Influence, 9, 172-188. doi: 10.1080/15534510.2013.799099
Roddenberry, G. (Creator & Producer) (1966). Star Trek: The Original Series [TV Series].
Rossiter, M. (1999). A narrative approach to development: Implications for adult education.
Adult Education Quarterly, 50, 56–71. doi: 10.1177/07417139922086911
Sarbin, T. R. (1986). Narrative psychology: The storied nature of human conduct. New York,
NY: Praeger/Greenwood Publishing Group.
Schreier, J. (Director) (2012). Robot & Frank [Film]. Dog Run Pictures.
Sestir, M., & Green, M. C. (2010). You are who you watch: Identification and transportation
effects on temporary self-concept. Social Influence, 5, 272–288. doi:
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression
of artificial human faces. Presence: Teleoperators and Virtual Environments, 16, 337–
351. doi: 10.1162/pres.16.4.337
Swift, G. (1983). Waterland. New York, NY: Poseidon Press.
Wakefield, J. (2014, January 13). Singularity: The robots are coming to steal our jobs. BBC
Online. Retrieved from
Watercutter, A. (2012, August 8). Robot & Frank’s vaguely creepy ads want to sell you an
android. Wired. Retrieved from
Yamada, Y., Kawabe, T., & Ihaya, K. (2013). Categorization difficulty is associated with
negative evaluation in the “uncanny valley” phenomenon. Japanese Psychological
Research, 55, 20–32. doi: 10.1111/j.1468-5884.2012.00538.x
Figure 1. The uncanny valley as proposed by Mori (1970) and its hypothesized relationship
between the human-likeness of artificial figures and emotional responses by human interaction
partners (also see Mori, MacDorman, & Kageki, 2012).
Figure 2. In our field experiment, participants interacted with the android robot Telenoid (left),
which was tele-operated by a museum employee (middle). During the interaction, participants
sat on a sofa in one of the museum's exhibition halls (right).
Figure 3. Means and standard errors of the mean for human-likeness (a), eeriness (b),
attractiveness (c), and harmful cognitions (d) under the three experimental conditions.
Figure 4. The relationships between the text treatment (narrative versus non-narrative), attributed
human-likeness, and eeriness of the android robot Telenoid. Regression weights are shown for the
total effect (upper part) and for the model with human-likeness included as a mediator (lower part).
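For readers who wish to reproduce this kind of analysis, the mediation logic depicted in Figure 4 can be sketched as follows. This is a minimal illustration on simulated data, not the study's data: the indirect effect is the product of path a (treatment → human-likeness) and path b (human-likeness → eeriness, controlling for treatment), with a percentile bootstrap confidence interval in the spirit of Preacher and Hayes (2008). All variable names and effect sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 75  # sample size as in the study

# Simulated (illustrative) data -- NOT the study's data:
x = rng.integers(0, 2, n).astype(float)           # 0 = non-narrative text, 1 = story
m = 0.8 * x + rng.normal(0.0, 1.0, n)             # perceived human-likeness
y = -0.6 * m + 0.1 * x + rng.normal(0.0, 1.0, n)  # eeriness rating

def slopes(predictors, outcome):
    """OLS regression slopes (intercept estimated, then dropped)."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1:]

c = slopes([x], y)[0]           # total effect of treatment on eeriness
a = slopes([x], m)[0]           # path a: treatment -> human-likeness
c_prime, b = slopes([x, m], y)  # direct effect c' and path b
indirect = a * b                # mediated (indirect) effect

# Percentile bootstrap CI for the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    a_i = slopes([x[idx]], m[idx])[0]
    b_i = slopes([x[idx], m[idx]], y[idx])[1]
    boot[i] = a_i * b_i
lo, hi = np.percentile(boot, [2.5, 97.5])
```

For OLS estimates the total effect decomposes exactly as c = c' + a·b, mirroring the upper and lower parts of the figure; mediation is supported when the bootstrap interval for a·b excludes zero.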