International Journal of Social Robotics
https://doi.org/10.1007/s12369-020-00675-4
Does Context Matter? Eects ofRobot Appearance andReliability
onSocial Attention Diers Based onLifelikeness ofGaze Task
AbdulazizAbubshait1,2· PatrickP.Weis1,3· EvaWiese1
Accepted: 26 June 2020
© Springer Nature B.V. 2020

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s12369-020-00675-4) contains supplementary material, which is available to authorized users.

* Abdulaziz Abubshait
Abdulaziz.abubshait@iit.it
1 George Mason University, Fairfax, VA, USA
2 Italian Institute of Technology, Genoa, Italy
3 Ulm University, Ulm, Germany
Abstract
Social signals, such as changes in gaze direction, are essential cues to predict others' mental states and behaviors (i.e., mentalizing). Studies show that humans can mentalize with nonhuman agents when they perceive a mind in them (i.e., mind perception). Robots that physically and/or behaviorally resemble humans likely trigger mind perception, which enhances the relevance of social cues and improves social-cognitive performance. The current experiments examine whether the effect of physical and behavioral influencers of mind perception on social-cognitive processing is modulated by the lifelikeness of a social interaction. Participants interacted with robots of varying degrees of physical (humanlike vs. robot-like) and behavioral (reliable vs. random) human-likeness while the lifelikeness of a social attention task was manipulated across five experiments. The first four experiments manipulated lifelikeness via the physical realism of the robot images (Studies 1 and 2), the biological plausibility of the social signals (Study 3), and the plausibility of the social context (Study 4). They showed that humanlike behavior affected social attention whereas appearance affected mind perception ratings. However, when the lifelikeness of the interaction was increased by using videos of a human and a robot sending the social cues in a realistic environment (Study 5), social attention mechanisms were affected both by physical appearance and behavioral features, while mind perception ratings were mainly affected by physical appearance. This indicates that in order to understand the effect of physical and behavioral features on social cognition, paradigms should be used that adequately simulate the lifelikeness of social interactions.
Keywords Gaze-cueing· Social cognition· Human–robot gaze· Mind perception
1 Introduction
Humans make inferences based on observing nonverbal
social behaviors, such as changes in gaze direction, and make
predictions about the intentions underlying these behaviors
[1–3]. Reasoning about internal states occurs when an entity
is believed to have a mind (i.e., mind perception), with the
capability of possessing internal states, such as emotions,
preferences, and intentions [4]. While there is no doubt that
humans can experience internal states, the degree to which
nonhuman entities like robots can trigger mind perception
can depend on the human-likeness of the entity’s physical
appearance and displayed behaviors [5]. Previous studies
have shown that when an entity is believed to “have a mind”
(independent of its actual mind status), more social rele-
vance is ascribed to its nonverbal signals [6]. Specifically,
it was shown that attentional orienting to changes in gaze
direction [7], was more pronounced when gaze signals were
believed to be generated by a human (i.e., an entity with
a mind) as opposed to a non-intentional machine [8–12].
While these studies show a clear link between beliefs about
an agent’s mind status and social-cognitive processing,
they do not inform about potential effects of physical (e.g.,
humanlike appearance), behavioral (e.g., biological motion),
and contextual (e.g., lifelikeness of interaction) features on
mind perception and social cognitive processes. This is crucial for social roboticists in order to understand how to design robots that trigger social-cognitive processes similar
to those engaged by humans. To address this, the current study manipulated physical and behavioral agent features, as well as the lifelikeness of the social interaction, and examined the combined effects of these parameters on mind perception and social-cognitive processing.
To investigate the effects of physical, behavioral, and con-
textual parameters on social cognitive processing, we used a
social attention task that measured the extent to which par-
ticipants orient their attention to a location that is spatially
cued by a face’s change in gaze direction (i.e., gaze cues)
[7]. For this purpose, a face stimulus was presented in the center of the screen; the face first looked straight ahead and then changed its gaze direction to either the left or the right side of the screen, which constitutes the gaze cue. The gaze cue was then followed by a target that participants were asked to respond to as quickly and accurately as possible. Observing gaze cues
shifts the observer’s attention to the gazed-at location, which
results in faster reaction times to targets that are presented at
the gazed-at location (i.e., valid trials) than those opposite of
the gaze cue (i.e., invalid trials). The difference in reaction
times between valid and invalid trials is called the gaze-
cueing effect and its size is indicative of the extent to which
people attend to where an interaction partner is looking
[7]. This task was chosen for four reasons: First, attentional
orienting to gaze signals is a social-cognitive process that
is essential for human development and a prerequisite for
higher-order social-cognitive processes, such as mentalizing
[9, 13, 14]. Second, prior studies have shown that social
attention is sensitive to the perceived social relevance of an
interaction [10, 12, 15–19], and specifically to the degree
to which the gazer is perceived as having a mind [8, 10, 11,
18]. Third, cognitive modeling of nonverbal signals like gaze
cues in nonhuman agents has been a central topic for HRI
since robots that display nonverbal signals can evoke natural
responses from the interacting human [15, 20, 21]. Fourth,
the paradigm allows for the simple manipulation of physi-
cal parameters of the gazer (i.e., humanlike vs. robot-like),
behavioral parameters of the gaze signal (i.e., predictiveness
and biological plausibility), and contextual parameters of the
interaction (i.e., presence of reference objects and lifelike-
ness of the simulation).
1.1 Causes andEects ofMind Perception
Research suggests that mind perception can be manipu-
lated via physical and behavioral agent features, as well
as contextual features of an interaction. Agents that physi-
cally resemble humans are more likely to be perceived as
“having a mind” than actors that appear mechanistic [20, 22–25]. Specifically, when robots have similar physical
characteristics as humans (e.g., humanlike head dimen-
sions) or when their human-likeness is increased by adding
a high percentage of humanness via morphing a human
face into nonhuman faces (e.g., dolls, robots or stuffed
animals), people tend to ascribe a higher mind status to
them [22, 24, 26, 27]. Likewise, people also perceive
“more mind” in other agents when their behavior is pre-
dictable, for instance when an agent’s gaze signals reliably
indicate the location of an upcoming target [28] or when
their behaviors generate unexpected outcomes, for instance
when playing economic games with entities whose human-
likeness is unknown [29]. People are also more likely to
attribute mental states to inanimate objects when they
move at similar speeds as human agents [30], when they
show behavioral patterns reminiscent of human–human
interactions [31, 32] (even when the objects are abstract,
such as triangles [33]), or when they interact with non-
human agents that display negative intentions or violate
social norms, such as robots that cheat during an inter-
active game (e.g., rock-paper-scissors; [34]). Finally,
studies have shown that contextual features of an inter-
action can influence the extent of mind perception. For
example, when the outcome of an interaction is negative,
people attribute more mental capacities to robots [35],
and focusing on the body rather than the face of another
agent changes the dynamic of mind perception such that it
reduces perceptions of the agency component of mind per-
ception (i.e., planning, acting) but increases perceptions of
the experience component (i.e., emotion, sensation [36]).
Physical, behavioral and context features not only affect
mind perception, but have also been shown to change the
social relevance ascribed to others’ actions and conse-
quently modulate social-cognitive processing [11, 12, 37].
Increasing an agent’s physical human-likeness is associ-
ated with enhanced social cognitive processing [20], as
well as increased activation in social brain areas [11], but
it can also have negative consequences when an agent’s
appearance is categorically ambiguous and cannot easily
be classified as “human” or “nonhuman” [26, 38]. With
regard to behavioral factors, robots emulating humanlike
behaviors have a positive effect on social-cognitive pro-
cesses. For example, when robots engage in mutual gaze
(as opposed to looking down) with a human interaction
partner prior to executing a gaze cue, people follow the
signal more strongly resulting in faster responses to gazed-
at targets [15]. Likewise, when observed changes in gaze
direction are perceived as being predictive of a target’s
location, attention orienting in response to these cues becomes spatially more specific, resulting in faster reaction times to targets presented at the gazed-at location [28].
Similarly, studies that manipulate the context in which a
cue is observed show that participants are more likely to
follow a robot’s behavioral cue when a deliberate delay is
introduced that makes the robot’s cues more salient [39]
or when a reference to where an object can appear is added at the time of the gaze shift [17].
1.2 Importance ofLifelikeness When Examining
Mind Perception andSocial Cognition
These studies show that mind perception can be manipulated
through physical, behavioral and contextual features [14,
24, 26], and that all features in isolation modulate certain
aspects of social cognition [10, 11, 23, 28, 40, 41]. However,
in everyday interactions, it is likely that those parameters
do not occur in isolation, requiring research to look at the
combined effects of these factors on social cognition. Of
particular importance for HRI is the question of what hap-
pens when robot appearance and behavior are incongruent,
for instance when a robot looks humanlike but behaves like a
machine (e.g., due to delays or lack of biological motion). As
one of the few studies on this topic, Saygin et al. [42] have shown that while activation in the action-perception network of the brain was not sensitive to the appearance or motion of an agent (humanlike vs. machine-like) per se, being exposed to a mismatch between the human-likeness of an agent's appearance and behavior (e.g., an agent with robot appearance showing biological motion) was associated with a higher prediction error signal, indicating that people expect congruency between physical appearance and behavior and that these two mind perception factors do not work in isolation [42]. Furthermore, Abubshait and Wiese [37] showed
that when being examined in combination, physical and
behavioral agent features seem to affect different aspects of
social cognition than was previously reported: independent
of appearance, an agent whose gaze reliably predicted the
location of a target induced stronger attentional orienting in
response to its gaze signals than an agent whose gaze signals
were non-predictive; in contrast, humanlike versus robot-like
appearance affected subjective mind perception ratings but
did not affect social attention. Taken together, these find-
ings suggest that triggers of mind perception do not work in
isolation but interact in more complex ways and thus need
to be examined in combination in paradigms that sufficiently
simulate the complexity or lifelikeness of social interactions.
1.3 Aim ofStudy
The goal of the current study is to examine (1) how physical
and behavioral agent features affect mind perception and
social attention when being manipulated within the same
paradigm (Experiments 1–4), and (2) whether the effect of
these parameters changes as the lifelikeness of the para-
digm is increased (Experiment 5). Specifically, we wanted
to examine whether effects of physical human-likeness (i.e.,
human vs. robot appearance of the gazer) on mind percep-
tion ratings and behavioral human-likeness (i.e., reliable/
predictive vs. random gaze behavior) on social attention
[37] would interact in their effect on mind perception ratings
and social attention when being presented in more lifelike
interaction scenarios. We hypothesized that at a certain
level of the paradigm’s lifelikeness, both mind perception
ratings and gaze cueing effects would be positively affected
by physical and behavioral human-likeness, instead of just
one of the two parameters. The specific hypotheses can be
found below:
H1: In line with previous studies, gaze-cueing effects
are expected to be modulated by behavioral triggers of
mind perception, such as predictable/reliable gaze behav-
ior compared to random gaze behavior. However, with
increasing levels of lifelikeness, we expect physical trig-
gers of mind perception, such as humanlike compared
to robot-like appearance of the gazer, to also affect gaze
cueing.
H2: In line with previous studies, mind perception rat-
ings are expected to be modulated by physical triggers of
mind perception, such as humanlike compared to robot-
like appearance. However, with increasing levels of life-
likeness, we expect behavioral triggers of mind percep-
tion, such as predictable/reliable gaze behavior compared
to random gaze behavior, to also affect mind perception
ratings. Since the influence of behavioral cues on mind perception ratings can only emerge after the task, we calculated a pre-post interaction mind perception difference score and examined the effect of both physical and behavioral parameters on this difference score.
2 Methods andMaterials
2.1 Experiments
Five experiments manipulated the physical and behavioral
human-likeness of a gazing agent and examined the effects
of these manipulations on mind perception and social atten-
tion in controlled (Experiments 1–4), and more lifelike
(Experiment 5) settings. In the following section, we report
the methods and materials that are common to all experi-
ments and then report the specific variants of each experi-
ment separately.
2.2 Participants
Participants were recruited from the undergraduate student
pool at George Mason University and reimbursed via partici-
pation credits. All participants were at least 18 years old and
reported normal or corrected to normal vision. The research
complies with the APA’s code of ethics and was approved
by the local Ethics Committee at George Mason University.
Participants provided informed consent prior to participa-
tion. 375 individuals were recruited for the five experiments
(75 per experiment), and the data of 314 participants were
included in the final analyses (for details on data rejection,
please see the section of the respective experiment).
2.3 Stimuli
The target stimuli for the gaze-cueing procedure were black
capital letters (F or T), measuring 0.8° in width and 1.3° in
height; targets always appeared on the horizontal axis, and
were located 6.0° from the center of the screen. The gazing
stimuli varied in their degree of human-likeness; they differed between experiments and are described in the Stimuli section of the respective experiment.
2.4 Apparatus
Stimuli were presented at a distance of about 57cm on an
ASUS VB198T-P 19-inch monitor with a resolution of
1280 × 1024 pixels and a refresh rate of 60Hz using Experi-
ment Builder ([43]; in Experiment 1) or MATLAB (version
R2015b; [44]) in combination with the Psychophysics Tool-
box ([45]; in Experiments 2–5). Key press responses were
recorded using a USB-connected standard keyboard.
2.5 Social Attention Task
Participants were asked to respond as fast and accurately as
possible to the identity of target letters (F or T) that appeared
either to the left or the right side of a centrally presented face
(i.e., the gazer) by pressing one of two response keys (“D”
and “K”; marked with stickers “F” and “T”). Prior to the tar-
get presentation, a centrally presented face changed its gaze
direction (i.e., the gaze cue) to either the left or the right side
of the screen, where the target subsequently either would
(i.e., valid trial) or would not (i.e., invalid trial) appear. As
soon as the target appeared, participants were asked to press
the respective key so that reaction times and error rates could
be recorded. To avoid spatial compatibility effects, the letter
“F” was assigned to the “D” key and the letter “T” to the “K”
key for 50% of the participants and vice versa for the other
50% of participants.
Each trial started with the presentation of a fixation cross in the center of the screen for a duration that was jittered between 700 and 1000 ms. Afterwards, the gazer appeared behind the fixation cross and changed its gaze direction either towards the left or the right side of the screen after a jittered interval of 700–1000 ms. This gaze cue was followed by the presentation of the target letter either at the gazed-at location or opposite of the gazed-at location with a certain stimulus onset asynchrony (SOA), which varied between experiments (500 ms for Experiments 1–4; 1000 ms for Experiment 5). The gazer and target remained on the screen until a response was given or a timeout of 1200 ms was reached, whichever came first. The trial was concluded with the presentation of a blank screen for 680 ms (intertrial interval; ITI). See Fig. 1 for the trial sequences of Experiments 1–5.
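For illustration, the trial timeline just described can be sketched as a simple event schedule (timings taken from this section; the real experiments rendered the stimuli with Experiment Builder or the Psychophysics Toolbox):

```python
import random

def run_trial(soa_ms: int = 500) -> list[tuple[float, str]]:
    """Return the (onset_ms, event) schedule of one gaze-cueing trial."""
    t = 0.0
    schedule = [(t, "fixation cross on")]
    t += random.uniform(700, 1000)       # jittered fixation period
    schedule.append((t, "gazer appears (straight gaze)"))
    t += random.uniform(700, 1000)       # jittered straight-gaze period
    cue = random.choice(["left", "right"])
    schedule.append((t, f"gaze cue to the {cue}"))
    t += soa_ms                          # stimulus onset asynchrony
    schedule.append((t, "target letter (F or T) on"))
    t += 1200                            # response timeout
    schedule.append((t, "response window closes"))
    t += 680                             # intertrial interval
    schedule.append((t, "ITI ends, next trial"))
    return schedule

for onset, event in run_trial(soa_ms=500):
    print(f"{onset:7.0f} ms  {event}")
```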
For each experiment, physical human-likeness was manipulated within participants (robot vs. human; see Fig. 2), and cue reliability was altered between participants
(50% vs. 80%). In the 50% reliability condition, 50% of
targets were validly cued and 50% were invalidly cued by
the agent, which appeared random. In the 80% reliability
condition, 80% of targets were validly cued and 20% were
invalidly cued, which appeared predictive.
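A reliability condition of this kind can be implemented by fixing the proportion of valid trials before shuffling the trial list. A sketch, assuming the 160 trials per agent used in the experimental block (the balancing of cue directions and target letters is illustrative, not specified at this level of detail in the paper):

```python
import random

def build_block(n_trials: int = 160, p_valid: float = 0.8) -> list[dict]:
    """Build one agent block with a fixed proportion of validly cued trials."""
    n_valid = round(n_trials * p_valid)
    trials = []
    for i in range(n_trials):
        cue = random.choice(["left", "right"])   # gaze cue direction
        valid = i < n_valid                      # first n_valid trials are valid
        target_side = cue if valid else ("left" if cue == "right" else "right")
        trials.append({"cue": cue, "valid": valid, "target_side": target_side,
                       "letter": random.choice(["F", "T"])})
    random.shuffle(trials)                       # randomize trial order
    return trials

reliable_block = build_block(p_valid=0.8)  # 80% condition: 128 valid, 32 invalid
random_block = build_block(p_valid=0.5)    # 50% condition: 80 valid, 80 invalid
```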
2.6 Procedure
At the beginning of each experiment, participants were wel-
comed and seated in front of a computer screen. After pro-
viding informed consent, they were randomly assigned to
either the 50% or 80% reliability condition and subsequently
started the gaze cueing task. Participants were told to answer
as quickly and as accurately as possible. Participants first
completed a training block consisting of 20 trials, followed
by an experimental block consisting of 320 trials (160 trials
with the humanlike gazer and 160 trials with the robot-like
gazer). The gazing stimulus in the training block differed
from the agents used in the experimental block (i.e., mech-
anistic robot), and the order in which the human and the
robot agent were presented during the experimental block
was counterbalanced across participants. Participants were
allowed to take a short break between blocks.
In order to obtain mind perception measures, participants were presented with images of the two gazers before and after the social attention task and asked to rate each agent's potential of having a mind (i.e., “Do you think this agent has a mind?”) on a 7-point scale (1: definitely not to 7: definitely yes). After completion of the post-interaction agent rating, participants took a demographic survey. Each experiment took about 20–25 min to complete.
2.7 Analysis
Trials with incorrect answers and reaction times deviating
more than 2 standard deviations from the individual mean
were excluded from analysis. The gaze cueing effect was
calculated for each block and each individual. To do so, the
individual reaction time means of invalidly cued trials was
subtracted from the individual reaction time means of val-
idly cued trials of the respective block.
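A sketch of this trial filtering and gaze-cueing computation in pandas (column names are assumptions, not the authors' code):

```python
import pandas as pd

def gaze_cueing_effects(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): subject, agent, valid (bool), correct (bool), rt (ms)."""
    df = df[df["correct"]].copy()                  # drop error trials
    # Drop RTs more than 2 SDs from each participant's mean.
    m = df.groupby("subject")["rt"].transform("mean")
    s = df.groupby("subject")["rt"].transform("std")
    df = df[(df["rt"] - m).abs() <= 2 * s]
    # Gaze-cueing effect: mean RT on invalid minus mean RT on valid trials.
    rt = df.groupby(["subject", "agent", "valid"])["rt"].mean().unstack("valid")
    return (rt[False] - rt[True]).rename("gce").reset_index()
```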
To analyze the influence of physical humanness and reli-
ability on participants’ gaze cueing effect, a 2 × 2 mixed
ANOVA with the within-participants factor physical human-
ness (human, robot) and the between-participants factor reli-
ability (50%, 80%) was conducted separately for each experi-
ment. A 2 × 2 mixed ANOVA with the within-participants
factor Physical Humanness (human, robot) and the between-
participants factor Reliability (50%, 80%) was conducted
to investigate the influence of physical humanness and
reliability on the change in mind ratings of the respective
agents (pre-post assessment). With regards to assumptions,
it should be noted that (1) outliers had already been removed
before conducting the ANOVA, (2) residuals were visually
checked for violating normality assumptions, and (3) homo-
geneity of variance was tested using Levene’s test. Residual
distributions for all ANOVAs conducted showed no signs
of skewness, although some showed signs of platykurtosis. We did not adjust for these signs because platykurtosis will increase the overall variance and thus bias the significance toward a less significant result [46]. The discussed significant results are thus not affected. Violations are reported in the results section of the respective experiment if applicable. In case of violations, we report a nonparametric analogue of the mixed ANOVA using the ezPerm R function (version 4.4-0) to confirm our results [47].

Fig. 1 Gaze Cueing Paradigm: in all experiments, participants were to identify a target letter that was either validly or invalidly cued by an agent's gaze. In Experiments 1, 2 and 3, the gaze cues consisted of a still image (a). The time distribution of the straight gaze varied across experiments (see methods of the respective experiment). In Experiment 4, the gaze cues consisted of a still image, but additionally, possible target locations were indicated with a black frame at the time of the gaze shift (b). In Experiment 5, the gaze cues consisted of a video instead of a still image (c).

Fig. 2 Gazing Stimuli: agents used in Experiments 1, 3 and 4 are shown in (a): the robot agent (top row) is a morphed image that consists of 20% human image and 80% robot image; the human agent (bottom row) is a morphed image that consists of 80% human image and 20% robot image. During the gaze cueing trials, the agents looked either to the left side of the screen (left), straight (middle) or to the right side of the screen (right). In Experiment 2, 100% robot (top row) and 100% human (bottom row) images were used as gazers (b). In Experiment 5, videos of 100% robot and 100% human gazers were used instead of pictures (c). The images presented at the bottom depict the most eccentric gaze (left, right) and straight gaze (middle) shown in the videos.
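The authors ran these analyses in R (ez package). The sketch below mirrors the same pipeline in Python, using pingouin for the 2 × 2 mixed ANOVA, scipy for Levene's tests, and a simple label-permutation test as a simplified stand-in for ezPerm; column and factor names are assumptions.

```python
import numpy as np
import pingouin as pg
from scipy import stats

def mixed_anova_gce(gce):
    """2 x 2 mixed ANOVA on gaze-cueing effects: Physical Humanness
    within participants, Reliability between participants."""
    return pg.mixed_anova(data=gce, dv="gce", within="agent",
                          between="reliability", subject="subject")

def levene_by_agent(gce):
    """Levene's test per Physical Humanness level, Bonferroni-adjusted (x2)."""
    for agent, sub in gce.groupby("agent"):
        groups = [g["gce"].to_numpy() for _, g in sub.groupby("reliability")]
        _, p = stats.levene(*groups)
        print(agent, "adjusted p =", min(p * 2, 1.0))

def perm_p_reliability(gce, n_perm=5000, seed=0):
    """Permutation p value for the between-participants factor,
    a simplified stand-in for the ezPerm analysis used in the paper."""
    rng = np.random.default_rng(seed)
    per_subj = gce.groupby(["subject", "reliability"])["gce"].mean().reset_index()
    vals = per_subj["gce"].to_numpy()
    labels = per_subj["reliability"].to_numpy()
    a, b = np.unique(labels)   # the two reliability conditions
    diff = lambda lab: abs(vals[lab == a].mean() - vals[lab == b].mean())
    observed = diff(labels)
    exceed = sum(diff(rng.permutation(labels)) >= observed for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)
```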
3 Experiment 1
In Experiment 1, morphing was used on a 100% human
image and a 100% robot image to create one gazing stimu-
lus with a high level of physical human-likeness (i.e., con-
sisting of 80% of the human image and 20% of the robot
image) and one gazing stimulus with a low level of physical
human-likeness (i.e., consisting of 20% of the human image
and 80% of the robot image). This manipulation was chosen
to assure that familiarity with human versus robot faces did
not bias the results. The reliability of the depicted gaze cues
was either low (i.e., random or 50%) or high (i.e., predictive
or 80%).
3.1 Participants
75 undergraduate students participated in the experiment.
Ten participants were excluded due to poor task perfor-
mance (i.e. answering incorrectly in more than 20% of the
trials) or missing data, resulting in a final sample size of
65 participants (49 females; mean age: 20.3; range: 18–33;
56 right-handed). Participants were randomly assigned to
either the 80% reliability condition (25 females; mean age:
21.03; range: 18–33; 28 right-handed) or the 50% reliabil-
ity condition (24 females; mean age: 20; range: 18–28; 30
right-handed).
3.2 Stimuli
The human- and robot-like agent images were created by
morphing the image of a human face (i.e., male face from the
Karolinska Institute database; [48]) into the image of a robot
face (i.e., Meka S2 robot head by Meka Robotics) in steps of
10% using the software FantaMorph 5.4.8 (Abrosoft). Out of
this spectrum, the morph with 80% physical humanness was
used as a humanlike gazer and the morph with 20% physi-
cal humanness as a robot-like gazer. The left-and rightward
gazing face stimuli were created by shifting irises and pupils
of the original 100% human and robot faces until they devi-
ated 0.4° from direct gaze (with Photoshop), followed by
another round of morphing as described above for each of
the left- and the rightward gazing faces separately. As a last
step, GIMP was used for all images to touch up any minor
imperfections in the images and to make the sequencing of
the images smooth. The face stimuli were 6.4° wide and
10.0° high on the screen, depicted on a white background
and presented in full frontal orientation with eyes positioned
on the central horizontal axis of the screen; see Fig.2a.
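FantaMorph performs feature-based morphing with hand-placed correspondence points. As a rough illustration of the 80%/20% weighting only, a plain cross-dissolve between two aligned face images can be sketched with Pillow (real morphing additionally warps geometry; file names are hypothetical):

```python
from PIL import Image

human = Image.open("human_face.png").convert("RGB")   # assumed file names;
robot = Image.open("robot_face.png").convert("RGB")   # images must be same size

# Image.blend(a, b, alpha) returns a*(1-alpha) + b*alpha.
humanlike_gazer = Image.blend(robot, human, alpha=0.8)  # 80% human, 20% robot
robotlike_gazer = Image.blend(robot, human, alpha=0.2)  # 20% human, 80% robot
humanlike_gazer.save("morph_80_human.png")
robotlike_gazer.save("morph_20_human.png")
```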
3.3 Results
The mixed 2 × 2 ANOVA with gaze cueing effects as dependent variable revealed that Reliability (F(1, 63) = 6.14, p = .016, ηG² = .05), but not Physical Humanness (F(1, 63) = .29, p = .593, ηG² < .01), had a significant impact on social attention, such that gaze cueing effects were significantly larger for reliable than random gaze cues. The Reliability × Physical Humanness interaction was not significant (F(1, 63) = .35, p = .559, ηG² < .01); see Fig. 3a. The mixed 2 × 2 ANOVA with pre-post difference in mind perception ratings as a dependent variable revealed that Physical Humanness (F(1, 63) = 24.91, p < .001, ηG² = .13), but not Reliability (F(1, 63) = 1.10, p = .298, ηG² = .01), had a significant impact on mind ratings, such that mind ratings generally increased for the robot gazer but decreased for the human gazer after the gaze cueing task. The Reliability × Physical Humanness interaction did not reach significance (F(1, 63) = .28, p = .600, ηG² < .01); see Fig. 4a.

Gaze cueing variance between the high and low reliability groups was not equal for the robot level of physical human-likeness, as indicated by a Levene's test (F(1, 63) = 5.97, p = 0.035; the p value was adjusted using the Bonferroni procedure because two Levene's tests, one for each Physical Humanness level, were conducted). We therefore ran a nonparametric alternative for the mixed ANOVA with gaze cueing effects as dependent variable, which confirmed the significant main effect of Reliability (p = .020), as well as the insignificant effects of Physical Humanness and Reliability × Physical Humanness (both p > .5).
3.4 Discussion
The results of this experiment show that physical and behav-
ioral parameters associated with human-likeness exert
independent effects on mind perception ratings and social
attention: physical human-likeness exclusively affected
mind perception ratings, such that mind perception rat-
ings for the robot agent increased after the gaze cueing task
and decreased for the human agent, whereas cue reliability
exclusively affected social attention, such that reliable gaze
behavior induced larger gaze cueing effects than random
gaze behavior. No interaction between the two parameters
was observed in Experiment 1.
4 Experiment 2
In Experiment 2, the procedure of Experiment 1 was
repeated with the 100% human and 100% robot image to
assure that the results in Experiment 1 were not due to the
morphed nature of the images, which could reduce their life-
likeness and induce feelings of discomfort associated with
the 80% morph (as hypothesized by studies on the Uncanny
Valley; see [49]); cue reliability was again set at 50% or 80%.
4.1 Participants
75 undergraduate students participated in the experiment.
Eight participants were excluded due to poor task per-
formance (i.e. answering incorrectly in more than 20%
of the trials) and two due to missing data, resulting in
a final sample size of 65 participants (50 females; mean
age: 20.3; range: 18–29; 59 right handed). Participants
were randomly assigned to either the 80% reliability con-
dition (23 females; mean age: 20.3; range: 18–27; 29 right-
handed) or the 50% reliability condition (27 females; mean
age: 20.3; range: 18–29; 30 right-handed).
Fig. 3 Gaze Cueing Effects as a function of physical (human vs. robot) and behavioral features (random vs. reliable): Patterns in gaze cueing were similar for Experiment 1 (morphed images: 80% robot and 80% human; a), Experiment 2 (original images: 100% robot and 100% human; b), Experiment 3 (recorded human gaze behavior displayed on 80% robot and 80% human morph; c) and Experiment 4 (spatial marker in periphery with 80% robot and 80% human morph; d): gaze cueing effects were affected by behavioral features, but not by physical features. In Experiment 5 (videos of 100% robot and 100% human as gazing stimuli; e), an interaction effect between physical and behavioral features was found, such that gaze cueing effects were largest for videos of reliable human gazers and smallest for random robot gazers.

Fig. 4 Changes in Mind Ratings (pre- vs. post-gaze cueing) as a function of physical (human vs. robot) and behavioral (random vs. reliable) features: Patterns in mind rating differences before and after interacting with the agents were comparable for Experiment 1 (morphed images: 80% robot and 80% human; a), Experiment 2 (original images: 100% robot and 100% human; b), Experiment 3 (recorded human gaze behavior displayed on 80% robot and 80% human morph; c), Experiment 4 (spatial marker in periphery with 80% robot and 80% human morph; d) and Experiment 5 (videos of 100% robot and 100% human as gazers; e): mind ratings decreased for all agents with human appearance and increased for all agents with robot appearance; the gazer's reliability during the gaze cueing task did not have an impact on mind rating difference scores.
4.2 Stimuli
As gazing stimuli, the 100% human and 100% robot base
images were used; Fig.2b.
4.3 Results
The mixed 2 × 2 ANOVA with gaze cueing effects as dependent variable revealed that Reliability (F(1, 63) = 4.64, p = .035, ηG² = .05), but not Physical Humanness (F(1, 63) = 1.12, p = .293, ηG² < .01), had a significant impact on gaze cueing effects. The Reliability × Physical Humanness interaction did not reach significance (F(1, 63) = 1.48, p = .229, ηG² < .01); see Fig. 3b. The mixed 2 × 2 ANOVA with pre-post difference in mind perception ratings as a dependent variable revealed that Physical Humanness (F(1, 63) = 8.41, p = .005, ηG² = .07), but not Reliability (F(1, 63) < .01, p = .940, ηG² < .01), had a significant impact on mind perception ratings. The Reliability × Physical Humanness interaction did not reach significance (F(1, 63) < .01, p = .994, ηG² < .01); see Fig. 4b.
4.4 Discussion
The results of Experiment 2 replicate the findings of Experi-
ment 1, showing that mind perception ratings are exclusively
influenced by physical human-likeness and gaze cueing
effects are exclusively influenced by behavioral human-like-
ness. The results also show that the lifelikeness of the gazing
stimuli themselves did not impact the results, since the same
pattern of results was observed for morphed (i.e., 80% and
20% humanlike morphs; Experiment 1) and realistic (i.e.,
100% human and robot images; Experiment 2) images.
5 Experiment 3
The goal of Experiment 3 was to examine whether changing the lifelikeness of a gazer's eye movements would modulate the previously reported findings. In order to do so, we recorded eye movement patterns from a human volunteer pretending to take the role of the gazer in the gaze cueing task using an eye tracker and replayed the timing of the eye movements on the gazing stimulus during the experiment. (This manipulation was chosen based on previous research that has shown that people are highly sensitive in differentiating biological from non-biological motion [50, 51].) Cue reliability was again set at 50% or 80%.
5.1 Participants
75 undergraduate students participated in the experiment.
Seven participants were excluded due to poor task performance (i.e., answering incorrectly in more than 20% of the trials) and six due to missing data (e.g., because participants used the wrong keys), resulting in a
final sample size of 62 participants (46 females; mean age:
20.2; range: 18–38; 57 right handed). Participants were ran-
domly assigned to either the 80% reliability condition (22
females; mean age: 19.4; range: 18–38; 29 right-handed) or
the 50% reliability condition (20 females; mean age: 21.0;
range: 18–25; 28 right-handed).
5.2 Stimuli
The agent images were identical to the ones used in Experiment 1; see Fig. 2a.
5.3 Trial Sequence
The trial sequence was identical to Experiment 1, with one exception: the time the agent took from looking straight to looking to the side of the screen was not drawn from a uniform distribution but from a mean-adjusted distribution collected from a human volunteer (the first author of this paper: AA). The distribution was obtained using a MATLAB script that recorded the time needed to shift the gaze from a central fixation cross towards a laterally presented target letter using an EyeLink 1000 eye-tracker [52] sampling at 1000 Hz. 320 trials were collected to mirror the distribution needed for the 320 trials in the experiment. On a descriptive level, the distribution was more similar to a normal distribution than to a uniform distribution (as was the case in Experiments 1 and 2). After centering the distribution on the mean of the uniform distribution used for the robot agent, i.e. on 850 ms, values ranged from 750 to 1090 ms. The gaze response latencies used for the experiment were drawn from this mean-adjusted “human” gaze response distribution and can be inspected in Fig. S1. The trial sequence is depicted in Fig. 1a.
5.4 Results
The mixed 2 × 2 ANOVA with gaze cueing effects as a dependent variable revealed that Reliability (F(1, 60) = 10.15, p = .002, ηG² = .10), but not Physical Humanness (F(1, 60) = .27, p = .603, ηG² < .01), had a significant impact on gaze cueing effects; the Reliability × Physical Humanness interaction did not reach significance (F(1, 60) = 2.70, p = .106, ηG² = .02); see Fig. 3c. The mixed 2 × 2 ANOVA with pre-post differences in mind perception ratings as a dependent variable revealed that Physical Humanness (F(1, 60) = 5.55, p = 0.022, ηG² = .03), but not Reliability (F(1, 60) = .03, p = .855, ηG² < .01), had a significant impact on mind ratings; the Reliability × Physical Humanness interaction did not reach significance (F(1, 60) = 1.14, p = .290, ηG² < .01); see Fig. 4c.

Gaze cueing variance between the high and low reliability groups was not equal for the robot level of physical human-likeness as indicated by a Levene's test (F(1, 55) = 5.61, p = 0.042; the p value was adjusted using the Bonferroni procedure because two Levene's tests, one for each Physical Humanness level, were conducted). We therefore ran a nonparametric alternative for the mixed ANOVA on gaze cueing effects, which confirmed the main effect of Reliability (p = .028), as well as the insignificance of the main effect of Physical Humanness and the interaction term (both p > .2).
5.5 Discussion
The results of Experiment 3 replicate the findings of Experi-
ments 1 and 2, again showing an isolated effect of physical
human-likeness on mind ratings and behavioral human-like-
ness on gaze cueing effects, indicating that the lifelikeness
of the observed eye movements does not significantly impact
the pattern of results.
6 Experiment 4
The goal of Experiment 4 was to examine whether the life-
likeness of the context in which a social exchange takes
place potentially modulates previous findings. One known
issue with the gaze cueing paradigm that could reduce the
perceived lifelikeness of the interaction is that changes in
gaze direction are not tied to changes in the environment
but are directed at empty space where subsequently a target
appears (on valid trials) or not (on invalid trials). In reality,
however, changes in gaze direction usually occur in response
to a triggering event, for instance a loud sound or the appear-
ance of a person or an object. To increase the lifelikeness of
the interaction, we added abstract objects in the environment
that were already present at the time when the face changed
its gaze direction and could serve as spatial markers to which
the gaze cue could refer (and which became the location at
which the targets appeared later). Cue reliability was again
set at 50% or 80%.
6.1 Participants
75 undergraduate students participated in the experiment.
Twelve participants were excluded due to poor task performance (i.e., answering incorrectly in more than 20% of the trials) and six because of technical issues (e.g., pressing the wrong response keys), resulting in a final sample size of 57 participants (46 females; mean age: 20.1; range: 18–29; 50 right-
handed). Participants were randomly assigned to either the
80% reliability condition (24 females; mean age: 20.1; range:
18–29; 27 right-handed) or the 50% reliability condition (22
females; mean age: 20.1; range: 18–29; 23 right handed).
6.2 Stimuli
The agent images were identical to the ones used in Experiment 1; see Fig. 2a.
6.3 Trial Sequence
The trial sequence was identical to Experiment 1 with one exception: when shifting its gaze, the agent did not look towards empty space but towards a placeholder that indicated the two locations at which the target could subsequently appear. The frames appeared together with the fixation cross at the beginning of each trial and disappeared during the ITI. The trial sequence is depicted in Fig. 1b.
6.4 Results
The mixed 2 × 2 ANOVA with gaze cueing effects as a dependent variable revealed that Reliability (F(1, 55) = 10.59, p = .002, ηG² = .13), but not Physical Humanness (F(1, 55) = .57, p = .453, ηG² < .01), had a significant impact on gaze cueing effects; the Reliability × Physical Humanness interaction did not reach significance (F(1, 55) = .08, p = .784, ηG² < .01); see Fig. 3d. The mixed 2 × 2 ANOVA with pre-post difference in mind perception ratings as a dependent variable revealed that Physical Humanness (F(1, 55) = 13.93, p < .001, ηG² = .08), but not Reliability (F(1, 55) = .31, p = .582, ηG² < .01), had a significant impact on mind ratings; the Reliability × Physical Humanness interaction did not reach significance (F(1, 55) = 1.97, p = .17, ηG² = .01); see Fig. 4d.
6.5 Discussion
The results of Experiment 4 replicate the findings of Experiments 1–3, again showing an isolated effect of physical human-likeness on mind ratings and behavioral human-likeness on gaze cueing effects, indicating that the lifelikeness of the context in which a social exchange takes place does not significantly impact the pattern of previous results.
7 Experiment 5
Experiments 2–4 showed that increasing lifelikeness of the
interaction paradigm by using stimuli that are physically
realistic, that move their eyes with humanlike timing or
whose gaze cues refer to objects in visual space in a mean-
ingful way was not impactful enough to change the pattern
of results. In all previous experiments, observers interacted
with static images of human or humanlike gazers, which is
very unlike lifelike social interactions with other humans.
To increase the perceived lifelikeness of the social attention
task as a whole, we used video recordings of a human and a
robot agent as gazing stimuli instead of static images. Cue
reliability was again set at 50% or 80%.
7.1 Participants
75 undergraduate students participated in the experiment.
Eight participants were excluded due to poor task perfor-
mance (i.e., answering incorrectly in more than 20% of
the trials) and two due to missing data, resulting in a final
sample size of 65 participants (51 females; mean age: 19.9;
range: 18–30; 58 right handed). Participants were randomly
assigned to either the 80% reliability condition (26 females;
mean age: 19.9; range: 18–30; 30 right-handed) or the 50%
reliability condition (25 females; mean age: 19.8; range:
18–25; 28 right-handed).
7.2 Stimuli
Video sequences simulating gaze cues of a human and a
robot agent were recorded: for the robot condition, cues to
the left and right were recorded from the humanoid Meka
S2 robot head; for the human condition, cues to the left and
right were recorded from a human, the second author PPW.
All videos were cut such that the first frame showed the gazing agents with straight gaze (Fig. 2c, middle), the gaze shift was completed within 1000 ms, and the last frame's gaze was of maximal eccentricity (Fig. 2c, left and right). On top of the gaze cues, both human and robot videos included head cues of comparable strength.
7.3 Trial Sequence
The trial sequence was kept as similar as possible to Experiment 1. Each trial started with the presentation of a fixation cross at the center of the screen for a duration drawn from values uniformly distributed between 700 and 1000 ms. Afterwards, the agent, as appearing in the first frame of the respective video, appeared behind the fixation cross for a duration drawn from values uniformly distributed between 200 and 500 ms. Subsequently, the video was played for 1000 ms, during which the agent changed its gaze towards either the left or the right side of the screen, thereby either validly or invalidly cueing the location of the subsequently presented target letter. When the video finished playing, the last frame froze and the target letter was presented at the left or the right side of the screen. The last frame and the target remained on the screen until a response was given or 1200 ms had passed. The trial was concluded with a blank screen presented for 680 ms. The trial sequence is depicted in Fig. 1c.
7.4 Results
In contrast to previous experiments, the mixed 2 × 2 ANOVA with gaze cueing effects as a dependent variable revealed that Reliability (F(1, 63) = 4.71, p = .034, ηG² = .04) and Physical Humanness (F(1, 63) = 12.05, p < .001, ηG² = .08) had a significant impact on gaze cueing effects; the Reliability × Physical Humanness interaction was trending towards significance but did not reach it (F(1, 63) = 2.96, p = .090, ηG² = .02); see Fig. 3e. Again in contrast to previous findings, the mixed 2 × 2 ANOVA with pre-post differences in mind perception ratings as a dependent variable revealed that neither Physical Humanness (F(1, 63) = 2.37, p = .129, ηG² = .02) nor Reliability (F(1, 63) < .01, p = .958, ηG² < .01) had a significant impact on mind ratings. The Reliability × Physical Humanness interaction did not reach significance (F(1, 63) = 1.31, p = .257, ηG² = .01); see Fig. 4e.

Gaze cueing variance between the high and low reliability groups was not equal for both levels of physical human-likeness (human and robot) as indicated by Levene's tests (Human: F(1, 63) = 6.61, p = 0.025; Robot: F(1, 63) = 5.41, p = 0.047; the p values were adjusted using the Bonferroni procedure because two Levene's tests, one for each Physical Humanness level, were conducted). We therefore ran a nonparametric alternative for the mixed ANOVA with gaze cueing effects as a dependent variable, which confirmed the main effects of Reliability (p = .030) and Physical Humanness (p < .001), as well as a trend for the interaction term (p = 0.105).
7.5 Discussion
The results of Experiment 5 show that changing the lifelike-
ness of the interaction scenario as a whole by using dynamic
videos instead of static images changes the pattern of results
such that physical and behavioral markers of human-like-
ness now both affect gaze cueing effects independently, with
larger cueing effects for the human versus robot gazer, as well as for reliable versus random gaze cues (with no inter-
action effects between the two components). In contrast,
physical human-likeness does not significantly impact pre-
post interaction changes in mind perception anymore. The
implications of these findings are discussed below.
8 General Discussion
This study aimed to investigate how factors that, independ-
ent of each other, have been related to mind perception, such
as physical human-likeness and predictable behavior, affect
mind perception ratings and social attention mechanisms
as a function of the interaction’s lifelikeness. For that pur-
pose, we manipulated physical, behavioral and contextual
parameters that were thought to manipulate the lifelikeness
of a social interaction scenario. In Experiment 1, which con-
stituted the baseline, we looked at the influence of physical
appearance (human morph vs. robot morph) and gaze pre-
dictivity on mind perception ratings and gaze-cueing effects
without specifically manipulating lifelikeness. In Experi-
ment 2, the lifelikeness of the gazer was manipulated by
using a 100% human face and a 100% robot face as opposed
to morphed images. Experiment 3 manipulated the lifelike-
ness of the gaze signal by modeling the onset of the gaze
cues after a real human’s cue onsets, thereby incorporating
biological eye movements (i.e., right and left gaze changes)
into the paradigm. Experiment 4 manipulated the lifelike-
ness of the context by adding reference objects (i.e., place
holders) to the gaze cueing paradigm that were already pre-
sent at the time of the gaze change, as gaze changes in real
life usually are targeted at reference objects in the environ-
ment and not at empty space (like in traditional gaze cue-
ing paradigms). Experiments 1–4 revealed similar results, such that the behavioral component (i.e., reliability of the gaze cue) affected social attention but not mind perception, whereas the physical component (i.e., appearance of the gazer) affected mind perception but not social attention. (Interestingly, gaze-cueing effects in Experiment 4 were descriptively larger overall; this is not surprising, as previous studies have shown that including contextual information has a positive effect on gaze-cueing effects [17].)
Only when the lifelikeness of the overall interaction was changed by using videos of an actual human and an actual robot first engaging in mutual gaze and then performing gaze cues did the pattern of results change: both gaze reliability and
physical appearance now had an influence on social atten-
tion, such that gaze cueing effects were larger for human ver-
sus robot gazers and reliable versus random gaze behaviors;
in contrast, pre-post mind perception ratings were neither
affected by physical appearance nor by gaze reliability.
The experiments outline two important findings with
regard to the effects of physical, behavioral and contextual
effects on mind perception and social attention: First, behav-
ioral features, such as the reliability of gaze signals, robustly
modulated social attention across experiments, whereas
physical appearance only had an effect when the interac-
tion seemed sufficiently lifelike (through the use of video
sequences). This replicates findings from previous studies
showing that even very basic social-cognitive processes like
gaze cueing can be top-down modulated by social context
information [53], and highlights that certain top-down mod-
ulators, such as the physical appearance of an agent, might
only exert their effect in relatively lifelike interactions. This
observation also provides some clarity regarding the ongoing debate in the literature as to whether manipulations related to mind perception and/or mentalizing have an effect on social attention [54] or not [55]. The current study suggests that
there is an interaction between top-down and bottom-up
mechanisms influencing social attention, but that the top-
down component might only take effect in sufficiently real-
istic paradigms (see also [56]). Although the current study
does not maximize lifelikeness to the same extent as other
studies where the gaze cues are sent by a real human actor
sitting opposite of the participant (e.g., [57]), it indicates that
a certain level of lifelikeness needs to be reached before vari-
ous context factors start modulating social attention. Where
exactly this level is located and whether different context
factors require different levels of lifelikeness should be the
focus of future studies.
Second, physical agent features, such as the human-like-
ness of a gazer’s appearance, modulated pre-post mind per-
ception changes in more controlled versions of the paradigm
(Experiments 1–4) but not under relatively lifelike interac-
tion conditions using videos (Experiment 5); behavioral
parameters, such as the reliability of gaze cues, never mod-
ulated mind perception ratings. One explanation as to why
reliability did not modulate mind perception ratings could
be that completing the gaze-cueing task with very reliable
agents diminished participants’ need for anthropomorphiz-
ing nonhuman agents, which resulted in mind ratings that
were not different from those for agents whose gaze behavior
was random. In other words, maybe a certain level of uncer-
tainty is needed in order to strongly trigger mind perception.
This interpretation is supported by prior work suggesting that agents displaying very predictable actions decrease our need to understand their behaviors and consequently trigger less anthropomorphizing/mind perception [58].
The current study is consistent with previous literature
illustrating the importance of using ecologically valid para-
digms when investigating social cognition [59]. While prior
work shows that robots may not be able to reflexively shift human attention in computer-based paradigms [20], face-to-face gaze-cueing paradigms using real robots as gazers illustrate that robots can in fact reflexively shift human attention (like human gazers) when the surrounding is sufficiently
lifelike [15]. Prior literature also shows that different brain
regions are activated during social attention depending on
whether highly controlled, offline paradigms or face-to-face,
online paradigms are employed, that is: traditional fMRI
studies identify brain regions in the right hemifield (e.g.,
STS, ACC, TPJ) as important neural correlates of social
attention [60], whereas studies that use dynamic face-to-face
paradigms implicate similar structures in the left hemifield
[61], suggesting that some social-cognitive processes may
not be sufficiently activated in highly controlled experi-
ments (see [59]; for detailed arguments for the necessity
to examine social cognition “online”). Other studies using
VR-based paradigms showed that joint attention not only
consists of directing others’ attention to important objects
or events in the environment (i.e., other-representations) but
also requires another essential mechanism, that is, engag-
ing in mutual gaze to signal the readiness for joint atten-
tion (i.e., self-representations) [59, 62, 63] —an insight that
traditional gaze cueing paradigms were unable to uncover.
Consistent with these observations, the current study shows
that the effect of physical and behavioral parameters on
social attention might change depending on the lifelikeness
of the paradigm. Although “online” social cognition para-
digms are more challenging to design and implement than
“offline” paradigms (e.g., additional programming require-
ments, access to embodied robot platforms, more involved
study approval processes), it is important to examine social
cognitive processes in settings that are similar enough to real
interactions in order to draw firm conclusions regarding the
impact of potential modulating factors. Future studies should
increase the lifelikeness of social attention paradigms in HRI
even more, for instance by using embodied robot platforms
instead of video recordings; see [64–67].
In conclusion, this study illustrates the importance of
using methods to mimic real-life gaze interaction in investi-
gating social gaze whenever possible. This is of the utmost
relevance for social roboticists since the goal is to design
social robots that are equipped with means to display social
human behaviors and evoke both natural and intuitive reac-
tions from the humans that interact with these robots.
Acknowledgements We would like to acknowledge the hard-working
research assistants that helped collect our sample.
Authors' contributions AA and EW conceptualized the study. PW pro-
grammed the experiments. AA and PW collected and analyzed the data.
AA, EW, and PW interpreted the results and wrote the manuscript.
Funding This study was not funded by any grants.
Compliance with Ethical Standards
Conflict of interest The authors declare that they have no conflict of
interest.
References
1. Adolphs R (1999) Social cognition and the human brain. Trends Cogn Sci 3(12):469–479. https://doi.org/10.1016/s1364-6613(99)01399-6
2. Emery NJ (2000) The eyes have it: the neuroethology, function and evolution of social gaze. Neurosci Biobehav Rev 24(6):581–604. https://doi.org/10.1016/S0149-7634(00)00025-7
3. Gallagher HL, Frith CD (2003) Functional imaging of 'theory of mind'. Trends Cogn Sci 7(2):77–83. https://doi.org/10.1016/S1364-6613(02)00025-6
4. Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619. https://doi.org/10.1126/science.1134475
5. Waytz A, Gray K, Epley N, Wegner DM (2010) Causes and consequences of mind perception. Trends Cogn Sci 14(8):383–388. https://doi.org/10.1016/j.tics.2010.05.006
6. Wiese E, Metta G, Wykowska A (2017) Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front Psychol 8:1663. https://doi.org/10.3389/fpsyg.2017.01663
7. Friesen CK, Kingstone A (1998) The eyes have it! Reflexive orienting is triggered by non-predictive gaze. Psychon Bull Rev 5(3):490–495. https://doi.org/10.3758/BF03208827
8. Özdem C, Wiese E, Wykowska A, Müller H, Brass M, Van Overwalle F (2016) Believing androids—fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. Soc Neurosci 12(5):582–593. https://doi.org/10.1080/17470919.2016.1207702
9. Teufel C, Fletcher PC, Davis G (2010) Seeing other minds: attributed mental states influence perception. Trends Cogn Sci 14(8):376–382. https://doi.org/10.1016/j.tics.2010.05.005
10. Wiese E, Wykowska A, Zwickel J, Müller HJ (2012) I see what you mean: how attentional selection is shaped by ascribing intentions to others. PLoS ONE 7(9):e45391. https://doi.org/10.1371/journal.pone.0045391
11. Wiese E, Buzzell GA, Abubshait A, Beatty PJ (2018) Seeing minds in others: mind perception modulates low-level social-cognitive performance and relates to ventromedial prefrontal structures. Cogn Affect Behav Neurosci 18(5):837–856. https://doi.org/10.3758/s13415-018-0608-2
12. Wykowska A, Wiese E, Prosser A, Müller HJ (2014) Beliefs about the minds of others influence how we process sensory information. PLoS ONE 9(4):e94339. https://doi.org/10.1371/journal.pone.0094339
13. Nummenmaa L, Calder AJ (2009) Neural mechanisms of social attention. Trends Cogn Sci 13(3):135–143. https://doi.org/10.1016/j.tics.2008.12.006
14. Pfeiffer UJ, Timmermans B, Bente G, Vogeley K, Schilbach L (2011) A non-verbal Turing test: differentiating mind from machine in gaze-based social interaction. PLoS ONE 6(11):e27591. https://doi.org/10.1371/journal.pone.0027591
15. Kompatsiari K, Ciardo F, Tikhanoff V, Metta G, Wykowska A (2018) On the role of eye contact in gaze cueing. Sci Rep 8(1):17842. https://doi.org/10.1038/s41598-018-36136-2
16. Perez-Osorio J, Müller HJ, Wiese E, Wykowska A (2015) Gaze following is modulated by expectations regarding others' action goals. PLoS ONE 10(11):e0143614. https://doi.org/10.1371/journal.pone.0143614
17. Wiese E, Zwickel J, Müller HJ (2013) The importance of context information for the spatial specificity of gaze cueing. Atten Percept Psychophys 75(5):967–982. https://doi.org/10.3758/s13414-013-0444-y
18. Caruana N, de Lissa P, McArthur G (2017) Beliefs about human agency influence the neural processing of gaze during joint attention. Soc Neurosci 12(2):194–206. https://doi.org/10.1080/17470919.2016.1160953
19. Deaner RO, Shepherd SV, Platt ML (2007) Familiarity accentuates gaze cuing in women but not men. Biol Lett 3(1):64–67. https://doi.org/10.1098/rsbl.2006.0564
20. Admoni H, Bank C, Tan J, Toneva M, Scassellati B (2011) Robot gaze does not reflexively cue human attention. In: Proceedings of the 33rd annual conference of the cognitive science society, pp 1983–1988
21. Admoni H, Scassellati B (2017) Social eye gaze in human–robot interaction: a review. J Hum Robot Interact 6(1):25. https://doi.org/10.5898/JHRI.6.1.Admoni
22. DiSalvo CF, Gemperle F, Forlizzi J, Kiesler S (2002) All robots are not created equal: the design and perception of humanoid robot heads. In: Conference on designing interactive systems: processes, practices, methods, and techniques, pp 321–326. https://doi.org/10.1145/778712.778756
23. Kiesler S, Powers A, Fussell SR, Torrey C (2008) Anthropomorphic interactions with a robot and robot-like agent. Soc Cogn 26(2):169–181. https://doi.org/10.1521/soco.2008.26.2.169
24. Looser CE, Wheatley T (2010) The tipping point of animacy: how, when, and where we perceive life in a face. Psychol Sci 21(12):1854–1862. https://doi.org/10.1177/0956797610388044
25. Tung F (2011) Influence of gender and age on the attitudes of children towards humanoid robots. Hum Comput Interact IV. https://doi.org/10.1007/978-3-642-21619-0_76
26. Martini M, Buzzell G, Wiese E (2015) Agent appearance modulates mind attribution and social attention in human–robot interaction. Soc Robot 1:431–439. https://doi.org/10.1007/978-3-319-25554-5
27. Wiese E, Weis PP (2020) It matters to me if you are human—examining categorical perception in human and nonhuman agents. Int J Hum Comput Stud 133:1–12. https://doi.org/10.1016/j.ijhcs.2019.08.002
28. Wiese E, Wykowska A, Müller HJ (2014) What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues. PLoS ONE 9(4):e94529. https://doi.org/10.1371/journal.pone.0094529
29. Morewedge CK (2009) Negativity bias in attribution of external agency. J Exp Psychol Gen 138(4):535–545. https://doi.org/10.1037/a0016796
30. Morewedge CK, Preston J, Wegner DM (2007) Timescale bias in the attribution of mind. J Pers Soc Psychol 93(1):1–11. https://doi.org/10.1037/0022-3514.93.1.1
31. Abell F, Happé F, Frith U (2000) Do triangles play tricks? Attribution of mental states to animated shapes in normal and abnormal development. Cogn Dev 15(1):1–16. https://doi.org/10.1016/S0885-2014(00)00014-9
32. Castelli F, Happé F, Frith U, Frith C (2013) Movement and mind: a functional imaging study of perception and interpretation of complex intentional movement patterns. In: Social neuroscience: key readings, pp 155–170. https://doi.org/10.4324/9780203496190
33. Heider F, Simmel M (1944) An experimental study of apparent behavior. Am J Psychol 57(2):243–259. https://doi.org/10.1017/CBO9781107415324.004
34. Short E, Hart J, Vu M, Scassellati B (2010) No fair!! An interaction with a cheating robot. In: 2010 5th ACM/IEEE international conference on human–robot interaction (HRI), pp 219–226. https://doi.org/10.1109/HRI.2010.5453193
35. Ciardo F, Beyer F, De Tommaso D, Wykowska A (2020) Attribution of intentional agency towards robots reduces one's own sense of agency. Cognition 194:104109. https://doi.org/10.1016/j.cognition.2019.104109
36. Gray K, Knobe J, Sheskin M, Bloom P, Barrett LF (2011) More than a body: mind perception and the nature of objectification. J Pers Soc Psychol 101(6):1207–1220. https://doi.org/10.1037/a0025883
37. Abubshait A, Wiese E (2017) You look human, but act like a machine: agent appearance and behavior modulate different aspects of human–robot interaction. Front Psychol 8:1393. https://doi.org/10.3389/fpsyg.2017.01393
38. Mandell AR, Smith MA, Martini MC, Shaw TH, Wiese E (2015) Does the presence of social agents improve cognitive performance on a vigilance task? Int Conf Soc Robot 1:421–430
39. Admoni H, Dragan A, Srinivasa SS, Scassellati B (2014) Deliberate delays during robot-to-human handovers improve compliance with gaze communication. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction (HRI '14), pp 49–56. https://doi.org/10.1145/2559636.2559682
40. Hungr CJ, Hunt AR (2012) Physical self-similarity enhances the gaze-cueing effect. Q J Exp Psychol 65(7):1250–1259. https://doi.org/10.1080/17470218.2012.690769
41. Waytz A, Heafner J, Epley N (2014) The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J Exp Soc Psychol 52:113–117. https://doi.org/10.1016/j.jesp.2014.01.005
42. Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C (2012) The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc Cogn Affect Neurosci 7:413–422. https://doi.org/10.1093/scan/nsr025
43. SR Research (2004) Experiment Builder
44. The MathWorks Inc (2015) MATLAB
45. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10(4):433–436. https://doi.org/10.1163/156856897X00357
46. Field AP (2005) Discovering statistics using SPSS (and sex, drugs and rock'n'roll). SAGE, Thousand Oaks
47. Feys J (2016) Nonparametric tests for the interaction in two-way factorial designs using R. R J 8(1):367. https://doi.org/10.32614/RJ-2016-027
48. Lundqvist D, Flykt A, Öhman A (1998) The Karolinska directed emotional faces (KDEF). Department of Clinical Neuroscience, Psychology Section, Karolinska Hospital, S-171 76
49. Kätsyri J, Förger K, Mäkäräinen M, Takala T (2015) A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness. Front Psychol 6:1–16. https://doi.org/10.3389/fpsyg.2015.00390
50. Giese MA, Poggio T (2003) Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 4(3):179–192. https://doi.org/10.1038/nrn1057
51. Grossman ED, Blake R (2002) Brain areas active during visual perception of biological motion. Neuron 35:9
52. SR Research (2010) EyeLink 1000
53. Ristic J, Kingstone A (2005) Taking control of reflexive social attention. Cognition 94(3):B55–B65. https://doi.org/10.1016/j.cognition.2004.04.005
54. Gobel MS, Tufft MRA, Richardson DC (2017) Social beliefs and visual attention: how the social relevance of a cue influences spatial orienting. Cogn Sci 42:161–185. https://doi.org/10.1111/cogs.12529
55. Kingstone A, Kachkovski G, Vasilyev D, Kuk M, Welsh TN (2019) Mental attribution is not sufficient or necessary to trigger attentional orienting to gaze. Cognition 189:35–40. https://doi.org/10.1016/j.cognition.2019.03.010
56. Capozzi F, Ristic J (2020) Attention and mentalizing? Reframing a debate on social orienting of attention. Vis Cogn 28:97–105. https://doi.org/10.1080/13506285.2020.1725206
57. Cole GG, Skarratt PA, Kuhn G (2016) Real person interaction in visual attention research. Eur Psychol 21(2):141–149. https://doi.org/10.1027/1016-9040/a000243
58. Waytz A, Morewedge CK, Epley N, Monteleone G, Gao J-H, Cacioppo JT (2010) Making sense by making sentient: effectance motivation increases anthropomorphism. J Pers Soc Psychol 99(3):410–435. https://doi.org/10.1037/a0020240
59. Schilbach L, Timmermans B, Reddy V, Costall A, Bente G, Schlicht T, Vogeley K (2013) Toward a second-person neuroscience. Behav Brain Sci 36(4):393–414. https://doi.org/10.1017/S0140525X12000660
60. Redcay E, Kleiner M, Saxe R (2012) Look at this: the neural correlates of initiating and responding to bids for joint attention. Front Hum Neurosci 6:169. https://doi.org/10.3389/fnhum.2012.00169
61. Lachat F, Conty L, Hugueville L, George N (2012) Gaze cueing effect in a face-to-face situation. J Nonverbal Behav 36(3):177–190. https://doi.org/10.1007/s10919-012-0133-x
62. Caruana N, Brock J, Woolgar A (2015) A frontotemporoparietal network common to initiating and responding to joint attention bids. NeuroImage 108:34–46. https://doi.org/10.1016/j.neuroimage.2014.12.041
63. Schilbach L, Wilms M, Eickhoff SB, Romanzetti S, Tepest R, Bente G, Shah NJ, Fink GR, Vogeley K (2010) Minds made for sharing: initiating joint attention recruits reward-related neurocircuitry. J Cogn Neurosci 22(12):2702–2715. https://doi.org/10.1162/jocn.2009.21401
64. Thellman S, Silvervarg A, Gulz A, Ziemke T (2016) Physical vs. virtual agent embodiment and effects on social interaction. In: Traum D, Swartout W, Khooshabeh P, Kopp S, Scherer S, Leuski A (eds) Intelligent virtual agents, vol 10011. Springer, Berlin, pp 412–415. https://doi.org/10.1007/978-3-319-47665-0_44
65. Mollahosseini A, Abdollahi H, Sweeny TD, Cole R, Mahoor MH (2018) Role of embodiment and presence in human perception of robots' facial cues. Int J Hum Comput Stud 116:25–39. https://doi.org/10.1016/j.ijhcs.2018.04.005
66. Wainer J, Feil-Seifer DJ, Shell DA, Mataric MJ (2007) Embodiment and human–robot interaction: a task-based perspective. In: RO-MAN 2007—the 16th IEEE international symposium on robot and human interactive communication, pp 872–877. https://doi.org/10.1109/ROMAN.2007.4415207
67. Lee KM, Jung Y, Kim J, Kim SR (2006) Are physically embodied social agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people's loneliness in human–robot interaction. Int J Hum Comput Stud 64(10):962–973. https://doi.org/10.1016/j.ijhcs.2006.05.002
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Abdulaziz Abubshait is a postdoc at the Italian Institute of Technology in Genova, Italy. He received his PhD in Human Factors and Applied Cognition from George Mason University in 2019. His research investigates the dynamics of human–robot social interactions.
Patrick Weis is a postdoc at Nicolaus Copernicus University in Toruń. In 2019, he received his PhD in Human Factors and Applied Cognition from George Mason University. He also received an MS in Neuroscience from the University of Tübingen in 2014.
Eva Wiese is an Associate Professor in Human Factors and Applied Cognition and the head of the Social and Cognitive Interactions Lab at George Mason University. She has a PhD in Neuroscience from Ludwig Maximilian University Munich and an MS in Psychology from Otto-Friedrich University Bamberg. Eva's research interests focus on mind perception and embodied cognition and their application to social robotics and cognitive offloading.